CS2401 COMPUTER GRAPHICS
STUDY MATERIAL
S.PRABHU, ASSISTANT PROFESSOR
INDEX
1. SYLLABUS
2. OUTPUT PRIMITIVES
3. 3D CONCEPTS
4. GRAPHICS PROGRAMMING
5. RENDERING
6. FRACTALS
7. QUESTION BANK
8. ROAD MAP
9. DIAGRAMS
UNIT I 2D PRIMITIVES
Output primitives: Line, Circle and Ellipse drawing algorithms - Attributes of output primitives - Two-dimensional geometric transformations - Two-dimensional viewing - Line, Polygon, Curve and Text clipping algorithms.
UNIT II 3D CONCEPTS
Parallel and Perspective projections - Three-dimensional object representations: Polygons, Curved lines, Splines, Quadric surfaces - Visualization of data sets - 3D transformations - Viewing - Visible-surface identification.
UNIT III GRAPHICS PROGRAMMING
Color models: RGB, YIQ, CMY, HSV - Animations: General computer animation, Raster, Keyframe - Graphics programming using OPENGL - Basic graphics primitives - Drawing three-dimensional objects - Drawing three-dimensional scenes.
UNIT IV RENDERING
Introduction to shading models - Flat and Smooth shading - Adding texture to faces - Adding shadows of objects - Building a camera in a program - Creating shaded objects - Rendering texture - Drawing shadows.
UNIT V FRACTALS
Fractals and self-similarity - Peano curves - Creating images by iterated functions - Mandelbrot sets - Julia sets - Random fractals - Overview of ray tracing - Intersecting rays with other primitives - Adding surface texture - Reflections and transparency - Boolean operations on objects.
TEXT BOOKS:
1. Donald Hearn, Pauline Baker, Computer Graphics C Version, Second Edition, Pearson Education, 2004.
2. F.S. Hill, Computer Graphics using OPENGL, Second Edition, Pearson Education, 2003.
UNIT I
OUTPUT PRIMITIVES
A picture can be described in several ways. It may be specified as the set of pixels in a raster display, or it can be described as a set of complex objects, such as trees and terrain or furniture and walls.
Output Primitives
Graphics programming packages provide functions to describe a scene in terms of basic geometric structures, referred to as output primitives, and to group sets of output primitives into more complex structures.
Each output primitive is specified with input coordinate data and other information about the way that object is to be displayed.
POINTS AND LINES
Shapes and colors of the objects can be described internally with pixel arrays or
with sets of basic geometric structures.
Such as
straight line segments and
polygon color areas.
The scene is then displayed either by loading the pixel arrays into the frame buffer
or by scan converting the basic geometric-structure specifications into pixel
patterns.
Points and straight line segments are the simplest geometric components of
pictures.
Additional output primitives that can be used to construct a picture include
circles and other conic sections,
quadric surfaces,
spline curves and surfaces,
polygon color areas, and
character strings.
POINTS
Point plotting is accomplished by converting a single coordinate position furnished
by an application program into appropriate operations for the output device in use.
With a CRT monitor, for example, the electron beam is turned on to illuminate the
screen phosphor at the selected location.
How the electron beam is positioned depends on the display technology.
Random-Scan (Vector) Systems
A random-scan system stores point-plotting instructions in the display list, and coordinate values in these instructions are converted to deflection voltages that position the electron beam at the screen locations to be plotted during each refresh cycle.
RGB Systems
In an RGB system, the frame buffer is loaded with the codes for the intensities that are to be displayed at the screen pixel positions.
LINES
Line drawing is accomplished by calculating intermediate positions along the line
path between two specified endpoint positions.
An output device is then directed to fill in these positions between the endpoints.
Analog Display Devices
For analog devices, such as a vector pen plotter or a random-scan display, a
straight line can be drawn smoothly from one endpoint to the other.
Linearly varying horizontal and vertical deflection voltages are generated that are
proportional to the required changes in the x and y directions to produce the
smooth line.
LINE-DRAWING ALGORITHMS
The Cartesian slope-intercept equation for a straight line is

y = m·x + b    (1)

with m representing the slope of the line and b the y intercept. Given that the two endpoints of a line segment are specified at positions (x1, y1) and (x2, y2), we can determine values for the slope m and y intercept b with the following calculations:

m = (y2 − y1) / (x2 − x1)    (2)

b = y1 − m·x1    (3)

Algorithms for displaying straight lines are based on the line equation (1) and the calculations given in equations (2) and (3). For any given x interval Δx along a line, we can compute the corresponding y interval Δy from equation (2) as

Δy = m·Δx    (4)

Similarly, the x interval Δx corresponding to a specified Δy is

Δx = Δy / m    (5)
These equations form the basis for determining deflection voltages in analog devices.
For lines with slope magnitudes |m| < 1, Δx can be set proportional to a small horizontal deflection voltage, and the corresponding vertical deflection is then set proportional to Δy as calculated from equation (4).
For lines whose slopes have magnitudes |m| > 1, Δy can be set proportional to a small vertical deflection voltage, with the corresponding horizontal deflection voltage set proportional to Δx, calculated from equation (5).
For lines with m = 1, Δx = Δy and the horizontal and vertical deflection voltages are equal.
In each case, a smooth line with slope m is generated between the specified endpoints.
On raster systems, lines are plotted with pixels, and step sizes in the horizontal and
vertical directions are constrained by pixel separations.
That is, we must "sample" a line at discrete positions and determine the nearest
pixel to the line at each sampled position.
This scan-conversion process for straight lines is shown in the following figure.
The figure shows a near-horizontal line sampled at discrete positions along the x axis.
DDA Algorithm
The digital differential analyzer (DDA) is a scan-conversion line algorithm based on calculating either Δy or Δx.
We sample the line at unit intervals in one coordinate and determine the corresponding integer values nearest the line path for the other coordinate.
Consider first a line with positive slope, as shown in the figure. If the slope is less than or equal to 1, we sample at unit x intervals (Δx = 1) and compute each successive y value as

yk+1 = yk + m

Subscript k takes integer values starting from 1, for the first point, and increases by 1 until the final endpoint is reached.
If this processing is reversed, so that the starting endpoint is at the right, then we have either Δx = −1 with yk+1 = yk − m, or (when the slope magnitude is greater than 1) Δy = −1 with xk+1 = xk − 1/m.
That is, when the start endpoint is at the right (for the same slope), we set Δx = −1. Similarly, when the absolute value of a negative slope is greater than 1, we use Δy = −1.
Advantages
The DDA algorithm is a faster method for calculating pixel positions than direct use of the line equation y = m·x + b.
It eliminates the multiplication in the line equation, so that appropriate increments are applied in the x or y direction to step to pixel positions along the line path.
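The sampling scheme above can be sketched as follows (a minimal Python illustration, not from the text; the function name and the rounding choice are my own):

```python
def dda_line(x0, y0, x_end, y_end):
    """DDA: sample at unit intervals in the coordinate with the larger
    delta, and round the other coordinate to the nearest integer."""
    dx, dy = x_end - x0, y_end - y0
    steps = max(abs(dx), abs(dy))   # number of unit steps along the line
    if steps == 0:
        return [(x0, y0)]
    # One increment is +/-1; the other is +/-m or +/-1/m.
    x_inc, y_inc = dx / steps, dy / steps
    x, y = float(x0), float(y0)
    points = []
    for _ in range(steps + 1):
        points.append((round(x), round(y)))
        x += x_inc
        y += y_inc
    return points
```

Note that the repeated floating-point additions can accumulate round-off error over long line segments, which is one motivation for the integer-only Bresenham algorithm that follows.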
Assuming we have determined that the pixel at (xk, yk) is to be displayed, we next need to decide which pixel to plot in column xk+1.
Our choices are the pixels at positions (xk+1, yk) and (xk+1, yk+1).
The y coordinate on the mathematical line at pixel column position xk+1 is calculated as

y = m(xk + 1) + b

Then

d1 = y − yk = m(xk + 1) + b − yk

and

d2 = (yk + 1) − y = yk + 1 − m(xk + 1) − b
The first decision parameter, p0, is evaluated at the starting pixel position (x0, y0) and with m evaluated as Δy/Δx:

p0 = 2Δy − Δx
ALGORITHM
1. Input the two line endpoints and store the left endpoint in (x0, y0).
2. Load (x0, y0) into the frame buffer; that is, plot the first point.
3. Calculate the constants Δx, Δy, 2Δy and 2Δy − 2Δx, and obtain the starting value for the decision parameter as
P0 = 2Δy − Δx.
4. At each xk along the line, starting at k = 0, perform the following test:
If pk < 0, the next point to plot is (xk + 1, yk) and
Pk+1 = pk + 2Δy.
Otherwise, the next point to plot is (xk + 1, yk + 1) and
Pk+1 = pk + 2Δy − 2Δx.
5. Repeat step 4 Δx times.
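The five steps above can be sketched in Python (a minimal illustration for the 0 < m < 1 case that the steps cover; the function name is my own):

```python
def bresenham_line(x0, y0, x_end, y_end):
    """Bresenham's line algorithm for slope 0 < m < 1, using the
    integer decision parameter p0 = 2*dy - dx from step 3."""
    dx, dy = x_end - x0, y_end - y0
    p = 2 * dy - dx
    x, y = x0, y0
    points = [(x, y)]                 # step 2: plot the first point
    for _ in range(dx):               # step 5: repeat step 4 dx times
        x += 1
        if p < 0:                     # step 4: stay on the same row
            p += 2 * dy
        else:                         # step 4: move up one row
            y += 1
            p += 2 * dy - 2 * dx
        points.append((x, y))
    return points
```

For example, the segment from (20, 10) to (30, 18) has Δx = 10, Δy = 8 and p0 = 6; the loop then steps through eleven pixel positions ending at (30, 18).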
CIRCLE-DRAWING ALGORITHMS
A circle is defined as the set of points that are all at a given distance r from a center position (xc, yc).
Another way to eliminate the unequal spacing shown in the above figure is to calculate points along the circular boundary using polar coordinates r and θ.
Expressing the circle equation in parametric polar form yields the pair of equations

x = xc + r cos θ
y = yc + r sin θ
Thus, the circle function is the decision parameter in the midpoint algorithm, and
we can set up incremental calculations for this function as we did in the line
algorithm.
The above figure shows the midpoint between the two candidate pixels at
Sampling position xk + 1.
Our decision parameter is the circle function (equation 3) evaluated at the midpoint between these two pixels:

pk = fcircle(xk + 1, yk − 1/2) = (xk + 1)² + (yk − 1/2)² − r²
Evaluation of the terms 2xk+1 and 2yk+1 can also be done incrementally as

2xk+1 = 2xk + 2
2yk+1 = 2yk − 2
ALGORITHM
1. Input radius r and circle center (xc, yc), and obtain the first point on the circumference of a circle centered on the origin as
(x0, y0) = (0, r).
2. Calculate the initial value of the decision parameter as
p0 = (5/4) − r.
3. At each xk position, starting at k = 0, perform the following test:
If pk < 0, the next point along the circle centered on (0, 0) is (xk + 1, yk) and
pk+1 = pk + 2xk+1 + 1.
Otherwise, the next point along the circle is (xk + 1, yk − 1) and
pk+1 = pk + 2xk+1 + 1 − 2yk+1,
where 2xk+1 = 2xk + 2 and 2yk+1 = 2yk − 2.
4. Determine symmetry points in the other seven octants.
5. Move each calculated pixel position (x, y) onto the circular path centered on (xc, yc) and plot the coordinate values
x = x + xc, y = y + yc.
6. Repeat steps 3 through 5 until x >= y.
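The steps above can be sketched as follows (a minimal Python illustration; for an integer radius the initial value 5/4 − r is conventionally rounded to the integer 1 − r, and the function name is my own):

```python
def midpoint_circle(xc, yc, r):
    """Midpoint circle algorithm: walk one octant from (0, r) and
    mirror each point into the other seven octants (step 4)."""
    x, y = 0, r
    p = 1 - r                         # integer form of p0 = 5/4 - r
    pixels = set()
    while x <= y:                     # step 6: one octant only
        # eight-way symmetry, shifted onto the centre (xc, yc) (step 5)
        for px, py in [(x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)]:
            pixels.add((xc + px, yc + py))
        x += 1
        if p < 0:                     # midpoint inside the circle
            p += 2 * x + 1
        else:                         # midpoint outside: drop one row
            y -= 1
            p += 2 * x + 1 - 2 * y
    return pixels
```

For r = 10 the octant positions generated are (0,10), (1,10), (2,10), (3,10), (4,9), (5,9), (6,8) and (7,7).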
ELLIPSE-DRAWING ALGORITHMS
An ellipse is an elongated circle.
Therefore, elliptical curves can be generated by modifying circle-drawing
procedures to take into account the different dimensions of an ellipse along the
major and minor axes.
Properties of Ellipses
An ellipse is defined as the set of points such that the sum of the distances from two fixed positions (foci) is the same for all points.
If the distances to the two foci from any point P = (x, y) on the ellipse are labeled d1 and d2, then the general equation of an ellipse can be stated as

d1 + d2 = constant
Major Axes
The major axis is the straight line segment extending from one side of the ellipse
to the other through the foci.
Minor Axes
The minor axis spans the shorter dimension of the ellipse, bisecting the major axis
at the halfway position (ellipse center) between the two foci.
Polar Coordinates
Using polar coordinates r and θ, we can also describe the ellipse in standard position with the parametric equations

x = xc + rx cos θ
y = yc + ry sin θ
Symmetry considerations
Symmetry considerations can be used to further reduce computations.
An ellipse in standard position is symmetric between quadrants, but unlike a circle,
it is not symmetric between the two octants of a quadrant.
Thus, we must calculate pixel positions along the elliptical arc throughout one
quadrant, then we obtain positions in the remaining three quadrants by symmetry
as in the diagram.
Thus, the ellipse function fellipse(x, y) serves as the decision parameter in the midpoint algorithm.
At each sampling position, we select the next pixel along the ellipse path according
to the sign of the ellipse function evaluated at the midpoint between the two
candidate pixels.
The ellipse slope is calculated from equation (1) as

dy/dx = −(2 ry² x) / (2 rx² y)

At the boundary between region 1 and region 2, the slope is −1 and 2 ry² x = 2 rx² y.
Following figure shows the midpoint between the two candidate pixels at sampling
position xk + 1 in the first region.
Assuming position (xk, yk) has been selected at the previous step, we determine the next position along the ellipse path by evaluating the decision parameter at this midpoint:

p1k = fellipse(xk + 1, yk − 1/2) = ry² (xk + 1)² + rx² (yk − 1/2)² − rx² ry²
Over region 2, we sample at unit steps in the negative y direction, and the midpoint
is now taken between horizontal pixels at each step.
When we enter region 2, the initial position (x0, y0) is taken as the last position selected in region 1, and the initial decision parameter in region 2 is then

p20 = fellipse(x0 + 1/2, y0 − 1) = ry² (x0 + 1/2)² + rx² (y0 − 1)² − rx² ry²

ALGORITHM
1. Input rx, ry, and ellipse center (xc, yc), and obtain the first point on the circumference of an ellipse centered on the origin as
(x0, y0) = (0, ry).
2. Calculate the initial value of the decision parameter in region 1 as
p10 = ry² − rx² ry + (1/4) rx².
3. At each xk position in region 1, starting at k = 0, perform the following test:
If p1k < 0, the next point along the ellipse centered on (0, 0) is (xk + 1, yk) and
p1k+1 = p1k + 2 ry² xk+1 + ry².
Otherwise, the next point along the ellipse is (xk + 1, yk − 1) and
p1k+1 = p1k + 2 ry² xk+1 − 2 rx² yk+1 + ry²,
with
2 ry² xk+1 = 2 ry² xk + 2 ry² and 2 rx² yk+1 = 2 rx² yk − 2 rx²,
and continue until 2 ry² x >= 2 rx² y.
4. Calculate the initial value of the decision parameter in region 2 using the last point (x0, y0) calculated in region 1 as
p20 = ry² (x0 + 1/2)² + rx² (y0 − 1)² − rx² ry².
5. At each yk position in region 2, starting at k = 0, perform the following test:
If p2k > 0, the next point along the ellipse centered on (0, 0) is (xk, yk − 1) and
p2k+1 = p2k − 2 rx² yk+1 + rx².
Otherwise, the next point along the ellipse is (xk + 1, yk − 1) and
p2k+1 = p2k + 2 ry² xk+1 − 2 rx² yk+1 + rx²,
using the same incremental calculations for x and y as in region 1.
6. Determine symmetry points in the other three quadrants.
7. Move each calculated pixel position (x, y) onto the elliptical path centered on (xc, yc) and plot the coordinate values
x = x + xc, y = y + yc
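The two-region procedure above can be sketched in Python (a minimal illustration; the function name is my own, and the decision parameters are kept as floats to match the 1/4 and 1/2 terms in the formulas):

```python
def midpoint_ellipse(xc, yc, rx, ry):
    """Midpoint ellipse algorithm: region 1 steps in x until
    2*ry^2*x >= 2*rx^2*y, region 2 then steps down in y."""
    rx2, ry2 = rx * rx, ry * ry
    points = set()

    def plot(x, y):
        # four-way symmetry about the centre (step 6/7)
        for sx in (1, -1):
            for sy in (1, -1):
                points.add((xc + sx * x, yc + sy * y))

    # Region 1: start at (0, ry) with p10 = ry^2 - rx^2*ry + rx^2/4
    x, y = 0, ry
    p = ry2 - rx2 * ry + 0.25 * rx2
    while 2 * ry2 * x < 2 * rx2 * y:
        plot(x, y)
        x += 1
        if p < 0:
            p += 2 * ry2 * x + ry2
        else:
            y -= 1
            p += 2 * ry2 * x - 2 * rx2 * y + ry2

    # Region 2: restart the parameter from the last region-1 point
    p = ry2 * (x + 0.5) ** 2 + rx2 * (y - 1) ** 2 - rx2 * ry2
    while y >= 0:
        plot(x, y)
        y -= 1
        if p > 0:
            p += -2 * rx2 * y + rx2
        else:
            x += 1
            p += 2 * ry2 * x - 2 * rx2 * y + rx2
    return points
```

For rx = 8, ry = 6 this generates the first-quadrant positions (0,6), (1,6), (2,6), (3,6), (4,5), (5,5), (6,4), (7,3) in region 1 and (8,2), (8,1), (8,0) in region 2.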
A plot of the selected positions around the ellipse boundary within the first quadrant is shown in Fig. 3-23.

ATTRIBUTES OF OUTPUT PRIMITIVES
Any parameter that affects the way a primitive is to be displayed is referred to as an attribute parameter. Some attribute parameters, such as
color and
size,
determine the fundamental characteristics of a primitive.
Others specify how the primitive is to be displayed under special conditions.
For example, lines can be
dotted or dashed,
fat or thin, and
blue or orange.
LINE ATTRIBUTES
Basic attributes of a straight line segment are its
type,
its width, and
its color.
In some graphics packages, lines can also be displayed using selected pen or brush
options
Line Type
Possible selections for the line-type attribute include
solid lines,
dashed lines,
and dotted lines
We modify a line drawing algorithm to generate such lines by setting the length
and spacing of displayed solid sections along the line path.
Solid Line
Dotted Line
Dashed Line
Dash-Dotted Line
Line Width
Implementation of line-width options depends on the capabilities of the output
device.
A heavy line on a video monitor could be displayed as adjacent parallel lines, whereas a pen plotter might require pen changes.
A line-width command is used to set the current line-width value in the attribute list.
This value is then used by line-drawing algorithms to control the thickness of lines.
We set the line-width attribute with the command:
SetLinewidthScaleFactor(lw);
Pen and Brush Options
Lines can also be displayed with selected pen or brush options. These include
shape,
size, and
pattern.
Some possible pen or brush shapes are given in the following figure.
Line Color
When a system provides color (or intensity) options, a parameter giving the current
color index is included in the list of system-attribute values.
A polyline routine displays a line in the current color by setting this color value in
the frame buffer at pixel locations along the line path using the setpixel procedure.
The number of color choices depends on the number of bits available per pixel in
the frame buffer.
The function is
SetPolylineColorIndex(lc)
where lc is an integer value representing the color parameter.
CURVE ATTRIBUTES
Parameters for curve attributes are the same as those for line segments.
We can display curves with varying colors, widths, dot-dash patterns, and
available pen or brush options.
Methods for adapting curve-drawing algorithms to accommodate attribute
selections are similar to those for line drawing.
Method for displaying thick curves is to fill in the area between two parallel curve
paths, whose separation distance is equal to the desired width.
A minimum number of colors can be provided with 3 bits of storage per pixel, as shown in the following table.
Each of the three bit positions is used to control the intensity level (either on or
off) of the corresponding electron gun in an RGB monitor.
Color tables are an alternate means for providing extended color capabilities,
without requiring large frame buffers.
In particular, raster systems often use color tables to reduce frame-buffer storage requirements.
Grayscale
With monitors that have no color capability, color functions can be used in an
application program to set the shades of gray, or grayscale, for displayed
primitives.
Numeric values over the range from 0 to 1 can be used to specify grayscale levels,
which are then converted to appropriate binary codes for storage in the raster.
This allows the intensity settings to be easily adapted to systems with differing
grayscale capabilities.
The following table lists the specifications of intensity codes for a four-level grayscale system.
In this example, any intensity input value near 0.33 would be stored as the binary
value 01 in the frame buffer, and pixels with this value would be displayed as dark
gray.
With 3 bits per pixel, we can accommodate 8 gray levels, while 8 bits per pixel would give us 256 shades of gray.
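The intensity-to-code mapping just described can be sketched as follows (a small Python illustration, not from the text; the function name is my own):

```python
def grayscale_code(intensity, bits=2):
    """Map an intensity in [0, 1] to the nearest binary code for a
    system with 2**bits gray levels (4 levels when bits=2, as in
    the four-level table above)."""
    levels = 2 ** bits
    return min(levels - 1, round(intensity * (levels - 1)))
```

With the default two bits, an input near 0.33 maps to code 1 (binary 01, dark gray), matching the example in the text.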
AREA-FILL ATTRIBUTES
Options for filling a defined region include a choice between a solid color or a
patterned fill.
These fill options can be applied to polygon regions or to areas defined with
curved boundaries.
In addition, areas can be painted using various brush styles, colors, and
transparency parameters.
Fill Styles
Areas are displayed with three basic fill styles: hollow with a color border, filled with a solid color, or filled with a specified pattern or design.
Another value for fill style is hatch, which is used to fill an area with selected hatching patterns - parallel lines or crossed lines.
CHARACTER ATTRIBUTES
The appearance of displayed characters is controlled by attributes such as
font,
size,
color, and orientation.
Attributes can be set both for entire character strings (text) and for individual
characters defined as marker symbols.
Text Attribute
There are a great many text options that can be made available to graphics
programmers.
First of all, there is the choice of font (or typeface), which is a set of characters with a particular design style, such as
Arial,
Courier,
Impact,
TimesNewRoman, and various special symbol groups.
The characters in a selected font can also be displayed with assorted styles:
Bold face
Underline
Italics
The corresponding function for setting font is
SetTextFont();
Color settings for displayed text are stored in the system attribute list.
SetTextColorIndex(tc)
Where tc specifies the color code.
We can adjust text size by scaling the overall dimensions (height and width) of
characters or by scaling only the character width.
2D TRANSFORMATION
The basic geometric transformations are
translation,
rotation, and
scaling.
Other transformations that are often applied to objects include
reflection and
shear.
Translation
A translation is applied to an object by repositioning it along a straight-line path
from one coordinate location to another.
We translate a two-dimensional point by adding translation distances tx and ty to the original coordinate position (x, y) to move the point to a new position (x', y'):

x' = x + tx,    y' = y + ty

The translation distance pair (tx, ty) is called a translation vector or shift vector.
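The translation equations can be sketched directly (a trivial Python illustration; the function name is my own):

```python
def translate(points, tx, ty):
    """Move each point by the translation vector (tx, ty):
    x' = x + tx, y' = y + ty."""
    return [(x + tx, y + ty) for x, y in points]
```

Translating a polygon simply applies this to every vertex, which is why translation is a rigid-body transformation: the object keeps its shape and orientation.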
ROTATION
A two-dimensional rotation is applied to an object by repositioning it along a
circular path in the xy plane.
To generate a rotation, we specify a rotation angle θ and the position (x1, y1) of the rotation point (or pivot point) about which the object is to be rotated.
Positive values for the rotation angle define counterclockwise rotations about the
pivot point, as in Fig, and negative values rotate objects in the clockwise direction.
This transformation can also be described as a rotation about a rotation axis that is
perpendicular to the xy plane and passes through the pivot point.
We first determine the transformation equations for rotation of a point position P
when the pivot point is at the coordinate origin.
The angular and coordinate relationships of the original and transformed point
positions are shown in Fig.
In this figure, r is the constant distance of the point from the origin, angle φ is the original angular position of the point from the horizontal, and θ is the rotation angle.
Using standard trigonometric identities, we can express the transformed coordinates in terms of angles θ and φ as

x' = r cos(φ + θ) = r cos φ cos θ − r sin φ sin θ
y' = r sin(φ + θ) = r cos φ sin θ + r sin φ cos θ    (1)

The original coordinates of the point in polar coordinates are

x = r cos φ,    y = r sin φ    (2)

Substituting equation (2) into (1), we obtain the transformation equations for rotating a point at position (x, y) through an angle θ about the origin:

x' = x cos θ − y sin θ
y' = x sin θ + y cos θ    (3)
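The rotation equations can be sketched as follows (a minimal Python illustration of equation (3); the function name is my own):

```python
import math

def rotate_about_origin(x, y, theta):
    """Rotate (x, y) through angle theta (radians) about the origin:
    x' = x cos(theta) - y sin(theta), y' = x sin(theta) + y cos(theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return x * c - y * s, x * s + y * c
```

Rotating the point (1, 0) through 90° (π/2 radians) moves it to (0, 1), up to floating-point round-off.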
SCALING
A scaling transformation alters the size of an object.
This operation can be carried out for polygons by multiplying the coordinate values (x, y) of each vertex by scaling factors sx and sy to produce the transformed coordinates (x', y'):

x' = x · sx,    y' = y · sy    (5)

Scaling factor sx scales objects in the x direction, while sy scales in the y direction.
The transformation equations (5) can also be written in the matrix form

[x']   [sx  0 ] [x]
[y'] = [0   sy] [y]    (6)

or P' = S · P, where S is the 2 by 2 scaling matrix.
The following figure shows a square (a) changed into a rectangle (b) with scaling factors sx = 2 and sy = 1.
The next figure illustrates scaling a line by assigning the value 0.5 to both sx and sy in equation (6).
Both the line length and the distance from the origin are reduced by a factor of 1/2.
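The scaling equations can be sketched as follows (a trivial Python illustration; the function name is my own):

```python
def scale(points, sx, sy):
    """Scale each point relative to the origin: x' = x*sx, y' = y*sy.
    Factors below 1 both shrink the object and move it closer to
    the origin, as the 0.5-scaled line in the figure shows."""
    return [(x * sx, y * sy) for x, y in points]
```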
COMPOSITE TRANSFORMATIONS
With the matrix representations of the previous section, we can set up a matrix for
any sequence of transformations as a composite transformation matrix by
calculating the matrix product of the individual transformations.
Forming products of transformation matrices is often referred to as a
concatenation, or composition, of matrices.
HOMOGENEOUS COORDINATES
The term homogeneous is used in mathematics to refer to the effect of this
representation on Cartesian equations.
When a Cartesian point (x, y) is converted to a homogeneous representation (xh, yh, h), equations containing x and y, such as f(x, y) = 0, become homogeneous equations in the three parameters xh, yh, and h.
Expressing positions in homogeneous coordinates allows us to represent all
geometric transformation equations as matrix multiplications.
Coordinates are represented with three-element column vectors, and
transformation operations are written as 3 by 3 matrices.
For translation, we have

[x']   [1  0  tx] [x]
[y'] = [0  1  ty] [y]
[1 ]   [0  0  1 ] [1]

which we can write in the abbreviated form

P' = T(tx, ty) · P
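With all transformations in 3 by 3 homogeneous form, composite transformations reduce to matrix products, as described in the composite-transformations section above. A minimal Python sketch (function names are my own):

```python
def mat3_mult(a, b):
    """3x3 matrix product a*b, used to concatenate transformations."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation_matrix(tx, ty):
    """Homogeneous translation matrix T(tx, ty)."""
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def apply(m, x, y):
    """Transform the homogeneous column vector (x, y, 1)."""
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Two successive translations concatenate into a single translation:
m = mat3_mult(translation_matrix(2, 3), translation_matrix(4, 5))
```

Applying m to the origin gives (6, 8), i.e. T(2, 3)·T(4, 5) = T(6, 8).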
OTHER TRANSFORMATIONS
REFLECTION
A reflection is a transformation that produces a mirror image of an object.
The mirror image for a two-dimensional reflection is generated relative to an axis of reflection by rotating the object 180° about the reflection axis.
Reflection about the line y = 0 (the x axis) is accomplished with the transformation matrix

[1   0  0]
[0  -1  0]
[0   0  1]

This transformation keeps x values the same, but "flips" the y values of coordinate positions.
The resulting orientation of an object after it has been reflected about the x axis is shown in Fig.
A reflection about the y axis flips x coordinates while keeping y coordinates the same.
The matrix for this transformation is

[-1  0  0]
[ 0  1  0]
[ 0  0  1]

The following figure illustrates the change in position of an object that has been reflected about the line x = 0.
We flip both the x and y coordinates of a point by reflecting relative to an axis that is perpendicular to the xy plane and that passes through the coordinate origin.
This transformation, referred to as a reflection relative to the coordinate origin, has the matrix representation

[-1   0  0]
[ 0  -1  0]
[ 0   0  1]
SHEAR
A transformation that distorts the shape of an object such that the transformed
shape appears as if the object were composed of internal layers that had been
caused to slide over each other is called a shear.
Two common shearing transformations are those that shift coordinate x values and
those that shift y values.
An x-direction shear relative to the x axis is produced with the transformation matrix

[1  shx  0]
[0   1   0]
[0   0   1]

which transforms coordinate positions as
x' = x + shx · y,    y' = y.
A y-direction shear relative to the line x = xref is generated with the transformation matrix

[ 1    0   0        ]
[shy   1  -shy·xref ]
[ 0    0   1        ]

which generates the transformed coordinate positions
x' = x,    y' = y + shy (x − xref).
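An x-direction shear can be sketched as follows (a minimal Python illustration; the function name is my own):

```python
def x_shear(points, shx):
    """x-direction shear relative to the x axis:
    x' = x + shx * y, y' = y."""
    return [(x + shx * y, y) for x, y in points]
```

With shx = 2, the unit square with vertices (0,0), (1,0), (1,1), (0,1) is sheared into the parallelogram (0,0), (1,0), (3,1), (2,1): points on the x axis stay put, and higher rows slide further to the right.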
TRANSFORMATION FUNCTIONS
Separate functions are convenient for simple transformation operations, and a composite function can provide a method for specifying complex transformation sequences.
Individual commands for generating the basic transformation matrices are
translate (translateVector, matrixTranslate)
rotate (theta, matrixRotate)
scale (scaleVector, matrixScale)
composeMatrix (matrix2, matrix1, matrixOut)
A world-coordinate area selected for display is called a window, and the area on a display device to which the window is mapped is called a viewport.
The following figure illustrates the mapping of a picture section that falls within a rectangular window onto a designated rectangular viewport.
Viewing-Transformation
Some graphics packages that provide window and viewport operations allow only
standard rectangles.
But a more general approach is to allow the rectangular window to have any orientation.
In this case, we carry out the viewing transformation in several steps, as indicated
in Fig.
First, we construct the scene in world coordinates using the output primitives and
attributes.
Next, to obtain a particular orientation for the window, we can set up a two-dimensional viewing-coordinate system in the world-coordinate plane, and define a window in the viewing-coordinate system.
The viewing-coordinate reference frame is used to provide a method for setting up arbitrary orientations for rectangular windows.
Once the viewing reference frame is established, we can transform descriptions in
world coordinates to viewing coordinates.
We then define a viewport in normalized coordinates (in the range from 0 to 1 )
and map the viewing-coordinate description of the scene to normalized
coordinates.
At the final step, all parts of the picture that lie outside the viewport are clipped, and the contents of the viewport are transferred to device coordinates.
The following figure illustrates a rotated viewing-coordinate reference frame and the mapping to normalized coordinates.
A point at position (xw, yw) in the window is mapped into position (xv, yv) in the
associated viewport.
To maintain the same relative placement in the viewport as in the window, we require that

(xv − xvmin) / (xvmax − xvmin) = (xw − xwmin) / (xwmax − xwmin)
(yv − yvmin) / (yvmax − yvmin) = (yw − ywmin) / (ywmax − ywmin)
Solving these expressions for the viewport position (xv, yv), we have

xv = xvmin + (xw − xwmin) · sx
yv = yvmin + (yw − ywmin) · sy

where the scaling factors are

sx = (xvmax − xvmin) / (xwmax − xwmin)
sy = (yvmax − yvmin) / (ywmax − ywmin)

The above equations can also be derived with a set of transformations that converts the window area into the viewport area.
This conversion is performed with the following sequence of transformations:
1. Perform a scaling transformation using a fixed-point position of (xwmin, ywmin) that scales the window area to the size of the viewport.
2. Translate the scaled window area to the position of the viewport.
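The window-to-viewport mapping can be sketched as follows (a minimal Python illustration; the function name and the tuple layout of the window and viewport are my own):

```python
def window_to_viewport(xw, yw, win, vp):
    """Map (xw, yw) from the window (xwmin, xwmax, ywmin, ywmax) to
    the viewport (xvmin, xvmax, yvmin, yvmax), keeping the same
    relative placement in both rectangles."""
    xwmin, xwmax, ywmin, ywmax = win
    xvmin, xvmax, yvmin, yvmax = vp
    sx = (xvmax - xvmin) / (xwmax - xwmin)   # scaling factor in x
    sy = (yvmax - yvmin) / (ywmax - ywmin)   # scaling factor in y
    return (xvmin + (xw - xwmin) * sx,
            yvmin + (yw - ywmin) * sy)
```

For example, the centre (5, 5) of a 10-by-10 window maps to the centre (0.5, 0.5) of a unit normalized viewport.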
CLIPPING OPERATIONS
Generally, any procedure that identifies those portions of a picture that are either
inside or outside of a specified region of space is referred to as a clipping
algorithm, or simply clipping.
The region against which an object is to be clipped is called a clip window.
For the viewing transformation, we want to display only those picture parts that are
within the window area.
Everything outside the window is discarded.
Clipping algorithms can be applied in world coordinates, so that only the contents
of the window interior are mapped to device coordinates.
Alternatively, the complete world-coordinate picture can be mapped first to device
coordinates, or normalized device coordinates, then clipped against the viewport
boundaries.
LINE CLIPPING
Following figure illustrates possible relationships between line positions and a
standard
rectangular clipping region.
All other lines cross one or more clipping boundaries, and may require calculation of
multiple intersection points.
To minimize calculations, we try to devise clipping algorithms that can efficiently
identify outside lines and reduce intersection calculations.
For a line segment with endpoints (x1, y1) and (x2, y2), and one or both endpoints outside the clipping rectangle, the parametric representation

x = x1 + u(x2 − x1)
y = y1 + u(y2 − y1),    0 ≤ u ≤ 1

could be used to determine values of parameter u for intersections with the clipping boundary coordinates.
Cohen-Sutherland Line Clipping
In this method, every line endpoint is assigned a four-digit binary code, called a region code, that identifies the location of the point relative to the boundaries of the clipping rectangle. Each bit position in the region code is used to indicate one of the four relative coordinate positions of the point with respect to the clip window:
to the left,
right,
top, or
bottom.
By numbering the bit positions in the region code as 1 through 4 from right to left, the coordinate regions can be correlated with the bit positions as
bit 1: left
bit 2: right
bit 3: below
bit 4: above
A value of 1 in any bit position indicates that the point is in that relative position;
otherwise, the bit position is set to 0.
If a point is within the clipping rectangle, the region code is 0000.
A point that is below and to the left of the rectangle has a region code of 0101.
Bit values in the region code are determined by comparing endpoint coordinate
values (x, y) to the clip boundaries.
Bit 1 is set to 1 if x < xwmin.
The other three bit values can be determined using similar comparisons.
For languages in which bit manipulation is possible, region-code bit values can be
determined
with the following two steps:
(1) Calculate differences between endpoint coordinates and clipping boundaries.
(2) Use the resultant sign bit of each difference calculation to set the corresponding
value in
the region code.
Bit 1 is the sign bit of x-xwmin;
Bit 2 is the sign bit of xwmax-x;
bit 3 is the sign bit of y-ywmin;
bit 4 is the sign bit of ywmax-y;
Once we have established region codes for all line endpoints, we can quickly
determine which lines are completely inside the clip window and which are clearly
outside.
Any lines that are completely contained within the window boundaries have a
region code of 0000 for both endpoints, and we accept these lines.
Any lines that have a 1 in the same bit position in the region codes for each
endpoint are completely outside the clipping rectangle, and we reject these lines.
We would discard the line that has a region code of 1001 for one endpoint and a
code of 0101 for the other endpoint.
Both endpoints of this line are left of the clipping rectangle, as indicated by the 1
in the first bit position of each region code.
A method that can be used to test lines for total clipping is to perform the logical
and operation with both region codes.
If the result is not 0000, the line is completely outside the clipping region.
Lines that cannot be identified as completely inside or completely outside a clip
window by these tests are checked for intersection with the window boundaries.
As shown in figure, such lines may or may not cross into the window interior.
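The region-code tests described above can be sketched in Python (a minimal illustration; the names and the string return values are my own):

```python
LEFT, RIGHT, BELOW, ABOVE = 1, 2, 4, 8   # bits 1-4 of the region code

def region_code(x, y, xwmin, xwmax, ywmin, ywmax):
    """Build the four-bit region code for an endpoint by comparing
    its coordinates against the clip-window boundaries."""
    code = 0
    if x < xwmin:
        code |= LEFT
    elif x > xwmax:
        code |= RIGHT
    if y < ywmin:
        code |= BELOW
    elif y > ywmax:
        code |= ABOVE
    return code

def trivial_test(c1, c2):
    """'accept' if both codes are 0000, 'reject' if the logical AND
    is nonzero (both endpoints outside the same boundary), otherwise
    'check': the line needs intersection calculations."""
    if c1 == 0 and c2 == 0:
        return "accept"
    if c1 & c2:
        return "reject"
    return "check"
```

For example, a point below and to the left of the window gets code 0101, and the line with endpoint codes 1001 and 0101 is rejected, since both codes share the left bit.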
POLYGON CLIPPING
To clip polygons, we need to modify the line-clipping procedures.
A polygon boundary processed with a line clipper may be displayed as a series of
unconnected line segments depending on the orientation of the polygon to the
clipping window.
For polygon clipping, we require an algorithm that will generate one or more
closed areas that are then scan converted for the appropriate area fill.
The output of a polygon clipper should be a sequence of vertices that defines the
clipped polygon boundaries.
At each step, a new sequence of output vertices is generated and passed to the next
window boundary clipper.
There are four possible cases when processing vertices in sequence around the
perimeter of a polygon.
As each pair of adjacent polygon vertices is passed to a window boundary clipper,
we make the following tests:
1. If the first vertex is outside the window boundary and the second vertex is
inside, both the intersection point of the polygon edge with the window
boundary and the second vertex are added to the output vertex list.
2. If both input vertices are inside the window boundary, only the second vertex is
added to the output vertex list.
3. If the first vertex is inside the window boundary and the second vertex is
outside, only the edge intersection with the window boundary is added to the
output vertex list.
4. If both input vertices are outside the window boundary, nothing is added to the
output list.
These four cases are illustrated in following figure for successive pairs of polygon
vertices.
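One boundary-clipper pass applying the four cases above can be sketched in Python (a minimal illustration for the left boundary only; the function name is my own):

```python
def clip_against_left(vertices, xmin):
    """One Sutherland-Hodgman-style pass against the left boundary
    x = xmin, applying the four cases to each edge (v1, v2).
    A full polygon clip repeats this for all four boundaries."""
    out = []
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]       # edge from v1 to v2
        in1, in2 = x1 >= xmin, x2 >= xmin
        if in1 != in2:
            # edge crosses the boundary: add the intersection
            # (cases 1 and 3)
            t = (xmin - x1) / (x2 - x1)
            out.append((xmin, y1 + t * (y2 - y1)))
        if in2:
            # second vertex inside: add it (cases 1 and 2);
            # case 4 (both outside) adds nothing
            out.append((x2, y2))
    return out
```

Clipping the triangle (−1, 0), (1, 0), (1, 2) at xmin = 0 yields the quadrilateral (0, 0), (1, 0), (1, 2), (0, 1): one vertex is cut away and two intersection points are introduced.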
Once all vertices have been processed for one clip window boundary, the output list of vertices is clipped against the next window boundary.
We illustrate this method by processing the area in following figure against the left
window boundary.
CURVE CLIPPING
Areas with curved boundaries can be clipped with methods similar to those
discussed for line clipping.
Curve-clipping procedures will involve nonlinear equations, however, and this
requires more processing than for objects with linear boundaries.
The bounding rectangle for a circle or other curved object can be used first to test
for overlap with a rectangular clip window.
If the bounding rectangle for the object is completely inside the window, we save
the object.
If the rectangle is determined to be completely outside the window, we discard the
object.
In either case, there is no further computation necessary.
But if the bounding rectangle test fails, we can look for other computation-saving
approaches.
For a circle, we can use the coordinate extents of individual quadrants and then
octants for preliminary testing before calculating curve-window intersections.
For an ellipse, we can test the coordinate extents of individual quadrants.
Following figure illustrates circle clipping against a rectangular window.
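The bounding-rectangle pretest described above can be sketched as follows (a Python sketch; the function name and return labels are illustrative):

```python
def classify_bounding_rect(cx, cy, r, wxmin, wymin, wxmax, wymax):
    """Trivial accept/reject of a circle (center (cx, cy), radius r)
    against a clip window, using only its bounding rectangle."""
    bxmin, bymin, bxmax, bymax = cx - r, cy - r, cx + r, cy + r
    if (bxmin >= wxmin and bxmax <= wxmax and
            bymin >= wymin and bymax <= wymax):
        return "save"        # bounding rect completely inside: save object
    if (bxmax < wxmin or bxmin > wxmax or
            bymax < wymin or bymin > wymax):
        return "discard"     # bounding rect completely outside: discard
    return "further tests"   # overlap: curve-window intersections needed
```

Only the "further tests" outcome requires solving the nonlinear curve-window intersection equations.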
Similar procedures can be applied when clipping a curved object against a general
polygon clip region.
On the first pass, we can clip the bounding rectangle of the object against the
bounding rectangle of the clip region.
If the two regions overlap, we will need to solve the simultaneous line-curve
equations to obtain the clipping intersection points.
TEXT CLIPPING
There are several techniques that can be used to provide text clipping in a graphics
package.
The clipping technique used will depend on the methods used to generate characters
and the requirements of a particular application.
The simplest method for processing character strings relative to a window boundary is
to use the all-or-none string-clipping strategy shown in Fig.
The boundary positions of the bounding rectangle enclosing the string are then
compared to the window boundaries, and the string is rejected if there is any overlap.
This method produces the fastest text clipping.
An alternative to rejecting an entire character string that overlaps a window
boundary is to use the all-or-none character-clipping strategy.
Here we discard only those characters that are not completely inside the window.
In this case, the boundary limits of individual characters are compared to the window.
Any character that either overlaps or is outside a window boundary is clipped.
A final method for handling text clipping is to clip the components of individual
characters.
We now treat characters in much the same way that we treated lines.
If an individual character overlaps a clip window boundary, we clip off the parts of
the character that are outside the window.
Outline character fonts formed with line segments can be processed in this way
using a line clipping algorithm.
Characters defined with bit maps would be clipped by comparing the relative
position of the individual pixels in the character grid patterns to the clipping
boundaries.
EXTERIOR CLIPPING
So far, we have considered only procedures for clipping a picture to the interior of a region
by eliminating everything outside the clipping region.
What is saved by these procedures is inside the region.
In some cases, we want to do the reverse, that is, we want to clip a picture to the
exterior of a specified region.
The picture parts to be saved are those that are outside the region.
This is referred to as exterior clipping.
A typical example of the application of exterior clipping is in multiple window
systems.
To correctly display the screen windows, we often need to apply both internal and
external clipping.
Following figure illustrates a multiple window display.
UNIT II
3D CONCEPTS
To obtain a display of a three-dimensional scene that has been modeled in world
coordinates, we must first set up a coordinate reference for the "camera".
This coordinate reference defines the position and orientation for the plane of the
camera film.
This is the plane we want to use to display a view of the objects in the scene.
Object descriptions are then transferred to the camera reference coordinates and
projected onto the selected display plane.
We can then display the objects in wireframe (outline) form, as in Fig,
Parallel Projection
One method for generating a view of a solid object is to project points on the
object surface along parallel lines onto the display plane.
By selecting different viewing positions, we can project visible points on the
object onto the display plane to obtain different two-dimensional views of the
object, as in Fig.
Perspective Projection
Perspective: the appearance of things relative to one another as determined by their
distance from the viewer.
Parallel lines appear to converge to a distant point in the background, and distant objects
appear smaller than objects closer to the viewing position.
Depth Cueing
Depth information is important so that we can easily identify, for a particular
viewing direction, which is the front and which is the back of displayed objects.
Following figure illustrates the ambiguity that can result when a wireframe object
is displayed without depth information.
The lines closest to the viewing position are displayed with the highest intensities,
and lines farther away are displayed with decreasing intensities.
PROJECTIONS
Once world-coordinate descriptions of the objects in a scene are converted to
viewing coordinates, we can project the three-dimensional objects onto the
two-dimensional view plane.
There are two basic projection methods.
Parallel Projection
Perspective Projection
Parallel Projection
In a parallel projection, coordinate positions are transformed to the view plane
along parallel lines.
Front, side, and rear orthographic projections of an object are called elevations,
and a top orthographic projection is called a plan view.
Engineering and architectural drawings commonly employ these orthographic
projections, because lengths and angles are accurately depicted and can be
measured from the drawings.
Perspective Projection
For a perspective projection, object positions are transformed to the view plane
along lines that converge to a point called the projection reference point (or center
of projection).
A perspective projection, on the other hand, produces realistic views but does not
preserve relative proportions.
Projections of distant objects are smaller than the projections of objects of the
same size that are closer to the projection plane
3D REPRESENTATION
Representation schemes for solid objects are often divided into two broad
categories,
1. Boundary representations
2. Space-partitioning representation
Boundary representations
Boundary representations (B-reps) describe a three-dimensional object as a set of
surfaces that separate the object interior from the environment.
Typical examples of boundary representations are polygon facets and spline patches.
Space-partitioning representation
Space-partitioning representations are used to describe interior properties, by
partitioning the spatial region containing an object into a set of small,
nonoverlapping, contiguous solids (usually cubes).
A common space-partitioning description for a three-dimensional object is an octree
representation.
POLYGON SURFACES
The most commonly used boundary representation for a three-dimensional
graphics object is a set of surface polygons that enclose the object interior.
Many graphics systems store all object descriptions as sets of surface polygons.
This simplifies and speeds up the surface rendering and display of objects, since all
surfaces are described with linear equations.
For this reason, polygon descriptions are often referred to as "standard graphics
objects."
In some cases, a polygonal representation is the only one available, but many
packages allow objects to be described with other schemes, such as spline surfaces,
that are then converted to polygonal representations for processing.
A polygon representation for a polyhedron precisely defines the surface features of
the object.
But for other objects, surfaces are tessellated (or tiled) to produce the polygon-mesh approximation.
Following figure shows Wireframe representation of a cylinder with back (hidden
lines removed).
Polygon Tables
We specify a polygon surface with a set of vertex coordinates and associated
attribute parameters.
This information for each polygon is placed into tables that are used in the
subsequent processing, display, and manipulation of the objects in a scene.
Polygon data tables can be organized into two groups:
1. geometric tables and
2. attribute tables.
Geometric tables
Geometric tables contain vertex coordinates and parameters to identify the spatial
orientation of the polygon surfaces.
Attribute tables
Attribute tables include parameters specifying the degree of transparency of the
object and its surface reflectivity and texture characteristics.
A convenient organization for storing geometric data is to create three lists:
1. a vertex table,
2. an edge table, and
3. a polygon table.
Vertex table
Coordinate values for each vertex in the object are stored in the vertex table.
Edge table
The edge table contains pointers back into the vertex table to identify the vertices
for each polygon edge.
Polygon table
The polygon table contains pointers back into the edge table to identify the edges
for each polygon.
This scheme is illustrated in Fig for two adjacent polygons on an object surface.
In addition, individual objects and their component polygon faces can be assigned
object and facet identifiers for easy reference.
Plane Equations
To produce a display of a three-dimensional object, we must process the input data
representation for the object through several procedures.
These processing steps include transformation of the modeling and world-coordinate
descriptions to viewing coordinates, then to device coordinates; identification of visible
surfaces; and the application of surface-rendering procedures.
For some of these processes, we need information about the spatial orientation of
the individual surface components or the object.
This information is obtained from the vertex coordinate values and the equations
that describe the polygon planes.
The equation for a plane surface can be expressed in the form
Ax + By + Cz + D = 0
where (x, y, z) is any point on the plane, and the coefficients A, B, C, and D are
constants describing the spatial properties of the plane.
We can obtain the values of A, B, C, and D by selecting three successive polygon
vertices (x1, y1, z1), (x2, y2, z2), and (x3, y3, z3) and solving the following set of
simultaneous linear plane equations for the ratios A/D, B/D, and C/D:
(A/D)xk + (B/D)yk + (C/D)zk = -1,   k = 1, 2, 3
The solution for this set of equations can be obtained in determinant form, using
Cramer's rule.
Expanding the determinants, we can write the calculations for the plane
coefficients in the form
A = y1(z2 - z3) + y2(z3 - z1) + y3(z1 - z2)
B = z1(x2 - x3) + z2(x3 - x1) + z3(x1 - x2)
C = x1(y2 - y3) + x2(y3 - y1) + x3(y1 - y2)
D = -x1(y2z3 - y3z2) - x2(y3z1 - y1z3) - x3(y1z2 - y2z1)
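The expanded-determinant calculations can be sketched as follows (a Python sketch; the vertices are assumed noncollinear and listed in counterclockwise order):

```python
def plane_coefficients(v1, v2, v3):
    """Plane coefficients A, B, C, D from three polygon vertices,
    using the expanded determinant (Cramer's rule) form."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = v1, v2, v3
    A = y1 * (z2 - z3) + y2 * (z3 - z1) + y3 * (z1 - z2)
    B = z1 * (x2 - x3) + z2 * (x3 - x1) + z3 * (x1 - x2)
    C = x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)
    D = (-x1 * (y2 * z3 - y3 * z2)
         - x2 * (y3 * z1 - y1 * z3)
         - x3 * (y1 * z2 - y2 * z1))
    return A, B, C, D
```

Every vertex of the polygon then satisfies Ax + By + Cz + D = 0, which is a convenient sanity check.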
POLYGON MESHES
Some graphics packages provide several polygon functions for modeling objects.
A single plane surface can be specified with a function such as fillArea.
But when object surfaces are to be tiled, it is more convenient to specify the
surface facets with a mesh function.
Triangle strip
One type of polygon mesh is the triangle strip.
This can be due to numerical errors or errors in selecting coordinate positions for
the vertices.
Solution
One way to handle this situation is simply to divide the polygons into triangles.
Another approach that is sometimes taken is to approximate the plane parameters
A, B, and C.
We can do this with averaging methods or we can project the polygon onto the
coordinate planes.
Using the projection method, we take
A proportional to the area of the polygon projection on the yz plane,
B proportional to the projection area on the xz plane, and
C proportional to the projection area on the xy plane.
For surfaces, a functional description is often tessellated to produce a polygon-mesh approximation to the surface.
Usually, this is done with triangular polygon patches to ensure that all vertices of
any polygon are in one plane.
Polygons specified with four or more vertices may not have all vertices in a single
plane.
Curve and surface equations can be expressed in either a parametric or a
nonparametric form.
QUADRIC SURFACES
A frequently used class of objects are the quadric surfaces, which are described
with second-degree equations (quadratics).
They include
spheres,
ellipsoids,
tori,
paraboloids, and
hyperboloids.
Quadric surfaces, particularly spheres and ellipsoids, are common elements of
graphics scenes, and they are often available in graphics packages as primitives
from which more complex objects can be constructed.
SPHERE
In Cartesian coordinates, a spherical surface with radius r centered on the
coordinate origin is defined as the set of points (x, y, z) that satisfy the equation
x^2 + y^2 + z^2 = r^2
We can also describe the spherical surface in parametric form, using latitude and
longitude angles:
x = r cos φ cos θ,  y = r cos φ sin θ,  z = r sin φ
with -π/2 ≤ φ ≤ π/2 and -π ≤ θ ≤ π.
The above figure shows the parametric coordinate position (r, θ, φ) on the surface
of a sphere with radius r.
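The parametric form can be sketched directly (a Python sketch; phi is the latitude angle and theta the longitude angle):

```python
import math

def sphere_point(r, phi, theta):
    """Point on a sphere of radius r for latitude phi (-pi/2..pi/2)
    and longitude theta (-pi..pi)."""
    x = r * math.cos(phi) * math.cos(theta)
    y = r * math.cos(phi) * math.sin(theta)
    z = r * math.sin(phi)
    return x, y, z
```

Every generated point satisfies the Cartesian sphere equation x^2 + y^2 + z^2 = r^2, up to floating-point rounding.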
ELLIPSOID
An ellipsoidal surface can be described as an extension of a spherical surface,
where the radii in three mutually perpendicular directions can have different
values.
The Cartesian representation for points over the surface of an ellipsoid centered on
the origin is
(x/rx)^2 + (y/ry)^2 + (z/rz)^2 = 1
and a parametric representation for the ellipsoid can be written in terms of the
latitude angle φ and the longitude angle θ.
SPLINE
A spline is a flexible strip used to produce a smooth curve through a designated set
of points.
Several small weights are distributed along the length of the strip to hold it in
position on the drafting table as the curve is drawn.
The term spline curve originally referred to a curve drawn in this manner.
In computer graphics, the term spline curve refers to any composite curve formed
with polynomial sections satisfying specified continuity conditions at the boundary
of the pieces.
Splines are used in graphics applications to design curve and surface shapes, to
digitize drawings for computer storage, and to specify the animation paths for the
objects or the camera in a scene.
Typical CAD applications for splines include the design of automobile bodies,
aircraft and spacecraft surfaces, and ship hulls.
The above figure shows the set of six control points interpolated with piecewise
continuous polynomial.
Spline Specifications
There are three equivalent methods for specifying a particular spline
representation:
1. We can state the set of boundary conditions that are imposed on the spline; or
2. We can state the matrix that characterizes the spline; or
3. We can state the set of blending functions (or basis functions) that determine
how specified geometric constraints on the curve are combined to calculate
positions along the curve path.
Pseudo-color methods
Pseudo-color methods are also used to distinguish different values in a scalar data
set, and color-coding techniques can be combined with graph and chart methods.
To color code a scalar data set, we choose a range of color and map the range of
data values to the color range.
For example, blue could be assigned to the lowest scalar value, and red could be
assigned to the highest value.
Following figure gives an example of a color-coded surface plot.
Color coding a data set can be tricky, because some color combinations can lead to
misinterpretations of the data.
Contour plots are used to display isolines (lines of constant scalar value) for a
data set distributed over a surface.
The isolines are spaced at some convenient interval to show the range and
variation of the data values over the region of space.
The isolines are usually plotted as straight-line sections across each cell, as
illustrated in Fig.
A vector field assigns a magnitude and a direction to each point; examples include velocity, electric fields, and electric current.
One way to visualize a vector field is to plot each data point as a small arrow that
shows the magnitude and direction of the vector.
This method is most often used with cross-sectional slices, as in Fig.
Magnitudes for the vector values can be shown by varying the lengths of the
arrows, or we can make all arrows the same size, but make the arrows different
colors according to a selected color coding for the vector magnitudes.
We can also represent vector values by plotting field lines or streamlines.
Field lines are commonly used for electric, magnetic, and gravitational fields.
The magnitude of the vector values is indicated by the spacing between field lines, and
the direction is the tangent to the field, as shown in Fig
3D TRANSFORMATION
Methods for geometric transformations and object modeling in three dimensions
are extended from two-dimensional methods by including considerations for the z
coordinate.
TRANSLATION
In a three-dimensional homogeneous coordinate representation, a point is
translated from position P = (x, y, z) to position P' = (x', y', z') with the matrix
operation P' = T · P.
Parameters tx, ty, and tz, specifying translation distances for the coordinate directions x,
y, and z, are assigned any real values.
The matrix representation in Eq. 1 is equivalent to the three equations x' = x + tx, y' = y + ty, z' = z + tz.
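The homogeneous translation can be sketched as follows (a Python sketch; the 4x4 matrix-times-column-vector product reduces to simple coordinate additions):

```python
def translate(point, tx, ty, tz):
    """Homogeneous translation P' = T . P, equivalent to
    x' = x + tx, y' = y + ty, z' = z + tz."""
    T = [[1, 0, 0, tx],
         [0, 1, 0, ty],
         [0, 0, 1, tz],
         [0, 0, 0, 1]]
    x, y, z = point
    P = [x, y, z, 1]  # homogeneous column vector
    return tuple(sum(T[r][c] * P[c] for c in range(4)) for r in range(3))
```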
ROTATION
To generate a rotation transformation for an object, we must designate an axis of
rotation (about which the object is to be rotated) and the amount of angular
rotation.
Unlike two-dimensional applications, where all transformations are carried out in
the xy plane, a three-dimensional rotation can be specified around any line in
space.
The following figures illustrate that positive rotation directions about the
coordinate axes are counterclockwise when looking toward the origin from a
positive coordinate position on each axis.
Coordinate-Axes Rotations
The two-dimensional z-axis rotation equations are easily extended to three
dimensions:
x' = x cos θ - y sin θ
y' = x sin θ + y cos θ
z' = z
Transformation equations for rotations about the other two coordinate axes can be obtained
with a cyclic permutation of the coordinate parameters x, y, and z in Eqs. 1.
That is, we use the replacements x → y → z → x.
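The z-axis rotation and its cyclic permutations can be sketched as follows (a Python sketch; each successive function applies the replacement x → y → z → x to the previous one):

```python
import math

def rotate_z(point, theta):
    """z-axis rotation: x' = x cos t - y sin t, y' = x sin t + y cos t, z' = z."""
    x, y, z = point
    c, s = math.cos(theta), math.sin(theta)
    return (x * c - y * s, x * s + y * c, z)

def rotate_x(point, theta):
    """Obtained from rotate_z by the cyclic replacement x -> y -> z -> x."""
    x, y, z = point
    c, s = math.cos(theta), math.sin(theta)
    return (x, y * c - z * s, y * s + z * c)

def rotate_y(point, theta):
    """One more cyclic replacement gives the y-axis rotation."""
    x, y, z = point
    c, s = math.cos(theta), math.sin(theta)
    return (z * s + x * c, y, z * c - x * s)
```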
SCALING
The matrix expression for the scaling transformation of a position P = (x, y, z)
relative to the coordinate origin can be written as P' = S · P.
Where scaling parameters sx, sy, and sz are assigned any positive values.
Explicit expressions for the coordinate transformations for scaling relative to the
origin are x' = x · sx, y' = y · sy, z' = z · sz.
Scaling an object with transformation Eqn1 changes the size of the object and
repositions the object relative to the coordinate origin.
Also, if the transformation parameters are not all equal, relative dimensions in the
object are changed.
We preserve the original shape of an object with a uniform scaling (sx =sy = sz).
The result of scaling an object uniformly with each scaling parameter set to 2 is
shown in Fig.
Scaling with respect to a selected fixed position (xf, yf, zf) can be represented with
the following transformation sequence:
We form the inverse scaling matrix for either Eqn 1 or Eqn 3 by replacing the
scaling parameters sx, sy, and sz with their reciprocals.
The inverse matrix generates an opposite scaling transformation, so the
concatenation of any scaling matrix and its inverse produces the identity matrix.
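Scaling about a fixed point (translate the fixed point to the origin, scale, translate back) collapses to the coordinate equations sketched below (a Python sketch; names are illustrative):

```python
def scale_about_point(point, s, fixed):
    """Scale point by factors s = (sx, sy, sz) about a fixed point
    (xf, yf, zf): x' = x*sx + xf*(1 - sx), and similarly for y, z."""
    (sx, sy, sz) = s
    (xf, yf, zf) = fixed
    x, y, z = point
    return (x * sx + xf * (1 - sx),
            y * sy + yf * (1 - sy),
            z * sz + zf * (1 - sz))
```

Scaling with the reciprocal factors about the same fixed point undoes the transformation, illustrating the inverse-matrix property stated in the text.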
OTHER TRANSFORMATIONS
In addition to translation, rotation, and scaling, there are various additional
transformations that are often useful in three-dimensional graphics applications.
Two of these are reflection and shear.
REFLECTIONS
A three-dimensional reflection can be performed relative to a selected reflection
axis or with respect to a selected reflection plane.
In general, three-dimensional reflection matrices are set up similarly to those for
two dimensions.
Reflections relative to a given axis are equivalent to 180° rotations about that axis.
Reflections with respect to a plane are equivalent to 180° rotations in four-dimensional space.
When the reflection plane is a coordinate plane (either xy, xz, or yz), we can think
of the transformation as a conversion between Left-handed and right-handed
systems.
An example of a reflection that converts coordinate specifications from a right-handed system
to a left-handed system (or vice versa) is shown in Fig.
This transformation changes the sign of the z coordinates, leaving the x and y coordinate
values unchanged.
The matrix representation for this reflection of points relative to the xy plane is the diagonal matrix with elements (1, 1, -1, 1).
SHEARS
Shearing transformations can be used to modify object shapes.
They are also useful in three-dimensional viewing for obtaining general projection
transformations.
In two dimensions, we discussed transformations relative to the x or y axes to
produce distortions in the shapes of objects.
In three dimensions, we can also generate shears relative to the z axis.
As an example of three-dimensional shearing, the following transformation
produces a z-axis shear:
x' = x + a·z,  y' = y + b·z,  z' = z
Boundaries of planes that are perpendicular to the z axis are thus shifted by an
amount proportional to z.
An example of the effect of this shearing matrix on a unit cube is shown in Fig, for
shearing values a = b =1.
Shearing matrices for the x axis and y axis are defined similarly.
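A z-axis shear can be sketched as follows (a Python sketch; the coordinate equations x' = x + a·z, y' = y + b·z, z' = z are the common form of this shear, with a = b = 1 reproducing the unit-cube example):

```python
def shear_z(point, a, b):
    """z-axis shear: planes perpendicular to the z axis are shifted
    by an amount proportional to their z value."""
    x, y, z = point
    return (x + a * z, y + b * z, z)
```

Points in the z = 0 plane are unchanged, which is why the base of the sheared unit cube stays fixed.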
VIEWING PIPELINE
The steps for computer generation of a view of a three-dimensional scene are
somewhat analogous to the processes involved in taking a photograph.
To take a snapshot, we first need to position the camera at a particular point in
space.
Then we need to decide on the camera orientation (in Fig).
Finally, when we snap the shutter, the scene is cropped to the size of the "window"
(aperture) of the camera, and light from the visible surfaces is projected onto the
camera film.
Following figure shows the general processing steps for modeling and converting a
world-coordinate description of a scene to device coordinates.
Once the scene has been modeled, world-coordinate positions are converted to
viewing coordinates.
The viewing-coordinate system is used in graphics packages as a reference for
specifying the observer viewing position and the position of the projection plane,
which we can think of in analogy with the camera film plane.
Next, projection operations are performed to convert the viewing-coordinate
description of the scene to coordinate positions on the projection plane, which will
then be mapped to the output device.
Objects outside the specified viewing limits are clipped from further consideration,
and the remaining objects are processed through visible-surface identification and
surface-rendering procedures to produce the display within the device viewport.
VIEWING COORDINATES
Generating a view of an object in three dimensions is similar to photographing the
object.
We can walk around and take its picture from any angle, at various distances, and
with varying camera orientations.
Whatever appears in the viewfinder is projected onto the flat film surface.
The type and size of the camera lens determines which parts of the scene appear in
the final picture.
These ideas are incorporated into three-dimensional graphics packages so that
views of a scene can be generated, given the spatial position, orientation, and
aperture size of the "camera".
To obtain a series of views of a scene, we can keep the view reference point fixed
and change the direction of N, as shown in Fig.
1. Translate the view reference point to the origin of the world-coordinate system.
2. Apply rotations to align the xv, yv, and zv axes with the world xw, yw, and zw axes,
respectively.
If the view reference point is specified at world position (x0, y0, z0), this point is
translated to the world origin with the matrix transformation
The following methods are used for visible-surface identification:
1. Back-face detection
2. Depth-buffer method
3. A-buffer method
4. Scan-line method
5. Depth-sorting method
6. BSP-tree method
7. Area-subdivision method
8. Octree methods
9. Ray-casting method
10. Curved surfaces
11. Wireframe methods
BACK-FACE DETECTION
A fast and simple object-space method for identifying the back faces of a
polyhedron is based on the "inside-outside" tests.
A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and
D if Ax + By + Cz + D < 0.
When an inside point is along the line of sight to the surface, the polygon must be
a back face (we are inside that face and cannot see the front of it from our viewing
position).
We can simplify this test by considering the normal vector N to a polygon surface,
which has Cartesian components (A, B, C).
In general, if V is a vector in the viewing direction from the eye (or "camera")
position, as shown in Fig., then the polygon is a back face if V · N > 0.
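This test can be sketched as a dot product (a Python sketch, assuming the Hearn-Baker convention that V · N > 0 marks a back face, with N = (A, B, C) and V along the viewing direction):

```python
def is_back_face(normal, view_dir):
    """Back-face test: True when the surface normal N = (A, B, C)
    points away from the viewer, i.e. V . N > 0."""
    return sum(n * v for n, v in zip(normal, view_dir)) > 0
```

With a camera looking along the negative z axis (V = (0, 0, -1)), faces whose normals have a negative z component face away from the viewer and are culled.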
DEPTH-BUFFER METHOD
A commonly used image-space approach to detecting visible surfaces is the
depth-buffer method, which compares surface depths at each pixel position on the
projection plane.
This procedure is also referred to as the z-buffer method, since object depth is
usually measured from the view plane along the z axis of a viewing system.
Each surface of a scene is processed separately, one point at a time across the
surface.
The method is usually applied to scenes containing only polygon surfaces, because
depth values can be computed very quickly and the method is easy to implement.
But the method can also be applied to nonplanar surfaces.
With object descriptions converted to projection coordinates, each (x, y, z )
position on a polygon surface corresponds to the orthographic projection point (x,
y) on the view plane.
Therefore, for each pixel position (x, y) on the view plane, object depths can be
compared by comparing z values.
Following figure shows three surfaces at varying distances along the orthographic
projection line from position (x, y) in a view plane taken as the xv, yv plane.
Surface S1 is closest at this position, so its surface intensity value at (x, y) is saved.
As implied by the name of this method, two buffer areas are required.
A depth buffer is used to store depth values for each (x, y) position as surfaces are
processed, and the refresh buffer stores the intensity values for each position.
Initially, all positions in the depth buffer are set to 0 (minimum depth), and the
refresh buffer is initialized to the background intensity.
Each surface listed in the polygon tables is then processed, one scan line at a time,
calculating the depth (z value) at each (x, y) pixel position.
The calculated depth is compared to the value previously stored in the depth buffer
at that position.
If the calculated depth is greater than the value stored in the depth buffer, the new
depth value is stored, and the surface intensity at that position is determined and
placed in the same xy location in the refresh buffer.
We summarize the steps of a depth-buffer algorithm as follows:
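Those steps can be sketched as follows (a Python sketch using the text's convention of depth initialized to 0 and larger depth values meaning closer surfaces; the surface representation, a mapping of pixel to (depth, intensity), is illustrative):

```python
def depth_buffer_render(width, height, surfaces, background=0):
    """Depth-buffer (z-buffer) sketch: keep, at each pixel, the
    intensity of the surface with the greatest stored depth."""
    depth = {(x, y): 0.0 for x in range(width) for y in range(height)}
    refresh = {(x, y): background for x in range(width) for y in range(height)}
    for surface in surfaces:
        for (x, y), (z, intensity) in surface.items():
            if z > depth[(x, y)]:       # closer than anything seen so far
                depth[(x, y)] = z
                refresh[(x, y)] = intensity
    return refresh
```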
A-Buffer
An extension of the ideas in the depth-buffer method is the A-buffer method.
The A-buffer method represents an antialiased, area-averaged, accumulation-buffer method.
A drawback of the depth-buffer method is that it can only find one visible surface
at each pixel position.
In other words, it deals only with opaque surfaces and cannot accumulate intensity
values for more than one surface, as is necessary if transparent surfaces are to be
displayed .
The A-buffer method expands the depth buffer so that each position in the buffer
can reference a linked list of surfaces.
Thus, more than one surface intensity can be taken into consideration at each pixel
position, and object edges can be antialiased.
UNIT III
COLOR MODELS
A color model is a method for explaining the properties or behavior of color within
some particular context.
Chromaticity diagram:
The chromaticity diagram is a convenient two-dimensional coordinate representation of all the
colors and the mixtures of colors.
Hue
This is the predominant spectral color of the received light.
The color itself is its hue or tint.
Green leaves have a green hue, red apple has a red hue.
Saturation:
This is the spectral purity of the color light.
Saturated colors are vivid, intense, and deep.
RGB COLOR MODEL
In this color model, the three primaries Red, Green and Blue are used.
Here a color C is expressed as
C = RR + GG + BB
We can represent this model in unit cube as shown in following figure,
NTSC signals
An NTSC video signal can be converted to an RGB signal using an NTSC
decoder, which separates the video signal into the YIQ components and then
converts them to RGB values.
We convert from YIQ space to RGB space with the inverse matrix transformation
RGB into YIQ
An RGB signal can be converted to a television signal using an NTSC encoder,
which converts RGB values to YIQ values.
This conversion from RGB values to YIQ values is accomplished with the matrix
transformation.
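The encoder matrix can be sketched as follows (a Python sketch using the commonly published NTSC coefficients; exact values vary slightly between sources):

```python
def rgb_to_yiq(r, g, b):
    """NTSC RGB -> YIQ encoder transformation (Y carries luminance,
    I and Q carry chromaticity)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q
```

For any gray value (r = g = b), the chromaticity components I and Q are zero, which is what lets monochrome receivers display only Y.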
Subtractive process
The CMY color model is a subtractive process.
As we have noted, cyan can be formed by adding green and blue light.
Therefore, when white light is reflected from cyan-colored ink, the reflected light
must have no red component.
That is, red light is absorbed, or subtracted, by the ink.
Similarly, magenta ink subtracts the green component from incident light, and
yellow subtracts the blue component.
A black dot is included because the combination of cyan, magenta, and yellow
inks typically produces dark gray instead of black.
The conversion from an RGB representation to a CMY representation is
[C, M, Y] = [1, 1, 1] - [R, G, B]
where white is represented in the RGB system as the unit column vector.
Conversion of CMY into RGB
Similarly, we convert from a CMY color representation to an RGB representation
with the matrix transformation
[R, G, B] = [1, 1, 1] - [C, M, Y]
where black is represented in the CMY system as the unit column vector.
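Both subtractive conversions can be sketched componentwise (a Python sketch; each channel is assumed normalized to the range 0..1):

```python
def rgb_to_cmy(r, g, b):
    """Subtractive conversion: (C, M, Y) = (1, 1, 1) - (R, G, B)."""
    return 1 - r, 1 - g, 1 - b

def cmy_to_rgb(c, m, y):
    """Inverse conversion: (R, G, B) = (1, 1, 1) - (C, M, Y)."""
    return 1 - c, 1 - m, 1 - y
```

For example, pure red (1, 0, 0) maps to (0, 1, 1): red ink contains no cyan, so magenta and yellow together absorb the green and blue components.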
The boundary of the hexagon represents the various hues, and it is used as the top
of the HSV hexcone.
ANIMATION
Computer animation generally refers to any time sequence of visual changes in a
scene.
In addition to changing object position with translations or rotations, a computer-generated
animation could display time variations in object size, color,
transparency, or surface texture.
Computer animations can also be generated by changing camera parameters, such
as position, orientation, and focal length.
And we can produce computer animations by changing lighting effects or other
parameters and procedures associated with illumination and rendering.
Storyboard Layout
The storyboard is an outline of the action.
It defines the motion sequence as a set of basic events that are to take place.
Depending on the type of animation to be produced, the storyboard could consist
of a set of rough sketches or it could be a list of the basic ideas for the motion.
Object Definition
An object definition is given for each participant in the action.
Objects can be defined in terms of basic shapes, such as polygons or splines.
In addition, the associated movements for each object are specified along with the
shape.
Keyframe
A keyframe is a detailed drawing of the scene at a certain time in the animation
sequence.
Within each key frame, each object is positioned according to the time for that
frame.
Some key frames are chosen at extreme positions in the action.
Others are spaced so that the time interval between key frames is not too great.
More key frames are specified for intricate motions than for simple, slowly
varying motions.
RASTER ANIMATIONS
On raster systems, we can generate real-time animation in limited applications
using raster operations.
Two-dimensional rotations in multiples of 90° are also simple to perform,
although we can rotate rectangular blocks of pixels through arbitrary angles using
antialiasing procedures.
To rotate a block of pixels, we need to determine the percent of area coverage for
those pixels that overlap the rotated block.
Sequences of raster operations can be executed to produce real-time animation of
either two-dimensional or three-dimensional objects, as long as we restrict the
animation to motions in the projection plane.
Then no viewing or visible surface algorithms need be invoked.
We can also animate objects along two-dimensional motion paths using the
color -table transformations.
Here we predefine the object at successive positions along the motion path, and set
the successive blocks of pixel values to color-table entries.
We set the pixels at the first position of the object to "on" values, and we set the
pixels at the other object positions to the background color.
The animation is then accomplished by changing the color-table values so that the object is "on" at successive positions along the animation path as the preceding position is set to the background intensity (Fig).
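As a sketch of this color-table technique (the table layout, with entry 0 as background and one entry per precomputed position, is an illustrative assumption, not the book's code):

```cpp
#include <array>

// Color table for the animation: entry 0 is the background, entries 1..N
// correspond to the object's precomputed positions along the motion path.
const int N = 4;   // number of precomputed positions (illustrative)

// Return the table for animation step 'step': exactly one entry is "on"
// (1.0 = full intensity); every other entry shows the background (0.0).
std::array<float, N + 1> colorTableAt(int step) {
    std::array<float, N + 1> table{};   // all entries start as background
    table[1 + step % N] = 1.0f;         // turn the current position "on"
    return table;
}
```

Cycling `step` each frame moves the "on" entry along the path without redrawing any pixels.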
KEY-FRAME SYSTEMS
We generate each set of in-betweens from the specification of two (or more) key
frames.
Motion paths can be given with a kinematic description as a set of spline curves, or the motions can be physically based by specifying the forces acting on the objects to be animated.
For complex scenes, we can separate the frames into individual components or objects called cels (celluloid transparencies), a term borrowed from cartoon animation.
Given the animation paths, we can interpolate the positions of individual objects
between any two times.
With complex object transformations, the shapes of objects may change over time.
Examples are clothes, facial features, magnified detail, evolving shapes, exploding
or disintegrating objects, and transforming one object into another object.
If all surfaces are described with polygon meshes, then the number of edges per
polygon can change from one frame to the next.
Thus, the total number of line segments can be different in different frames.
MORPHING
Transformation of object shapes from one form to another is called morphing, which is a shortened form of metamorphosis.
Morphing methods can he applied to any motion or transition involving a change
in shape.
Given two key frames for an object transformation, we first adjust the object
specification in one of the frames so that the number of polygon edges (or the
number of vertices) is the same for the two frames.
OPENGL
(Open Graphics Library)
Advantages:
OpenGL is a truly open, vendor-neutral, multiplatform graphics standard.
Stable.
Reliable and portable
Scalable
Easy to use.
Well documented.
Features:
It supports 3D transformations.
It supports different color models.
It supports lighting (flat shading, Gouraud shading, Phong shading).
It supports rendering.
It supports different modeling.
It supports other special effects (atmospheric fog, blending, motion blur).
OPENGL OPERATION:
GLUT (the OpenGL Utility Toolkit) provides a portable windowing API for OpenGL, so one can write a single OpenGL program that runs on many platforms.
Sample Program:
int main (int argc, char** argv)
{
glutInit (&argc, argv);
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
glutInitWindowSize (640, 480);
glutInitWindowPosition (100, 150);
glutCreateWindow ("my first attempt");
glutDisplayFunc (myDisplay);
glutReshapeFunc (myReshape);
glutMouseFunc (myMouse);
glutKeyboardFunc (myKeyboard);
myInit ();
glutMainLoop ();
return 0;
}
glutInitWindowPosition (100, 150):
The window is positioned on the screen 100 pixels over from the left edge and 150 pixels down from the top.
glutDisplayFunc (myDisplay):
Registers myDisplay() as the function to be called whenever the system determines that the window should be redrawn on the screen.
glutReshapeFunc (myReshape):
Screen windows can be reshaped by the user, usually by dragging a corner of the window; myReshape() is registered to be called when this happens.
glutMouseFunc (myMouse):
When one of the mouse buttons is pressed or released, a mouse event is issued.
myMouse() is registered as the function to be called when a mouse event occurs.
glutKeyboardFunc (myKeyboard):
This command registers the function myKeyboard() to be called when a keyboard event occurs.
Suffix  Data type        Typical C/C++ type             OpenGL type name
b       8-bit integer    signed char                    GLbyte
s       16-bit integer   short                          GLshort
i       32-bit integer   int or long                    GLint, GLsizei
f       32-bit float     float                          GLfloat, GLclampf
d       64-bit float     double                         GLdouble, GLclampd
ub      8-bit unsigned   unsigned char                  GLubyte, GLboolean
us      16-bit unsigned  unsigned short                 GLushort
ui      32-bit unsigned  unsigned int or unsigned long  GLuint, GLenum, GLbitfield
The size of a point can be set with glPointSize(), which takes one floating-point argument.
The drawing color is set with glColor3f(red, green, blue), where the values of red, green and blue vary between 0.0 and 1.0.
To draw a line between (40, 100) and (202, 96) we use:
glBegin (GL_LINES);
glVertex2i (40, 100);
glVertex2i (202, 96);
glEnd ();
A polyline is drawn with GL_LINE_STRIP:
glBegin (GL_LINE_STRIP);
glVertex2i (20, 10);
...
GL_TRIANGLES: Takes the listed vertices three at a time and draws a separate
triangle for each.
GL_QUADS: Takes the vertices four at a time and draws a separate quadrilateral for
each.
Example:
The following code fragment specifies a 3D polygon to be drawn, in this case a
simple square.
Note that in this case the same square could have been drawn using the
glBegin (GL_POLYGON);
glVertex3fv (p1);
glVertex3fv (p2);
glVertex3fv (p3);
glVertex3fv (p4);
glEnd ();
The eye that is viewing the scene looks along the z-axis at the window
i. Modelview matrix
ii. Projection matrix
iii. Viewport matrix
Projection matrix:
It scales and shifts each vertex in a particular way, so that all vertices inside the view volume end up inside a standard cube.
The projection matrix effectively squashes the view volume into the cube centred at the origin.
The projection matrix also reverses the sense of the z-axis, so that increasing values of z correspond to increasing depth of a point from the eye.
The following figure shows how the block is transformed into a different block.
Clipping is now performed, which eliminates the portion of the block that lies
outside the standard cube.
Viewport matrix:
Finally viewport matrix maps the surviving portion of the block into a 3D
viewport.
glTranslatef ()
glRotatef ()
glScalef ()
gluLookAt ()
glFrustum ()
gluPerspective ()
glOrtho ()
gluOrtho2D ()
Viewing Transformation
glViewport ()
Code:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// Viewing transform
gluLookAt (eyeX, eyeY, eyeZ, lookAtX, lookAtY, lookAtZ, upX, upY, upZ);
// Modeling transform
glTranslatef (delX, delY, delZ);
glRotatef (angle, i, j, k);
glScalef (multX, multY, multZ);
UNIT IV
Black body
If all the incident light is absorbed, the object appears black and is known as a black body.
We focus on the part of the light that is reflected or scattered from the surface.
Some of this reflected light travels to the eye, causing the object to be seen.
There are two types of reflection of incident light.
Diffuse scattering
Specular reflection
DIFFUSE SCATTERING
It occurs when some of the incident light penetrates the surface slightly and is re-radiated uniformly in all directions.
Scattered light interacts strongly with the surface, so its color is usually affected by
the nature of the material out of which the surface is made.
Lambert's Law
The area subtended is now only the fraction cos(θ), so the brightness of S is reduced by that same fraction.
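Lambert's law can be sketched as a small function; here Is and rho are illustrative parameters for the source intensity and the diffuse reflection coefficient (they are not named in the text above):

```cpp
// Lambert's law: diffuse brightness falls off with the cosine of the angle
// between the unit surface normal m and the unit direction s toward the light.
double lambertDiffuse(double sx, double sy, double sz,   // unit vector to light
                      double mx, double my, double mz,   // unit surface normal
                      double Is, double rho) {
    double cosTheta = sx * mx + sy * my + sz * mz;       // dot product = cos(theta)
    if (cosTheta < 0) cosTheta = 0;                      // light is behind the face
    return Is * rho * cosTheta;
}
```

A face viewed head-on by the light gets full brightness; a face edge-on to the light gets none.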
SPECULAR REFLECTION
Real objects do not scatter light uniformly in all directions.
So specular component is added to the shading model.
Specular reflection causes highlights, which can add significantly to the realism of
a picture when objects are shiny.
FLAT SHADING
When a face is flat (like the roof of a barn) and the light sources are quite distant, the diffuse light component varies little over different points on the roof.
In such cases it is reasonable to use the same color for every pixel covered by the
face.
Flat shading is established in OpenGL by using the command:
glShadeModel(GL_FLAT);
The following figure shows a buckyball and sphere rendered by means of flat
shading.
The individual faces are visible because an entire face is filled with a color that was computed at only one vertex.
SMOOTH SHADING
Smooth shading attempts to de-emphasize edges between faces by computing
colors at more points on each face.
The two principal types of smooth shading are
1. Gouraud Shading
2. Phong Shading
OpenGL does only Gouraud shading.
GOURAUD SHADING
Computationally speaking, Gouraud shading is modestly more expensive than flat
shading.
Gouraud shading is established in OpenGL with the function:
glShadeModel(GL_SMOOTH);
The following figure shows a buckyball and a sphere rendered by means of
Gouraud shading.
The buckyball looks the same as when it was rendered with flat shading.
Because the same color is associated with each vertex of a face, interpolation changes nothing.
The polygonal surface is shown in cross section, with vertices V1, V2, etc.
The imaginary smooth surface is suggested as well.
Properly computed vertex normals m1, m2, etc. are perpendicular to this imaginary surface, so that normals for correct shading will be used.
PHONG SHADING
Following figure shows a projected face, with normal vectors m1, m2, m3 and m4
indicated at the four vertices.
For the scan line ys, the vectors mleft and mright are found by linear interpolation.
For instance:
This interpolated vector must be normalized to unit length before it is used in the
shading formula
Once mleft and mright are known, they are interpolated to form a normal vector at
each x along the scan line.
The following figure shows an object rendered using Gouraud shading and the same object rendered using Phong shading.
In Phong shading, the direction of the normal vector varies smoothly from point to point and more closely approximates that of the underlying smooth surface.
The production of specular highlights is much more faithful than with Gouraud shading.
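The interpolate-then-normalize step at the heart of Phong shading can be sketched as follows (names are illustrative; a real scan-line renderer would update the vector incrementally along x):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Phong shading: linearly interpolate two edge normals, then normalize the
// result to unit length before it is used in the shading formula.
Vec3 interpNormal(Vec3 mleft, Vec3 mright, double f) {   // f in [0,1] along the scan line
    Vec3 m{ (1 - f) * mleft.x + f * mright.x,
            (1 - f) * mleft.y + f * mright.y,
            (1 - f) * mleft.z + f * mright.z };
    double len = std::sqrt(m.x * m.x + m.y * m.y + m.z * m.z);
    m.x /= len; m.y /= len; m.z /= len;                  // normalize to unit length
    return m;
}
```

The normalization is essential: the straight-line blend of two unit vectors is shorter than unit length, which would dim the shading if used directly.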
OpenGL does not support Phong shading directly, because it applies the shading model once per vertex, right after the modelview transformation.
Normal vector information is not passed to the rendering stage following the perspective transformation and division.
The basic texture function is
texture (s, t)
which produces a color or intensity value for each value of s and t between 0 and 1.
TYPES:
There are numerous sources of textures.
The most common textures are
Bitmap textures
Procedural texture
BITMAP TEXTURES:
Textures are often formed from bitmap representations of images, such as digitized photographs.
TEXELS:
A bitmap texture consists of an array, say textr[c][r], of color values (called texels), with c and r ranging from 0 to C-1 and 0 to R-1, respectively.
PROCEDURAL TEXTURE:
Alternatively, we can define a texture by a mathematical function or procedure.
For example, a function of this form (names are illustrative) produces a sphere-like intensity pattern on the surface:
float fakeSphere (float s, float t)
{
float r = sqrt((s - 0.5) * (s - 0.5) + (t - 0.5) * (t - 0.5));
if (r < 0.3)
return 1 - r / 0.3;   // sphere intensity
else
return 0.2;           // dark background
}
Example:
glBegin (GL_QUADS);
glTexCoord2f (0.0, 0.0); glVertex3f (1.0, 2.5, 1.5);
glTexCoord2f (0.0, 0.6); glVertex3f (1.0, 3.7, 1.5);
glTexCoord2f (0.8, 0.6); glVertex3f (2.0, 3.7, 1.5);
glTexCoord2f (0.8, 0.0); glVertex3f (2.0, 2.5, 1.5);
glEnd ();
The above figure shows the common case in which the four corners of the texture square are associated with the four corners of the face.
The above figure shows the use of texture co-ordinates that tile the texture,
making it repeat.
Shadows are absent in figure A, so it is impossible to see how far above the
SHADOW BUFFER:
A different method for drawing shadows uses a variant of the depth buffer, called a shadow buffer. It involves two passes:
i. the scene is rendered from the point of view of the light source, storing depths in the shadow buffer;
ii. the scene is rendered from the eye, and each point is tested against the shadow buffer to decide whether it is in shadow.
In order to have fine control over camera movements, we create and manipulate our own camera in a program.
We create a camera class that knows how to do all the things a camera does.
Doing this is very simple and the payoff is high.
In a program, we create a camera object called, say cam and adjust it with
functions as following,
cam.set(eye,look,up);
cam.slide(-1,0,-2);
cam.roll(30);
cam.yaw(20);
etc.
The following program shows the basic definition of the camera class.
class Camera
{
private:
point3 eye;
vector3 u, v, n;
double viewAngle, aspect, nearDist, farDist;
void setModelViewMatrix();
public:
Camera();
void set(point3 eye, point3 look, vector3 up);
void roll(float angle);
void pitch(float angle);
void yaw(float angle);
void slide(float delU, float delV, float delN);
void setShape(float vAng, float asp, float nearD, float farD);
};
setModelViewMatrix() is used only by member functions of the class and needs to be called after each change is made to the camera's position. A fragment of it:
m[13] = -eVec.dot(v);
m[7] = 0; m[11] = 0;
m[15] = 1.0;
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(m);
The slide() function moves the camera along its own axes:
movement along n : forward or backward
movement along u : left or right
movement along v : up or down
We form two new axes u' and v' that lie in the same plane as u and v.
The functions pitch() and yaw() are implemented in a similar fashion.
Likewise, the farther apart the lines are, the lighter the area appears.
Light patterns, such as objects having light and shaded areas, help when creating
the illusion of depth on paper.
SHADING METHODS
1. Circulism:
This happens because the tooth of the paper absorbs the graphite quickly
and there are extra layers left on top. Glare/shine is a reality when working
with graphite.
Using the ideas from loose crosshatching, this shading method takes it a
little further.
The stumping powder is smooth and doesn't have any shiny particles.
The poster created with powder shading looks more beautiful than the
original. The paper to be used should have small grains on it so that the
powder remains on the paper.
The texture in a face is rendered pixel by pixel.
For each pixel, the renderer must determine the corresponding texture coordinates (s, t).
For each x along this scan line, it must compute the correct position p(x, ys) on the face, and from that obtain the correct position (s*, t*) within the texture.
The following diagram shows the incremental calculation of texture coordinates.
DRAWING SHADOWS
Make one of the objects in the scene a flat planar surface, on which is seen
shadows of other objects.
i. Darkening the colors of the pixels where the shadow is cast, instead of making them gray. This can be done by alpha-blending the shadow with the area it is cast on.
ii. Softening the edges of the shadow. This can be done by adding Gaussian blur to the shadow's alpha channel before blending.
Shadows are one of the most important visual cues that we have for understanding
the spatial relationships between objects.
Unfortunately, even modern computer graphics technology has a difficult time
drawing realistic shadows at an interactive frame rate.
One trick that you can use is to pre-render shadows and then apply them to the
scene as a textured polygon.
This allows the creation of soft shadows and allows the computer to maintain a
high frame rate while drawing shadows.
Step 1: Activate and position the shadows
First, activate the shadows and position them using SketchUp's Shadows toolbar.
Step 2: Draw the Shadows Only
Have the second page hide the layer containing the objects in the scene.
When the user moves from the first page to the second page, the objects will
disappear, leaving the shadows only.
Next, position the camera to view the shadows from directly above so that we can
use the resulting image to draw the shadows onto a ground plane polygon.
The shadows that are rendered by SketchUp always have hard edges.
In order to make the shadows look more realistic, we can soften the shadows
using software such as Photoshop or Gimp that includes an image blur tool.
When you create the shadow image, you can use the alpha channel of the
image to make portions of the image transparent.
Next is to create a new material that uses the soft shadow image from the
previous step as a texture.
If the image that we created in the previous step has an alpha channel, then the
alpha channel will be used to carve out transparent areas in the shadow
material.
Last, create a ground polygon that underlies the objects in the scene and apply
the shadow material to it.
This will create a semi-transparent polygon whose dark patches are the shadow areas. Since the shadows are pre-computed, you should turn off the Shadow option in SketchUp.
Shadow Mapping:
Shadow mapping is just one of many different ways of producing shadows in our
graphics applications.
Shadow mapping is an image space technique, working automatically with objects
created.
Advantages:
No knowledge or processing of the scene geometry is required.
Only a single texture is required to hold shadowing information for each light.
Avoids the high fill requirement of shadow volumes.
Disadvantages:
Aliasing, especially when using small shadow maps.
The scene geometry must be rendered once per light in order to generate the
shadow map for a spotlight.
UNIT V
FRACTALS & SELF SIMILARITY
Fractal:
Example:
Koch Curve:
Experimental Copier
We repeat this process forever, obtaining a sequence of images I0, I1, I2, ..., called the orbit of I0.
Sierpinski Copier
Consider a specific example of a copier that we might call the super copier, or S-copier.
That shows what one pass through the S-copier produces when the input is the letter F.
The figure suggests that the iterates converge to the Sierpinski triangle.
At each iteration the individual F's become one-half as large, and they triple in number.
As more and more iterations are made, the F's approach dots in size, and these dots are arranged in a Sierpinski triangle.
The final image does not depend on the shape of the F at all, but only on the nature of the S-copier.
It contains three lenses, each of which reduces the input image to one-half its size and moves it to a new position.
These three reduced and shifted images are superposed on the printed output.
Scaling and shifting are easily done by affine transformations.
Scaling and shifting are easily done by affine transformations.
MANDELBROT SETS
The Mandelbrot set is a mathematical set of points, whose boundary generates a
distinctive and easily recognisable two dimensional fractal shape.
Julia and Mandelbrot sets arise from a branch of analysis known as iteration theory (or dynamical systems theory).
This theory asks what happens when one iterates a function endlessly.
Mandelbrot Sets and Iterated Function Systems
That is, the system produces each output by squaring its input and adding C.
We assume that the process begins with the starting value S, so the system generates the following sequence of values, or orbit:
d1 = S² + C
d2 = (S² + C)² + C
d3 = ((S² + C)² + C)² + C
d4 = (((S² + C)² + C)² + C)² + C
The orbit depends on two things:
i. the starting point S
ii. the given value of C
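The standard escape-time test built on this orbit can be sketched as follows (maxIter and the escape radius 2 are the conventional choices):

```cpp
#include <complex>

// Escape-time test: iterate d -> d*d + C starting from S. The orbit is
// considered to escape once |d| exceeds 2; return the number of iterations
// performed. Returning maxIter means the orbit stayed bounded, i.e. with
// S = 0 the value C is taken to be in the Mandelbrot set.
int escapeTime(std::complex<double> S, std::complex<double> C, int maxIter) {
    std::complex<double> d = S;
    for (int i = 0; i < maxIter; ++i) {
        if (std::abs(d) > 2.0) return i;   // orbit has escaped
        d = d * d + C;                     // one step of the iteration
    }
    return maxIter;                        // orbit stayed bounded
}
```

Coloring each pixel by its escape time produces the familiar Mandelbrot pictures.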
JULIA SETS
The Mandelbrot Set and Julia Sets are extremely complicated sets of points in the
complex plane.
Process of drawing a Filled-in Julia Set is almost identical to that for the
Mandelbrot set.
We again choose a window in the complex plane and associate pixels with points
in the window.
RANDOM FRACTALS
The fractal shapes seen so far are completely deterministic: they are completely predictable (even though they are very complicated).
In graphics, the term fractal has also become widely associated with randomly generated curves and surfaces that exhibit a degree of self-similarity.
These curves are used to produce naturalistic shapes for representing objects such as ragged mountains, grass and fire.
Fractalizing a segment
(Figure: the midpoint M of a segment is displaced by a random amount.)
The above figure shows this process applied to the line segment S having the end
points A & B.
First Stage:
The midpoint of AB is perturbed to form point C.
Second Stage:
Each of the two segments has its midpoint perturbed to form points D and E.
Third Stage:
At the final stage, new points F through I are added.
(Figure: each midpoint M is again displaced by a random amount.)
where M = (A + B)/2.
For most fractal curves, t is modelled as a Gaussian random variable with zero mean and some standard deviation.
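One fractalization pass over a polyline can be sketched as follows; for simplicity the random amount t is drawn uniformly here, while the text models it as Gaussian, and displacement is applied only to y:

```cpp
#include <vector>
#include <cstdlib>

struct Pt { double x, y; };

// One pass: replace every segment AB by two segments that meet at the
// perturbed midpoint M = (A + B)/2 shifted vertically by a random amount t.
std::vector<Pt> fractalize(const std::vector<Pt>& pts, double amp) {
    std::vector<Pt> out;
    for (size_t i = 0; i + 1 < pts.size(); ++i) {
        Pt A = pts[i], B = pts[i + 1];
        double t = amp * (2.0 * std::rand() / RAND_MAX - 1.0); // random amount in [-amp, amp]
        out.push_back(A);
        out.push_back({ (A.x + B.x) / 2, (A.y + B.y) / 2 + t }); // perturbed midpoint
    }
    out.push_back(pts.back());
    return out;
}
```

Repeating the pass, usually with amp reduced each time, yields the ragged mountain-like curves described above.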
Ray Tracing is a technique for generating an image by tracing the path of light
through pixels in an image plane.
Introduction:
Ray tracing (ray casting) provides a related but even more powerful approach to rendering scenes.
Think of the frame buffer as a simple array of pixels positioned in space, with the eye looking through it into the scene.
The general question is: what does the eye see through this pixel?
A ray of light arrives at the eye through the pixel from some point P in the scene.
The color of the pixel is determined by the light that emanates along the ray from point P.
Reverse Process
Rays are traced in reverse, from the eye back into the scene.
This provides dazzling realism that is difficult to create by any other method.
Ray tracing also has the ability to work comfortably with richer geometric primitives such as
Spheres
Cones and
Cylinders.
The following pseudocode shows the basic steps in a ray tracer:
define the objects and light sources in the scene
set up the camera
for (int r = 0; r < nRows; r++)
for (int c = 0; c < nCols; c++)
{
build the rc-th ray;
find all intersections of the rc-th ray with the objects in the scene;
identify the intersection that lies closest to, and in front of, the eye;
find the color of the light returning to the eye along the ray;
place the color in the rc-th pixel;
}
The scene to be traced contains geometric objects and light sources.
A typical scene may contain
Spheres,
Cones,
Boxes,
Cylinders etc...
When all objects have been tested, the object with the smallest hit time is the closest, and the location of the hit point on that object is found.
Computing Color:
The color of the light received from that object, in the direction of the eye, is computed and stored in the pixel.
The following figure shows simple scene consisting of some cylinders, spheres and
cones.
The ray that is shown intersects a sphere, cylinder and two cones.
All the other objects are missed.
The object with the smallest hit time, a cylinder in this scene, is identified.
The hit spot Phit is easily found from the ray equation:
Phit = eye + dir(r,c) · thit   (hit spot)
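Evaluating the ray equation at the smallest hit time is a one-liner:

```cpp
struct Vec3 { double x, y, z; };

// Evaluate the ray P(t) = eye + dir * t at the smallest hit time tHit
// to obtain the hit spot on the closest object.
Vec3 rayPoint(Vec3 eye, Vec3 dir, double tHit) {
    return { eye.x + dir.x * tHit,
             eye.y + dir.y * tHit,
             eye.z + dir.z * tHit };
}
```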
// create a scene
scn.read("myScene.dat");
Computer generated images can be made much more lively and realistic by
painting textures on various surfaces.
The following figure shows a ray-traced scene with several examples of textures.
OpenGL is used to render each face.
For each face F, a pair of texture coordinates is attached to each vertex of the face.
OpenGL then paints each pixel inside the face using the color of the corresponding point within a texture image.
There are two kinds of texture used in ray tracing:
i. Solid texture
ii. Image texture
Solid Texture
The ray tracer reveals the colour of the texture at each point on the surface of the
object.
Example:
Imagine a 3D checkerboard made up of alternating red and black cubes stacked up throughout all of space.
We position one of the cubelets with a vertex at (0, 0, 0) and give it the size S = (S.x, S.y, S.z).
All other cubes have this same size (a width of S.x, a height of S.y, etc.) and are placed adjacent to one another in all three dimensions.
It is easy to write an expression for such a checkerboard texture:
jump(x, y, z) = ((int)(A + x/S.x) + (int)(A + y/S.y) + (int)(A + z/S.z)) % 2
The following figure shows a generic sphere and generic cube composed of
material with this solid texture
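The jump() expression translates directly into code; the offset A merely shifts the lattice so the truncation is well behaved for negative coordinates (its value is arbitrary as long as it is large enough):

```cpp
// 3D checkerboard solid texture: returns 0 or 1 depending on which cubelet
// (of size Sx by Sy by Sz) the point (x, y, z) falls in; adjacent cubelets
// always get opposite values.
int jump(double x, double y, double z,
         double Sx, double Sy, double Sz, double A = 1000.0) {
    return (int(A + x / Sx) + int(A + y / Sy) + int(A + z / Sz)) % 2;
}
```

A ray tracer calls this at each hit point and picks the red or black material accordingly.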
One of the great strength of the ray-tracing method is the ease with which it can
handle both reflection and refraction of light.
When the surface is mirror-like or transparent (or both), the light I that reaches the eye may have five components:
I = Iamb + Idiff + Ispec + Irefl + Itran
where
Iamb : ambient component
Idiff : diffuse component
Ispec : specular component
Irefl : reflected light component, arising from the light IR
Itran : transmitted light component, arising from the light IT
The diffuse and specular parts arise from light sources in the environment that are visible at Ph.
The following figure shows how the number of contributions of light grows at
each contact point.
Local component:
This is simply the sum of the usual ambient, diffuse and specular reflections at Ph.
Local components depend only on actual light sources.
They are not computed on the basis of casting secondary rays.
Figure (b) abstracts the various light components into a tree of light
contributions.
When a ray of light strikes a transparent object, a portion of the ray penetrates the
object, as shown in fig.
The ray will change direction from dir to t if the speed of light is different in
medium 1 than in medium 2.
According to CSG, complex shapes are defined by set operations (also called
Boolean operations) on simpler shapes.
Objects such as lenses and hollow fishbowls are easily formed by combining the generic shapes.
Such objects are variously called compound objects (or) Boolean objects (or) CSG
objects.
The ray tracing method extends in a very organized way to compound objects.
It is one of the great strengths of ray tracing that it fits so naturally with CSG
models.
A lens L can be formed as the intersection of two spheres:
L = S1 ∩ S2
Fig(b) shows a bowl, constructed using the difference operation.
Applying the difference operation is analogous to removing material, as in cutting or carving.
The bowl is specified by
B = (S1 − S2) − C
The solid globe S1 is hollowed out by removing all the points of the inner sphere S2.
The top is then opened by removing all points in the cone C.
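At the level of a single query point, the Boolean operations reduce to simple membership rules; a sketch (the inside-tests for the primitive shapes, such as the spheres and the cone, are assumed given):

```cpp
// Point-membership classification for CSG compound objects: a point is
// inside the compound according to the Boolean operation on its children.
bool inUnion(bool inA, bool inB)        { return inA || inB; }
bool inIntersection(bool inA, bool inB) { return inA && inB; }
bool inDifference(bool inA, bool inB)   { return inA && !inB; }

// The bowl described above: a globe hollowed out by an inner sphere,
// then opened at the top by removing a cone.
bool inBowl(bool inS1, bool inS2, bool inC) {
    return inDifference(inDifference(inS1, inS2), inC);
}
```

A CSG ray tracer applies the same rules to the intervals where a ray is inside each child, combining them into the intervals where it is inside the compound.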
The following fig. Shows a rocket constructed as the union of two cones and two
cylinders.
That is,
R = C1 ∪ C2 ∪ C3 ∪ C4
QUESTION BANK
UNIT I
2D PRIMITIVES
PART A
13.What are Ellipse equations?
General Ellipse equation: ((x − xc)/rx)² + ((y − yc)/ry)² = 1
Minor Axes
The minor axis spans the shorter dimension of the ellipse, bisecting the major axis
at the halfway position (ellipse center) between the two foci.
20.Define Grayscale.
With monitors that have no color capability, color functions can be used in an
application program to set the shades of gray, or grayscale, for displayed
primitives.
Numeric values over the range from 0 to 1 can be used to specify grayscale levels,
which are then converted to appropriate binary codes for storage in the raster.
This allows the intensity settings to be easily adapted to systems with differing
grayscale capabilities.
Bold face
Underline
Italics
27.What is Translation?
A translation is applied to an object by repositioning it along a straight-line path
from one coordinate location to another.
28.What is Rotation?
A two-dimensional rotation is applied to an object by repositioning it along a
circular path in the xy plane.
To generate a rotation, we specify a rotation angle and the position (x1,y1) of the
rotation point (or pivot point) about which the object is to be rotated.
29.What is Scaling?
A scaling transformation alters the size of an object.
This operation can be carried out for polygons by multiplying the coordinate
values (x, y) of each vertex by scaling factors sx, and sy, to produce the
transformed coordinates (x', y'):
31.What is Reflection?
A reflection is a transformation that produces a mirror image of an object.
The mirror image for a two-dimensional reflection is generated relative to an axis of reflection by rotating the object 180° about the reflection axis.
32.What is Shear?
A transformation that distorts the shape of an object such that the transformed
shape appears as if the object were composed of internal layers that had been
caused to slide over each other is called a shear.
Two common shearing transformations are those that shift coordinate x values and
those that shift y values.
PART B
1.Explain in detail Line Drawing algorithms with example.
2.Explain in detail Circle Drawing algorithms with example.
3.Explain in detail Ellipse Drawing algorithms .
UNIT II
3D CONCEPTS
PART A
Perspective Projection
20.What is Spline?
A spline is a flexible strip used to produce a smooth curve through a designated set
of points.
Several small weights are distributed along the length of the strip to hold it in
position on the drafting table as the curve is drawn.
The term spline curve originally referred to a curve drawn in this manner.
velocity,
force,
electric fields,
electric current.
To color code a scalar data set, we choose a range of color and map the range of
data values to the color range.
For example, blue could be assigned to the lowest scalar value, and red could be
assigned to the highest value.
26.What is 3D Translation?
In a three-dimensional homogeneous coordinate representation, a point is
translated
from position P = (x, y, z) to position P' = (x', y', z') with the matrix operation
27.What is 3D Rotation?
To generate a rotation transformation for an object, we must designate an axis of
rotation (about which the object is to be rotated) and the amount of angular
rotation.
28.What is 3D Shear?
Shearing transformations can be used to modify object shapes.
They are also useful in three-dimensional viewing for obtaining general projection
transformations.
In two dimensions, we discussed transformations relative to the x or y axes to
produce
distortions in the shapes of objects.
In three dimensions, we can also generate shears relative to the z axis.
PART B
UNIT III
GRAPHICS PROGRAMMING
PART A
1.What is Color Model?
A color model is a method for explaining the properties or behavior of color within some
particular context.
In CMY color model, cyan can be formed by adding green and blue light.
Therefore, when white light is reflected from cyan-colored ink, the reflected light
must have no red component.
That is, red light is absorbed, or subtracted, by the ink.
6. What are the dots used in Printing Processes?
The printing process often used with the CMY model generates a color point with a
collection of four ink dots, (like RGB monitor uses a collection of three phosphor dots).
Three dots are used for each of the primary colors (cyan, magenta, and yellow).
Where the white is represented in the RGB system as the unit column vector.
9. How to convert CMY into RGB?
We convert from a CMY color representation to an RGB representation with the matrix transformation,
where black is represented in the CMY system as the unit column vector.
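Componentwise, both conversions simply subtract from the unit vector (C = 1 − R, M = 1 − G, Y = 1 − B, and conversely). A sketch (the Color struct is illustrative):

```cpp
struct Color { double a, b, c; };   // holds either (R, G, B) or (C, M, Y)

// CMY <-> RGB: each component is subtracted from 1, i.e. from the unit
// column vector (white in RGB, black in CMY).
Color rgbToCmy(Color rgb) { return { 1 - rgb.a, 1 - rgb.b, 1 - rgb.c }; }
Color cmyToRgb(Color cmy) { return { 1 - cmy.a, 1 - cmy.b, 1 - cmy.c }; }
```

Note that white (1, 1, 1) in RGB maps to (0, 0, 0) in CMY: white paper needs no ink.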
15. Keyframe
A keyframe is a detailed drawing of the scene at a certain time in the animation
sequence.
Within each key frame, each object is positioned according to the time for that
frame.
Some key frames are chosen at extreme positions in the action.
16. Generation of in-between frames
In-betweens are the intermediate frames between the key frames.
The number of in-betweens needed is determined by the media to be used to
display the animation.
Film requires 24 frames per second, and graphics terminals are refreshed at the rate
of 30 to 60 frames per second.
17.What is Morphing?
Transformation of object shapes from one form to another is called morphing,
Which is a shortened form of metamorphosis.
Morphing methods can he applied to any motion or transition involving a change
in shape.
18. What are the matrices in Graphics pipeline of OpenGL?
The important three matrices are
i. Modelview matrix
ii. Projection matrix
iii. Viewport matrix
The viewport matrix maps the surviving portion of the block into a 3D viewport.
This matrix maps the standard cube into a block shape whose x and y values extend across the viewport, and whose z-component extends from 0 to 1.
UNIT IV
RENDERING
1.What is Shading Model?
A shading model dictates how light is scattered or reflected from a surface.
A shading model frequently used in graphics has two types of light sources:
Point light sources
Ambient light
2.How many ways does the incident light interact with the surface?
The incident light interacts with the surface in three different ways.
Some is absorbed by the surface and converted into heat.
Some is reflected from the surface.
Some is transmitted into the interior of the object, as in the case of a piece of glass.
Shading Model
Flat Shading
Smooth Shading
Gouraud Shading
Phong Shading
glShadeModel(GL_SMOOTH);
OpenGL does not support Phong shading directly, because it applies the shading model once per vertex, right after the modelview transformation.
Normal vector information is not passed to the rendering stage following the perspective transformation and division.
15.What is Texture?
The realism of an image is greatly enhanced by adding surface texture to the
various faces of a mesh object.
A texture can be uniform, such as a brick wall, or irregular, such as wood grain or
marble.
16.What are the types of Texture?
There are numerous sources of textures.
The most common textures are
Bitmap textures
Procedural textures
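The difference can be sketched in a few lines: a bitmap texture is sampled from a stored pixel array, while a procedural texture is computed on the fly from the texture coordinates. Both functions below are illustrative sketches, not code from the text.

```python
def checkerboard(s, t, squares=8):
    """Procedural texture: intensity computed directly from (s, t) in [0, 1]."""
    return float((int(s * squares) + int(t * squares)) % 2)

def bitmap_lookup(texture, s, t):
    """Bitmap texture: intensity read from a stored 2D pixel array,
    with (s, t) in [0, 1] mapped to row and column indices."""
    rows, cols = len(texture), len(texture[0])
    i = min(int(t * rows), rows - 1)  # clamp so s = t = 1 stays in range
    j = min(int(s * cols), cols - 1)
    return texture[i][j]
```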
Fragment of a procedural texture function: return intensity 0.2 for the sphere;
else return the dark background intensity.
Camera movement along its axes:
movement along n : forward or backward
movement along u : left or right
movement along v : up or down
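These three motions can be sketched as sliding the eye along the camera's own u, v, n axes. The function and sign convention below are illustrative assumptions (with a camera that looks along -n, moving forward means a negative dn):

```python
def slide(eye, u, v, n, du=0.0, dv=0.0, dn=0.0):
    """Move the camera eye point along its own axes: du along u
    (left/right), dv along v (up/down), dn along n (forward/back).
    All vectors are 3-tuples; u, v, n are assumed unit length."""
    return tuple(e + du * ui + dv * vi + dn * ni
                 for e, ui, vi, ni in zip(eye, u, v, n))

# Camera at (0, 0, 5) looking toward the origin along -n = (0, 0, -1):
forward = slide((0.0, 0.0, 5.0), (1, 0, 0), (0, 1, 0), (0, 0, 1), dn=-2.0)
```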
PART B
UNIT V
FRACTALS
1.What is Fractal?
The two most famous Peano curves are the Hilbert and Sierpinski curves.
Some low-order Hilbert curves are shown below
It contains three lenses, each of which reduces the input image to one-half its
size.
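The copier idea can be sketched as an iterated function system (IFS): each "lens" is an affine map that halves the image, and every round feeds the whole picture through all three lenses. The corner positions chosen below are illustrative (they produce a Sierpinski-style gasket):

```python
def lenses(point):
    """The three 'lenses': each map shrinks the input to half size and
    anchors the copy at a different corner of the unit square."""
    x, y = point
    return [(x / 2, y / 2),                # bottom-left copy
            (x / 2 + 0.5, y / 2),          # bottom-right copy
            (x / 2 + 0.25, y / 2 + 0.5)]   # top copy

def run_copier(points, rounds):
    """Feed the whole point cloud through all three lenses repeatedly."""
    for _ in range(rounds):
        points = [q for p in points for q in lenses(p)]
    return points

cloud = run_copier([(0.3, 0.4)], 5)  # 3**5 = 243 points of the attractor
```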
Julia and Mandelbrot sets arise from a branch of analysis known as iteration
theory (or dynamical systems theory).
This theory asks what happens when a function is iterated endlessly.
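For the Mandelbrot set, the iterated function is f(z) = z*z + c starting from z = 0; c belongs to the set when the orbit stays bounded. A minimal escape-time sketch (the cutoff values are the usual choices, not taken from the text):

```python
def dwell(c, max_iter=100, radius=2.0):
    """Iterate z -> z*z + c from z = 0. Return the iteration count at
    which |z| first exceeds radius, or None if the orbit stays bounded,
    in which case c is taken to lie in the Mandelbrot set."""
    z = 0j
    for k in range(max_iter):
        z = z * z + c
        if abs(z) > radius:
            return k
    return None
```

The same iteration with c fixed and the starting point z varied over the plane yields the Julia set of f.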
10.What are Random Fractals?
First Stage:
The midpoint of AB is perturbed to form point C.
Second Stage:
Each of the two segments has its midpoint perturbed to form points D and E.
Third Stage:
At the final stage, new points F, …, I are added.
Where M = (A+B)/2
For most fractal curves, t is modelled as a Gaussian random variable with zero
mean and some standard deviation.
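The stages above amount to recursive midpoint displacement. A sketch under those assumptions (fixed seed for reproducibility; the names are mine):

```python
import random

def fractalize(a, b, depth, std_dev=0.1, rng=None):
    """Build a random fractal curve between points a and b by recursively
    perturbing each segment's midpoint M = (A + B)/2 with a Gaussian
    offset of zero mean and the given standard deviation."""
    rng = rng or random.Random(1)  # fixed seed: reproducible sketch
    if depth == 0:
        return [a, b]
    mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
    m = (mx, my + rng.gauss(0.0, std_dev))  # perturb the midpoint
    left = fractalize(a, m, depth - 1, std_dev, rng)
    return left[:-1] + fractalize(m, b, depth - 1, std_dev, rng)

curve = fractalize((0.0, 0.0), (1.0, 0.0), 3)  # 2**3 + 1 = 9 points
```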
Ray Tracing is a technique for generating an image by tracing the path of light
through pixels in an image plane.
When all objects have been tested, the object with the smallest hit time is the
closest, and the location of the hit point on that object is found.
Computing Color:
The colour of the light received by the object, in the direction of the eye, is
computed and stored in the pixel.
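The hit-time search can be sketched for spheres, the simplest primitive: solve the quadratic for where the ray meets the sphere, keep the smallest positive root, then keep the smallest t over all objects. The names and structure below are illustrative.

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Smallest positive hit time t where origin + t*direction meets the
    sphere, or None on a miss. All vectors are 3-tuples."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * q for d, q in zip(direction, oc))
    c = sum(q * q for q in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None
    for t in ((-b - math.sqrt(disc)) / (2 * a),   # nearer root first
              (-b + math.sqrt(disc)) / (2 * a)):
        if t > 0:
            return t
    return None

def closest_hit(origin, direction, spheres):
    """Test every object; the one with the smallest hit time is visible."""
    best = None
    for center, radius in spheres:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, (center, radius))
    return best
```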
The ray that is shown intersects a sphere, cylinder and two cones.
All the other objects are missed.
18.What is Solid Texture?
A solid texture is defined at every point of a 3D region of space, rather than on a
flat image.
The ray tracer reveals the colour of the texture at each hit point on the surface of
the object.
Complex shapes are defined by set operations (also called Boolean operations) on
simpler shapes.
Objects such as lenses and hollow fishbowls are easily formed by combining the
generic shapes.
Such objects are variously called compound objects, Boolean objects, or CSG
objects.
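The set operations can be sketched as Boolean combinations of inside tests: a point is inside the union if it is inside either shape, inside the intersection if inside both, and inside the difference if inside the first but not the second. A real CSG ray tracer combines hit intervals along the ray, but the logic is the same; the hollow-shell example below is my own.

```python
def inside_sphere(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Point-membership test for a generic sphere."""
    return sum((a - b) ** 2 for a, b in zip(p, center)) <= radius ** 2

def union(f, g):        return lambda p: f(p) or g(p)
def intersection(f, g): return lambda p: f(p) and g(p)
def difference(f, g):   return lambda p: f(p) and not g(p)

# A hollow shell (fishbowl-like): a sphere with a smaller sphere carved out.
shell = difference(lambda p: inside_sphere(p, radius=1.0),
                   lambda p: inside_sphere(p, radius=0.5))
```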
PART B
ROAD MAP
UNIT I
Output primitives
Definition
Simple geometric components
Additional output primitives
Line
Stair step effect (jaggies)
Line-drawing algorithms
Slope-intercept equation
DDA algorithm
Bresenham algorithm
derivations
algorithm
problem
Circle algorithm
General form
Polar form
Midpoint circle algorithm
Theory & derivation
Algorithm
Problem
Ellipse algorithm
General form
Polar form
o Solid lines,
o Dashed lines,
o Dotted lines
Width
Color
Size
Color
Diagram
2D transformation
Translation,
Diagram
Equation
Matrix format
Rotation
Diagram
Equation
Matrix format
Scaling
Diagram
Equation
Differential scaling.
Matrix format
Reflection
Diagrams
Definition
Matrix format
Shear
Diagram
Equation
Matrix format
Transformation functions
Translate (translateVector, matrixTranslate)
Rotate (theta, matrixRotate)
Scale (scaleVector, matrixScale)
composeMatrix (matrix2, matrix1, matrixOut)
UNIT II
3D Concepts
Depth cueing
Projections
Parallel projection
Diagram
Orthographic parallel projection
Oblique parallel projection
Diagrams
Perspective projection
Diagrams
Equations
3D Representation
Boundary representations
Space-partitioning representation
Polygon surfaces
Polygon tables
Geometric tables
Attribute tables
1. A vertex table,
2. An edge table, and
3. A polygon table.
Plane equations
Polygon meshes
Triangle strip
Quadrilateral mesh
Problem
Solution
Diagrams
3D Transformation
Translation
Diagram
Equation
Matrix form
Rotation
Diagram
Coordinate-axes rotations
Equation
Matrix form
Scaling
Diagram
Equation
Matrix form
Shear
Diagram
Matrix form
Reflection
Diagram
Matrix
Viewing pipeline
Diagram
Viewing coordinates
Diagram
Matrix
UNIT III
Color models
Chromaticity diagram
Colors representation
Diagram
Uses of chromaticity diagram
Animation
Design of animation sequences
1. Storyboard layout
2. Object definitions
3. Key-frame specifications
4. Generation of in-between frames
Raster animations
Explanation
Diagrams
Key-frame systems
Morphing
Diagrams
OPENGL
Advantages:
Features:
OpenGL operation
Diagram
Glut
Sample program
Glut functions
Basic graphics primitives
Sample code
Format of OpenGL commands
OpenGL data types
Sample code
Other graphics primitives in OpenGL
Example
i. Projection matrix
Diagram
ii. Viewport matrix
Diagram
UNIT IV
Introduction to Shading Model
Light sources
Black body
types of reflection
Diffuse scattering
Computing the diffuse component
Diagram
Lambert's Law
diffuse reflection coefficient
Specular reflection
Flat Shading
OpenGL function
Diagram
lateral inhibition
Smooth Shading
Types
1. Gouraud Shading
i. OpenGL function
ii. Diagrams
2. Phong Shading
i. Diagrams
ii. Drawback:
Drawing Shadows
Diagram
Explanation
Steps
Step 1: Activate and position the shadows
Step 2: Draw the Shadows Only
Step 3: Draw the Shadows from Above
Step 4: Soften the Shadows
Step 5: Create a new Shadow Material
Step 6: Apply the material to a ground polygon
Shadow Mapping:
Advantages:
Disadvantages
UNIT V
o Diagrams
Mandelbrot sets
Iteration theory
Mandelbrot sets and iterated function systems
Diagram
Julia sets
Diagrams
Drawing filled-in Julia sets
Random fractals
Fractalizing a segment
o Diagram
Stages of fractalization
o First stage:
Diagram
o Second stage:
Diagram
o Third stage:
Diagram
Calculation of fractalization in a program
Diagram
Equation
Fract() function
Drawing a fractal curve
Drawfractal() function
Ray tracing
Introduction
Diagram
Reverse process
Features of ray tracing:
o Diagram
Object list:
o Diagram
o Diagram
o Equation
Diagram
Equations
Union of four primitives
Diagram
Equations
DIAGRAMS
UNIT I
Stair step Effect (jaggies)
Line
y= m.x + b
Circle
Ellipse
Line Types
Fill Styles
Hatch Fill
Character attribute
UNIT II
Parallel Projection
Perspective Projection
Polygon Table
Triangle strip
Quadrilateral mesh
Sphere
ELLIPSOID
SPLINE
Translation
Rotation
Scaling
Reflection
Shear
Viewing Pipeline
Viewing Coordinates
UNIT III
Chromaticity diagram
Morphing
OpenGL operation
Projection matrix
Viewport matrix
UNIT IV
Flat Shading
Gouraud Shading
Phong Shading
PROCEDURAL TEXTURE
DRAWING SHADOWS
UNIT V
Experimental Copier
Sierpinski Copier
MANDELBROT SETS
IFS System
JULIA SETS
Random Fractal
First Stage
Second Stage
Third Stage
Ray Tracing
Object list
Union of 4 primitives
THE END