
COMPUTER GRAPHICS
CS2401

STUDY MATERIAL

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


KARPAGA VINAYAGA COLLEGE OF ENGINEERING AND TECHNOLOGY
MADURANTAKAM

S.PRABHU
ASSISTANT PROFESSOR


INDEX

SN  TOPIC
1   Syllabus
2   Output Primitives
3   3D Concepts
4   Graphics Programming
5   Rendering
6   Fractals
7   Question Bank
8   Road Map
9   Diagrams


CS2401 COMPUTER GRAPHICS

UNIT I 2D PRIMITIVES
Output primitives - Line, Circle and Ellipse drawing algorithms - Attributes of output primitives - Two-dimensional geometric transformations - Two-dimensional viewing - Line, polygon, curve and text clipping algorithms.

UNIT II 3D CONCEPTS
Parallel and perspective projections - Three-dimensional object representation - Polygons, curved lines, splines, quadric surfaces - Visualization of data sets - 3D transformations - Viewing - Visible-surface identification.

UNIT III GRAPHICS PROGRAMMING
Color models: RGB, YIQ, CMY, HSV - Animations: general computer animation, raster, keyframe - Graphics programming using OpenGL - Basic graphics primitives - Drawing three-dimensional objects - Drawing three-dimensional scenes.

UNIT IV RENDERING
Introduction to shading models - Flat and smooth shading - Adding texture to faces - Adding shadows of objects - Building a camera in a program - Creating shaded objects - Rendering texture - Drawing shadows.

UNIT V FRACTALS
Fractals and self-similarity - Peano curves - Creating images by iterated functions - Mandelbrot sets - Julia sets - Random fractals - Overview of ray tracing - Intersecting rays with other primitives - Texture - Reflections and transparency - Boolean operations on objects.

TEXT BOOKS:
1. Donald Hearn, Pauline Baker, "Computer Graphics C Version", Second Edition, Pearson Education, 2004.
2. F.S. Hill, "Computer Graphics Using OpenGL", Second Edition, Pearson Education, 2003.


UNIT I
OUTPUT PRIMITIVES
 A picture can be described in several ways.
 A picture may be specified as a set of pixels in a raster display.
 Or we can describe the picture as a set of complex objects, such as trees and terrain or furniture and walls.
Output Primitives
 Graphics programming packages provide functions to describe a scene in terms of
these basic geometric structures, referred to as output primitives.
 Sets of output primitives can be grouped into more complex structures.
 Each output primitive is specified with input coordinate data and other information
about the way that object is to be displayed.

Simple geometric components


 Points and straight line segments are the simplest geometric components of
pictures.

Additional output primitives


 Output primitives that can be used to construct a picture include
 circles and other conic sections,
 quadric surfaces,
 spline curves and surfaces,
 polygon color areas, and
 character strings.


POINTS AND LINES - INTRODUCTION
 Shapes and colors of the objects can be described internally with pixel arrays or
with sets of basic geometric structures.
 Such as
straight line segments and
polygon color areas.
 The scene is then displayed either by loading the pixel arrays into the frame buffer
or by scan converting the basic geometric-structure specifications into pixel
patterns.
 Typically, graphics programming packages provide functions to describe a scene
in terms of these basic geometric structures, referred to as output primitives,
 And to group sets of output primitives into more complex structures.
 Each output primitive is specified with input coordinate data and other information
about the way that object is to be displayed.
 Points and straight line segments are the simplest geometric components of
pictures.
 Additional output primitives that can be used to construct a picture include
circles and other conic sections,
quadric surfaces,
spline curves and surfaces,
polygon color areas, and
character strings.


POINTS
 Point plotting is accomplished by converting a single coordinate position furnished
by an application program into appropriate operations for the output device in use.
 With a CRT monitor, for example, the electron beam is turned on to illuminate the
screen phosphor at the selected location.
 How the electron beam is positioned depends on the display technology.
Random-scan system or Vector System
 It stores point-plotting instructions in the display list, and coordinate values in
these instructions are converted to deflection voltages that position the electron
beam at the screen locations to be plotted during each refresh cycle.

Black-and-white raster system


 A point is plotted by setting the bit value corresponding to a specified screen
position within the frame buffer to 1.
 Then, as the electron beam sweeps across each horizontal scan line, it emits a burst
of electrons (plots a point) whenever a value of 1 is encountered in the frame
buffer.

RGB system
 The frame buffer is loaded with the codes for the intensities that are to be
displayed at the screen pixel positions.


LINES
 Line drawing is accomplished by calculating intermediate positions along the line
path between two specified endpoint positions.
 An output device is then directed to fill in these positions between the endpoints.
Analog Display Devices
 For analog devices, such as a vector pen plotter or a random-scan display, a
straight line can be drawn smoothly from one endpoint to the other.
 Linearly varying horizontal and vertical deflection voltages are generated that are
proportional to the required changes in the x and y directions to produce the
smooth line.

Digital Display Devices


 Digital devices display a straight line segment by plotting discrete points between
the two endpoints.
 Discrete coordinate positions along the line path are calculated from the equation
of the line.

Stair step Effect (jaggies)


 For a raster video display, the line color (intensity) is then loaded into the frame
buffer at the corresponding pixel coordinates.
 Reading from the frame buffer, the video controller then "plots" the screen pixels.
 Screen locations are referenced with integer values.
 So plotted positions may only approximate actual line positions between two specified endpoints.


 For example, a computed line position of (10.48, 20.51) would be converted to pixel position (10, 21).
 Thus rounding of coordinate values to integers causes lines to be displayed with a stairstep appearance ("the jaggies"), as in the following figure.

How are pixel positions referenced?

 Pixel positions are referenced by scan-line number and column number.

What is the getpixel() function?


 Sometimes we want to be able to retrieve the current frame buffer intensity setting
for a specified location.
 We accomplish this with the low-level function
getpixel (x, y)
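 As a rough illustration (an assumption, not from the text), a frame buffer and its low-level pixel-access functions might be sketched in C as follows:

#define WIDTH  640
#define HEIGHT 480

static int frameBuffer[HEIGHT][WIDTH];   /* one intensity/color code per pixel */

/* store an intensity value at screen position (x, y) */
void setpixel(int x, int y, int value)
{
    frameBuffer[y][x] = value;
}

/* retrieve the current frame-buffer intensity setting at (x, y) */
int getpixel(int x, int y)
{
    return frameBuffer[y][x];
}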


LINE-DRAWING ALGORITHMS
 The Cartesian slope-intercept equation for a straight line is

y = m.x + b        ... (1)

 where m represents the slope of the line and b represents the y intercept.
 Given that the two endpoints of a line segment are specified at positions (x1, y1) and (x2, y2), as shown in the following figure, we can determine values for the slope m and y intercept b with the following calculations:

m = (y2 - y1) / (x2 - x1)        ... (2)
b = y1 - m.x1                    ... (3)

 Algorithms for displaying straight lines are based on the line equation (1) and the calculations given in Eqns. (2) and (3).
 For any given x interval Δx along a line, we can compute the corresponding y interval Δy from Eqn. (2) as

Δy = m.Δx        ... (4)


 Similarly, we can obtain the x interval Δx corresponding to a specified Δy as

Δx = Δy / m        ... (5)
 These equations form the basis for determining deflection voltages in analog
devices.
 For lines with slope magnitudes |m| < 1, Δx can be set proportional to a small horizontal deflection voltage, and the corresponding vertical deflection is then set proportional to Δy as calculated from Eqn. (4).
 For lines whose slopes have magnitudes |m| > 1, Δy can be set proportional to a small vertical deflection voltage, with the corresponding horizontal deflection voltage set proportional to Δx, calculated from Eqn. (5).
 For lines with m = 1, Δx = Δy and the horizontal and vertical deflection voltages are equal.
 In each case, a smooth line with slope m is generated between the specified
endpoints.
 On raster systems, lines are plotted with pixels, and step sizes in the horizontal and
vertical directions are constrained by pixel separations.
 That is, we must "sample" a line at discrete positions and determine the nearest
pixel to the line at each sampled position.
 This scan-conversion process for straight lines is illustrated by a near-horizontal line with discrete sample positions along the x axis.
DDA Algorithm
 The digital differential analyzer (DDA) is a scan-conversion line algorithm based on calculating either Δy or Δx.
 We sample the line at unit intervals in one coordinate and determine corresponding
integer values nearest the line path for the other coordinate.
 Consider first a line with positive slope, as shown in Fig.

 If the slope is less than or equal to 1, we sample at unit x intervals (Δx = 1) and compute each successive y value as

yk+1 = yk + m
 Subscript k takes integer values starting from 1, for the first point, and increases by 1 until the final endpoint is reached.


 Since m can be any real number between 0 and 1, the calculated y values must be
rounded to the nearest integer.
 For lines with a positive slope greater than 1, we reverse the roles of x and y.
 That is, we sample at unit y intervals (Δy = 1) and calculate each succeeding x value as

xk+1 = xk + 1/m
 If this processing is reversed, so that the starting endpoint is at the right, then either we have Δx = -1 and yk+1 = yk - m, or we have Δy = -1 and xk+1 = xk - 1/m.
 When the start endpoint is at the right (for the same slope), we set Δx = -1.
 Similarly, when the absolute value of a negative slope is greater than 1, we use Δy = -1.

Advantages
 The DDA algorithm is a faster method for calculating pixel positions than older
methods.
 It eliminates the multiplication, so that appropriate increments are applied in the x
or y direction to step to pixel positions along the line path.
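 The unit-interval sampling described above can be sketched in C; setPixel is an assumed device-level routine (not from the text), and rounding is done by adding 0.5 before truncation:

#include <stdlib.h>

void setPixel(int x, int y);   /* assumed device-level primitive */

void lineDDA(int xa, int ya, int xb, int yb)
{
    int dx = xb - xa, dy = yb - ya;
    int steps = abs(dx) > abs(dy) ? abs(dx) : abs(dy);  /* sample along the major axis */
    float xInc, yInc, x = (float) xa, y = (float) ya;
    int k;

    setPixel((int)(x + 0.5f), (int)(y + 0.5f));
    if (steps == 0) return;                 /* degenerate case: coincident endpoints */
    xInc = (float) dx / steps;              /* either 1 or the slope-related increment */
    yInc = (float) dy / steps;
    for (k = 0; k < steps; k++) {
        x += xInc;
        y += yInc;
        setPixel((int)(x + 0.5f), (int)(y + 0.5f));
    }
}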


BRESENHAM'S LINE ALGORITHM


 Bresenham's algorithm is an accurate and efficient raster line-generating algorithm.
 It uses only incremental integer calculations that can be adapted to display circles and other curves.
 Figures illustrate sections of a display screen where straight line segments are to be
drawn.

 The vertical axes show scan-line positions.


 The horizontal axes identify pixel columns.
 Sampling at unit x intervals in these examples, we need to decide which of two
possible pixel positions is closer to the line path at each sample step.
 Starting from the left endpoint shown in Fig a, we need to determine at the next
sample position whether to plot the pixel at position (11, 11) or the one at (11, 12).
 Similarly, Fig b shows a negative slope-line path starting from the left endpoint at
pixel position (50, 50).
 In this case, do we select the next pixel position as (51, 50) or as (51, 49)?
 These questions are answered with Bresenham's line algorithm by testing the sign
of an integer parameter, whose value is proportional to the difference between the
separations of the two pixel positions from the actual line path.


 To illustrate Bresenham's approach, we first consider the scan-conversion process for lines with positive slope less than 1.
 Pixel positions along a line path are then determined by sampling at unit x intervals.
 Following figure demonstrates the kth step in this process.

 Assuming we have determined that the pixel at (xk, yk) is to be displayed, we next
need to decide which pixel to plot in column xk+1.
 Our choices are the pixels at positions (xk+1,yk) and (xk+1, yk+1).
 The y coordinate on the mathematical line at pixel column position xk + 1 is calculated as

y = m(xk + 1) + b

 Then

d1 = y - yk = m(xk + 1) + b - yk

 And

d2 = (yk + 1) - y = yk + 1 - m(xk + 1) - b


 The difference between these two separations is

d1 - d2 = 2m(xk + 1) - 2yk + 2b - 1

 A decision parameter pk for the kth step in the line algorithm can be defined so that it involves only integer calculations.
 We accomplish this by substituting m = Δy/Δx, where Δy and Δx are the vertical and horizontal separations of the endpoint positions, and defining:

pk = Δx(d1 - d2) = 2Δy.xk - 2Δx.yk + c

 At step k + 1, the decision parameter is evaluated from

pk+1 = 2Δy.xk+1 - 2Δx.yk+1 + c

 Subtracting the preceding equation from this one, we have

pk+1 = pk + 2Δy - 2Δx(yk+1 - yk)

 where the term yk+1 - yk is either 0 or 1, depending on the sign of pk.
 The first parameter, p0, is evaluated at the starting pixel position (x0, y0) and with m evaluated as Δy/Δx:

p0 = 2Δy - Δx


ALGORITHM

1. Input the two line endpoints and store the left endpoint in (x0, y0).
2. Load (x0, y0) into the frame buffer; that is, plot the first point.
3. Calculate constants Δx, Δy, 2Δy and 2Δy - 2Δx, and obtain the starting value for the decision parameter as
   p0 = 2Δy - Δx
4. At each xk along the line, starting at k = 0, perform the following test:
   If pk < 0, the next point to plot is (xk+1, yk) and
   pk+1 = pk + 2Δy
   Otherwise, the next point to plot is (xk+1, yk+1) and
   pk+1 = pk + 2Δy - 2Δx
5. Repeat step 4 Δx times.
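 A minimal C sketch of these steps, for lines with slope between 0 and 1 (setPixel is an assumed device-level routine):

#include <stdlib.h>

void setPixel(int x, int y);   /* assumed device-level primitive */

void lineBres(int x1, int y1, int x2, int y2)
{
    int dx = abs(x2 - x1), dy = abs(y2 - y1);
    int p = 2 * dy - dx;                    /* starting decision parameter */
    int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
    int x, y, xEnd;

    if (x1 > x2) { x = x2; y = y2; xEnd = x1; }   /* always step left to right */
    else         { x = x1; y = y1; xEnd = x2; }
    setPixel(x, y);

    while (x < xEnd) {
        x++;
        if (p < 0)
            p += twoDy;                     /* stay on the same scan line */
        else {
            y++;                            /* move up to the next scan line */
            p += twoDyMinusDx;
        }
        setPixel(x, y);
    }
}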


CIRCLE-DRAWING ALGORITHMS
 A circle is defined as the set of points that are all at a given distance r from a center position (xc, yc).

 This distance relationship is expressed by the Pythagorean theorem in Cartesian coordinates as

(x - xc)^2 + (y - yc)^2 = r^2        ... (1)

 Another way to eliminate unequal spacing of plotted points is to calculate points along the circular boundary using polar coordinates r and θ.
 Expressing the circle equation in parametric polar form yields the pair of equations

x = xc + r cos θ
y = yc + r sin θ        ... (2)

 Computation can be reduced by considering the symmetry of circles.


 The shape of the circle is similar in each quadrant.
 We can generate the circle section in the second quadrant of the xy plane by noting
that the two circle sections are symmetric with respect to the y axis.
 And circle sections in the third and fourth quadrants can be obtained from sections
in the first and second quadrants by considering symmetry about the x axis.

MIDPOINT CIRCLE ALGORITHM


 First we can set up our algorithm to calculate pixel positions around a circle path
centered at the coordinate origin (0,0).
 Then each calculated position (x, y) is moved to its proper screen position by
adding xc to x and yc to y.
 Along the circle section from x = 0 to x = y in the first quadrant, the slope of the
curve varies from 0 to -1.
 Therefore, we can take unit steps in the positive x direction over this octant and use
a decision parameter to determine which of the two possible y positions is closer to
the circle path at each step.
 Positions in the other seven octants are then obtained by symmetry.
 To apply the midpoint method, we define a circle function:

fcircle(x, y) = x^2 + y^2 - r^2        ... (3)

 The relative position of any point (x, y) can be determined by checking the sign of the circle function:

fcircle(x, y) < 0,  if (x, y) is inside the circle boundary
fcircle(x, y) = 0,  if (x, y) is on the circle boundary
fcircle(x, y) > 0,  if (x, y) is outside the circle boundary

 Thus, the circle function is the decision parameter in the midpoint algorithm, and
we can set up incremental calculations for this function as we did in the line
algorithm.

 The above figure shows the midpoint between the two candidate pixels at sampling position xk + 1.
 Our decision parameter is the circle function (Eqn. 3) evaluated at the midpoint between these two pixels:

pk = fcircle(xk + 1, yk - 1/2) = (xk + 1)^2 + (yk - 1/2)^2 - r^2

 Successive decision parameters are obtained using incremental calculations.
 We obtain a recursive expression for the next decision parameter by evaluating the circle function at sampling position xk+1 + 1 = xk + 2:

pk+1 = pk + 2(xk + 1) + (yk+1^2 - yk^2) - (yk+1 - yk) + 1

 where yk+1 is either yk or yk - 1, depending on the sign of pk.
 Evaluation of the terms 2xk+1 and 2yk+1 can also be done incrementally as

2xk+1 = 2xk + 2
2yk+1 = 2yk - 2

 The initial decision parameter is obtained by evaluating the circle function at the start position (x0, y0) = (0, r):

p0 = fcircle(1, r - 1/2) = 5/4 - r

 If the radius r is specified as an integer, we can simply round p0 to

p0 = 1 - r        (for r an integer)

 since all increments are integers.

ALGORITHM
1. Input radius r and circle center (xc, yc), and obtain the first point on the circumference of a circle centered on the origin as
   (x0, y0) = (0, r)
2. Calculate the initial value of the decision parameter as
   p0 = (5/4) - r
3. At each xk position, starting at k = 0, perform the following test:
   If pk < 0, the next point along the circle centered on (0, 0) is (xk+1, yk) and
   pk+1 = pk + 2xk+1 + 1
   Otherwise, the next point along the circle is (xk+1, yk-1) and
   pk+1 = pk + 2xk+1 + 1 - 2yk+1
   where 2xk+1 = 2xk + 2 and 2yk+1 = 2yk - 2.
4. Determine symmetry points in the other seven octants.
5. Move each calculated pixel position (x, y) onto the circular path centered on (xc, yc) and plot the coordinate values
   x = x + xc, y = y + yc
6. Repeat steps 3 through 5 until x >= y.
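 A minimal C sketch of the midpoint circle steps, using the rounded integer start value p0 = 1 - r (setPixel is an assumed device-level routine):

void setPixel(int x, int y);   /* assumed device-level primitive */

/* plot a computed point in all eight symmetric octant positions */
static void circlePlotPoints(int xc, int yc, int x, int y)
{
    setPixel(xc + x, yc + y);  setPixel(xc - x, yc + y);
    setPixel(xc + x, yc - y);  setPixel(xc - x, yc - y);
    setPixel(xc + y, yc + x);  setPixel(xc - y, yc + x);
    setPixel(xc + y, yc - x);  setPixel(xc - y, yc - x);
}

void circleMidpoint(int xc, int yc, int r)
{
    int x = 0, y = r;
    int p = 1 - r;                       /* rounded initial decision parameter */

    circlePlotPoints(xc, yc, x, y);
    while (x < y) {
        x++;
        if (p < 0)
            p += 2 * x + 1;              /* midpoint inside: keep y */
        else {
            y--;                         /* midpoint outside: step y down */
            p += 2 * (x - y) + 1;
        }
        circlePlotPoints(xc, yc, x, y);
    }
}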


Example:

For a circle of radius r = 10 centered on the origin, the initial decision parameter is p0 = 1 - 10 = -9, and the successive decision parameters and selected pixel positions along the circle path in the first octant are:

k    pk    (xk+1, yk+1)
0    -9    (1, 10)
1    -6    (2, 10)
2    -1    (3, 10)
3     6    (4, 9)
4    -3    (5, 9)
5     8    (6, 8)
6     5    (7, 7)

These positions are then reflected about the line y = x to plot the pixel positions in the first quadrant.


ELLIPSE-DRAWING ALGORITHMS
 An ellipse is an elongated circle.
 Therefore, elliptical curves can be generated by modifying circle-drawing
procedures to take into account the different dimensions of an ellipse along the
major and minor axes.

Properties of Ellipses
 An ellipse is defined as the set of points such that the sum of the distances from two fixed positions (foci) is the same for all points.

 If the distances to the two foci from any point P = (x, y) on the ellipse are labeled d1 and d2, then the general equation of an ellipse can be stated as

d1 + d2 = constant

 We can rewrite the general ellipse equation in the form

Ax^2 + By^2 + Cxy + Dx + Ey + F = 0

 where the coefficients A, B, C, D, E, and F are evaluated in terms of the focal coordinates.


Major Axes
 The major axis is the straight line segment extending from one side of the ellipse
to the other through the foci.

Minor Axes
 The minor axis spans the shorter dimension of the ellipse, bisecting the major axis
at the halfway position (ellipse center) between the two foci.

Polar coordinates
 Using polar coordinates r and θ, we can also describe the ellipse in standard position with the parametric equations:

x = xc + rx cos θ
y = yc + ry sin θ

Symmetry considerations
 Symmetry considerations can be used to further reduce computations.
 An ellipse in standard position is symmetric between quadrants, but unlike a circle,
it is not symmetric between the two octants of a quadrant.
 Thus, we must calculate pixel positions along the elliptical arc throughout one
quadrant, then we obtain positions in the remaining three quadrants by symmetry
as in the diagram.


Midpoint Ellipse Algorithm


 We determine points (x, y) for an ellipse in standard position centered on the origin.
 Then we shift the points so the ellipse is centered at (xc, yc).
 To display the ellipse in nonstandard position, we could then rotate the ellipse
about its center coordinates to reorient the major and minor axes.
 The midpoint ellipse method is applied throughout the first quadrant in
two parts.
 Following figure shows the division of the first quadrant according to the
slope of an ellipse with rx < ry.

 We define an ellipse function with (xc, yc) = (0, 0) as

fellipse(x, y) = ry^2 x^2 + rx^2 y^2 - rx^2 ry^2        ... (1)

 which has the following properties:

fellipse(x, y) < 0,  if (x, y) is inside the ellipse boundary
fellipse(x, y) = 0,  if (x, y) is on the ellipse boundary
fellipse(x, y) > 0,  if (x, y) is outside the ellipse boundary

 Thus, the ellipse function fellipse(x, y) serves as the decision parameter in the midpoint algorithm.
 At each sampling position, we select the next pixel along the ellipse path according
to the sign of the ellipse function evaluated at the midpoint between the two
candidate pixels.
 The ellipse slope is calculated from Eqn. (1) as

dy/dx = - (2 ry^2 x) / (2 rx^2 y)

 At the boundary between region 1 and region 2, dy/dx = -1 and

2 ry^2 x = 2 rx^2 y

 Therefore, we move out of region 1 whenever

2 ry^2 x >= 2 rx^2 y
 Following figure shows the midpoint between the two candidate pixels at sampling
position xk + 1 in the first region.


 Assuming position (xk, yk) has been selected at the previous step, we determine the next position along the ellipse path by evaluating the decision parameter at this midpoint:

p1k = fellipse(xk + 1, yk - 1/2) = ry^2 (xk + 1)^2 + rx^2 (yk - 1/2)^2 - rx^2 ry^2

 At the next sampling position (xk+1 + 1 = xk + 2), the decision parameter for region 1 is evaluated as

p1k+1 = p1k + 2 ry^2 xk+1 + ry^2                      if p1k < 0
p1k+1 = p1k + 2 ry^2 xk+1 - 2 rx^2 yk+1 + ry^2        otherwise

 In region 1, the initial value of the decision parameter is obtained by evaluating the ellipse function at the start position (x0, y0) = (0, ry):

p10 = ry^2 - rx^2 ry + (1/4) rx^2

 Over region 2, we sample at unit steps in the negative y direction, and the midpoint is now taken between horizontal pixels at each step.
 For this region, the decision parameter is evaluated as

p2k = fellipse(xk + 1/2, yk - 1) = ry^2 (xk + 1/2)^2 + rx^2 (yk - 1)^2 - rx^2 ry^2

 To determine the relationship between successive decision parameters in region 2, we evaluate the ellipse function at the next sampling step yk+1 - 1 = yk - 2:

p2k+1 = p2k - 2 rx^2 yk+1 + rx^2                      if p2k > 0
p2k+1 = p2k + 2 ry^2 xk+1 - 2 rx^2 yk+1 + rx^2        otherwise

 When we enter region 2, the initial position (x0, y0) is taken as the last position selected in region 1, and the initial decision parameter in region 2 is then

p20 = fellipse(x0 + 1/2, y0 - 1)

ALGORITHM
1. Input rx, ry and ellipse center (xc, yc), and obtain the first point on the circumference of an ellipse centered on the origin as
   (x0, y0) = (0, ry)
2. Calculate the initial value of the decision parameter in region 1 as
   p10 = ry^2 - rx^2 ry + (1/4) rx^2
3. At each xk position in region 1, starting at k = 0, perform the following test:
   If p1k < 0, the next point along the ellipse centered on (0, 0) is (xk+1, yk) and
   p1k+1 = p1k + 2 ry^2 xk+1 + ry^2
   Otherwise, the next point along the ellipse is (xk+1, yk-1) and
   p1k+1 = p1k + 2 ry^2 xk+1 - 2 rx^2 yk+1 + ry^2
   with
   2 ry^2 xk+1 = 2 ry^2 xk + 2 ry^2 and 2 rx^2 yk+1 = 2 rx^2 yk - 2 rx^2
   and continue until 2 ry^2 x >= 2 rx^2 y.
4. Calculate the initial value of the decision parameter in region 2 using the last point (x0, y0) calculated in region 1 as
   p20 = ry^2 (x0 + 1/2)^2 + rx^2 (y0 - 1)^2 - rx^2 ry^2
5. At each yk position in region 2, starting at k = 0, perform the following test:
   If p2k > 0, the next point along the ellipse centered on (0, 0) is (xk, yk-1) and
   p2k+1 = p2k - 2 rx^2 yk+1 + rx^2
   Otherwise, the next point along the ellipse is (xk+1, yk-1) and
   p2k+1 = p2k + 2 ry^2 xk+1 - 2 rx^2 yk+1 + rx^2
   using the same incremental calculations for x and y as in region 1.
6. Determine symmetry points in the other three quadrants.
7. Move each calculated pixel position (x, y) onto the elliptical path centered on (xc, yc) and plot the coordinate values
   x = x + xc, y = y + yc
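 A minimal C sketch of the two-region midpoint ellipse steps (setPixel is an assumed device-level routine; the integer division by 4 in the start values is a rounding, as with the circle):

void setPixel(int x, int y);   /* assumed device-level primitive */

/* plot a computed point in the four symmetric quadrant positions */
static void ellipsePlotPoints(int xc, int yc, int x, int y)
{
    setPixel(xc + x, yc + y);  setPixel(xc - x, yc + y);
    setPixel(xc + x, yc - y);  setPixel(xc - x, yc - y);
}

void ellipseMidpoint(int xc, int yc, int rx, int ry)
{
    long rx2 = (long) rx * rx, ry2 = (long) ry * ry;
    long x = 0, y = ry;
    long px = 0;                         /* running term 2 ry^2 x */
    long py = 2 * rx2 * y;               /* running term 2 rx^2 y */
    long p;

    ellipsePlotPoints(xc, yc, (int) x, (int) y);

    /* region 1: slope magnitude < 1 */
    p = ry2 - rx2 * ry + rx2 / 4;
    while (px < py) {
        x++;  px += 2 * ry2;
        if (p < 0)
            p += ry2 + px;
        else {
            y--;  py -= 2 * rx2;
            p += ry2 + px - py;
        }
        ellipsePlotPoints(xc, yc, (int) x, (int) y);
    }

    /* region 2: slope magnitude >= 1 */
    p = ry2 * (x * x + x) + ry2 / 4 + rx2 * (y - 1) * (y - 1) - rx2 * ry2;
    while (y > 0) {
        y--;  py -= 2 * rx2;
        if (p > 0)
            p += rx2 - py;
        else {
            x++;  px += 2 * ry2;
            p += rx2 - py + px;
        }
        ellipsePlotPoints(xc, yc, (int) x, (int) y);
    }
}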


A plot of the selected positions around the ellipse boundary within the first
quadrant is shown in Fig. 3-23.

ATTRIBUTES OF OUTPUT PRIMITIVES


 Any parameter that affects the way a primitive is to be displayed is referred to as
an attribute parameter.
 Some attribute parameters, such as color and size, determine the fundamental characteristics of a primitive.
 Others specify how the primitive is to be displayed under special conditions.
 For example, lines can be

dotted
or dashed,
fat or thin, and
blue or orange.


LINE ATTRIBUTES
 Basic attributes of a straight line segment are its

type,
its width, and
its color.
 In some graphics packages, lines can also be displayed using selected pen or brush
options

Line Type
 Possible selections for the line-type attribute include

solid lines,
dashed lines,
and dotted lines
 We modify a line drawing algorithm to generate such lines by setting the length
and spacing of displayed solid sections along the line path.

 To set line type attributes in a program, a user invokes the function


setLineType(lt);
 where parameter lt is assigned a positive integer value of 1, 2, 3, or 4 to generate lines that are solid, dashed, dotted, or dash-dotted, respectively.


Solid Line
Dotted Line
Dashed Line
Dash-Dotted Line

Line Width
 Implementation of line-width options depends on the capabilities of the output device.
 A heavy line on a video monitor could be displayed as adjacent parallel lines, whereas a pen plotter might require pen changes.
 A line-width command is used to set the current line-width value in the attribute list.
 This value is then used by line-drawing algorithms to control the thickness of lines.
 We set the line-width attribute with the command:
SetLinewidthScaleFactor(lw);


 lw is assigned a positive number to indicate the relative width of the line to be displayed.
 A value of 1 specifies a standard-width line.
 Values greater than 1 produce lines thicker than the standard.
 For a raster implementation, a standard-width line is generated with single pixels at each sample position, as in the Bresenham algorithm.
 Other-width lines are displayed by plotting additional pixels along adjacent parallel line paths.
 Other methods for producing thick lines include displaying the line as a filled rectangle or generating the line with a selected pen or brush pattern.

Pen and Brush Options


 With some packages, lines can be displayed with pen or brush selections.
 Options in this category include

shape,
size, and
pattern.
 Some possible pen or brush shapes are given in the following figure.


Line Color
 When a system provides color (or intensity) options, a parameter giving the current
color index is included in the list of system-attribute values.
 A polyline routine displays a line in the current color by setting this color value in
the frame buffer at pixel locations along the line path using the setpixel procedure.
 The number of color choices depends on the number of bits available per pixel in
the frame buffer.
 The function is
SetPolylineColorIndex(lc)
 where lc is an integer value representing the color parameter.
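 As an illustrative sketch (the polyline routine and its arguments are assumptions, not from the text), the attribute functions above might be combined before drawing:

setLineType(2);                    /* dashed */
SetLinewidthScaleFactor(2);        /* twice the standard width */
SetPolylineColorIndex(5);          /* color code 5 from the system table */
polyline(n, points);               /* assumed routine drawing n connected points */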


CURVE ATTRIBUTES
 Parameters for curve attributes are the same as those for line segments.
 We can display curves with varying colors, widths, dot-dash patterns, and
available pen or brush options.
 Methods for adapting curve-drawing algorithms to accommodate attribute
selections are similar to those for line drawing.
 Method for displaying thick curves is to fill in the area between two parallel curve
paths, whose separation distance is equal to the desired width.

COLOR AND GRAYSCALE LEVELS


 Various color and intensity-level options can be made available to a user, depending on the capabilities of a particular system.
 Options are numerically coded with values ranging from 0 through the positive integers.
 For CRT monitors, these color codes are then converted to intensity-level settings for the electron beams.
 Color information can be stored in the frame buffer in two ways:
We can store color codes directly in the frame buffer, or
we can put the color codes in a separate table and use pixel values as an index into this table.
Direct storage scheme
 With the direct storage scheme, whenever a particular color code is specified in an
application program, the corresponding binary value is placed in the frame buffer
for each-component pixel in the output primitives to be displayed in that color.


 A minimum number of colors can be provided in this scheme with 3 bits of storage per pixel, as shown in the following table:

Color code (R G B)    Displayed color
0 0 0                 Black
0 0 1                 Blue
0 1 0                 Green
0 1 1                 Cyan
1 0 0                 Red
1 0 1                 Magenta
1 1 0                 Yellow
1 1 1                 White

 Each of the three bit positions is used to control the intensity level (either on or
off) of the corresponding electron gun in an RGB monitor.

The leftmost bit controls the red gun,


the middle bit controls the green gun, and
the rightmost bit controls the blue gun
 Adding more bits per pixel to the frame buffer increases the number of color
choices.
 With 6 bits per pixel, 64 color values are available for each screen pixel.
 With a resolution of 1024 by 1024, a full-color (24 bits per pixel) RGB system needs 3 megabytes of storage for the frame buffer.
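 The storage figure can be checked directly: 1024 × 1024 pixels × 3 bytes per pixel (24 bits) = 3,145,728 bytes, i.e., 3 megabytes.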


 Color tables are an alternate means for providing extended color capabilities,
without requiring large frame buffers.
 In particular, often use color tables to reduce frame-buffer storage requirements.
Grayscale
 With monitors that have no color capability, color functions can be used in an
application program to set the shades of gray, or grayscale, for displayed
primitives.
 Numeric values over the range from 0 to 1 can be used to specify grayscale levels,
which are then converted to appropriate binary codes for storage in the raster.
 This allows the intensity settings to be easily adapted to systems with differing
grayscale capabilities.
 The following table lists the specifications for intensity codes for a four-level grayscale system:

Intensity code    Stored binary value    Displayed grayscale
0.0               00                     Black
0.33              01                     Dark gray
0.67              10                     Light gray
1.0               11                     White
 In this example, any intensity input value near 0.33 would be stored as the binary
value 01 in the frame buffer, and pixels with this value would be displayed as dark
gray.
 With 3 bits per pixel, we can accommodate 8 gray levels;
 With 8 bits per pixel would give us 256 shades of gray.


AREA-FILL ATTRIBUTES
 Options for filling a defined region include a choice between a solid color or a
patterned fill.
 These fill options can be applied to polygon regions or to areas defined with
curved boundaries.
 In addition, areas can be painted using various brush styles, colors, and
transparency parameters.

Fill Styles
 Areas are displayed with three basic fill styles:

hollow with a color border,


filled with a solid color, or
filled with a specified pattern or design.

 A basic fill style is selected with the function
setInteriorStyle(fs);
 Another value for fill style is hatch, which is used to fill an area with selected hatching patterns: parallel lines or crossed lines.


CHARACTER ATTRIBUTES
 The appearance of displayed characters is controlled by attributes such as

font,
size,
color, and orientation.
 Attributes can be set both for entire character strings (text) and for individual
characters defined as marker symbols.
Text Attribute
 There are a great many text options that can be made available to graphics
programmers.
 First of all, there is the choice of font (or typeface).
 which is a set of characters with a particular design style such as

Arial,
Courier,
Impact,
TimesNewRoman, and various special symbol groups.
 The characters in a selected font can also be displayed with assorted styles:
boldface,
underline,
italics.
 The corresponding function for setting font is
 SetTextFont();
 Color settings for displayed text are stored in the system attribute list.
 SetTextColorIndex(tc)
 Where tc specifies the color code.
 We can adjust text size by scaling the overall dimensions (height and width) of
characters or by scaling only the character width.


2D TRANSFORMATION
 The basic geometric transformations are

translation,
rotation, and
scaling.
 Other transformations that are often applied to objects include

reflection and
shear.

Translation
 A translation is applied to an object by repositioning it along a straight-line path
from one coordinate location to another.
 We translate a two-dimensional point by adding translation distances, tx and ty, to the original coordinate position (x, y) to move the point to a new position (x', y'):

x' = x + tx,    y' = y + ty
 The translation distance pair (tx,ty) is called a translation vector or shift vector.


 We can express the translation equations as a single matrix equation by using column vectors to represent coordinate positions and the translation vector:

P = (x, y)^T,    P' = (x', y')^T,    T = (tx, ty)^T

 This allows us to write the two-dimensional translation equations in the matrix form:

P' = P + T
 Translation is a rigid-body transformation that moves objects without deformation.


 That is, every point on the object is translated by the same amount.
 A straight line segment is translated by applying the transformation equation to each of the line endpoints and redrawing the line between the new endpoint positions.
 Polygons are translated by adding the translation vector to the coordinate position
of each vertex and regenerating the polygon using the new set of vertex
coordinates and the current attribute settings.
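 This vertex-by-vertex translation can be sketched in C (the Point type and function name are illustrative assumptions):

typedef struct { float x, y; } Point;

/* translate every vertex of a polygon by the translation vector (tx, ty) */
void translatePolygon(Point *verts, int nVerts, float tx, float ty)
{
    int k;
    for (k = 0; k < nVerts; k++) {
        verts[k].x += tx;   /* x' = x + tx */
        verts[k].y += ty;   /* y' = y + ty */
    }
    /* the polygon is then regenerated from the new vertex list */
}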
 Following figure illustrates the application of a specified translation vector to
move an object from one position to another.


 Similar methods are used to translate curved objects.


 To change the position of a circle or ellipse, we translate the center coordinates
and redraw the figure in the new location.
 We translate other curves (for example, splines) by displacing the coordinate
positions defining the objects, then we reconstruct the curve paths using the
translated coordinate points.

ROTATION
 A two-dimensional rotation is applied to an object by repositioning it along a
circular path in the xy plane.
 To generate a rotation, we specify a rotation angle θ and the position (xr, yr) of the rotation point (or pivot point) about which the object is to be rotated.


 Positive values for the rotation angle define counterclockwise rotations about the pivot point, as in the figure, and negative values rotate objects in the clockwise direction.
 This transformation can also be described as a rotation about a rotation axis that is
perpendicular to the xy plane and passes through the pivot point.
 We first determine the transformation equations for rotation of a point position P
when the pivot point is at the coordinate origin.
 The angular and coordinate relationships of the original and transformed point
positions are shown in Fig.


 In this figure, r is the constant distance of the point from the origin, angle φ is the original angular position of the point from the horizontal, and θ is the rotation angle.
 Using standard trigonometric identities, we can express the transformed coordinates in terms of angles φ and θ as

x' = r cos(φ + θ) = r cos φ cos θ - r sin φ sin θ
y' = r sin(φ + θ) = r cos φ sin θ + r sin φ cos θ        ... (1)

 The original coordinates of the point in polar coordinates are

x = r cos φ,    y = r sin φ        ... (2)

 Substituting Eqn. (2) into (1), we obtain

x' = x cos θ - y sin θ
y' = x sin θ + y cos θ        ... (3)

 We can write the rotation equations in the matrix form:

P' = R · P        ... (4)

 where the rotation matrix is

R = | cos θ   -sin θ |
    | sin θ    cos θ |
 When coordinate positions are represented as row vectors instead of column vectors, the matrix product in rotation equation (4) is transposed, so that the transformed row coordinate vector [x' y'] is calculated as

P'^T = (R · P)^T = P^T · R^T
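 A rough C sketch (illustrative, not from the text) of rotating a point about an arbitrary pivot (xr, yr), generalizing Eqn. (3):

#include <math.h>

typedef struct { float x, y; } Point;

/* rotate p counterclockwise by theta radians about the pivot (xr, yr) */
Point rotatePoint(Point p, float xr, float yr, float theta)
{
    Point q;
    float c = cosf(theta), s = sinf(theta);
    q.x = xr + (p.x - xr) * c - (p.y - yr) * s;
    q.y = yr + (p.x - xr) * s + (p.y - yr) * c;
    return q;
}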


SCALING
A scaling transformation alters the size of an object.
 This operation can be carried out for polygons by multiplying the coordinate values (x, y) of each vertex by scaling factors sx and sy to produce the transformed coordinates (x', y'):

x' = x · sx,    y' = y · sy        ... (5)

 Scaling factor sx scales objects in the x direction, while sy scales in the y direction.
 The transformation equations (5) can also be written in the matrix form:

| x' |   | sx   0 |   | x |
| y' | = | 0   sy | · | y |        ... (6)

 or

P' = S · P        ... (7)

 where S is the 2 by 2 scaling matrix in Eqn. (6).


 Any positive numeric values can be assigned to the scaling factors sx and sy.
 Values less than 1 reduce the size of objects; values greater than 1 produce an enlargement.
 Specifying a value of 1 for both sx and sy leaves the size of objects unchanged.
 When sx and sy are assigned the same value, a uniform scaling is produced that maintains relative object proportions.
 Unique values for sx and sy result in a differential scaling.

 The following figure shows the change of a square (a) into a rectangle (b) with scaling factors sx = 2 and sy = 1.
 The next figure illustrates scaling a line by assigning the value 0.5 to both sx and sy in Eqn. (6).
 Both the line length and the distance from the origin are reduced by a factor of 1/2.

MATRIX REPRESENTATIONS AND HOMOGENEOUS COORDINATES


 Many graphics applications involve sequences of geometric transformations.
 An animation, for example, might require an object to be translated and rotated at
each increment of the motion.


 In design and picture construction applications, we perform


translations,
rotations, and
scaling
to fit the picture components into their proper positions.
 Here we consider how the matrix representations can be used so that such
transformation
sequences can be efficiently processed.
 The basic transformations can be expressed in the general matrix form

P' = M1 · P + M2        ... (1)

 with coordinate positions P and P' represented as column vectors.
 Matrix M1 is a 2 by 2 array containing multiplicative factors, and M2 is a two-element column matrix containing translational terms.
 For translation, M1 is the identity matrix.

COMPOSITE TRANSFORMATIONS
 With the matrix representations of the previous section, we can set up a matrix for
any sequence of transformations as a composite transformation matrix by
calculating the matrix product of the individual transformations.
 Forming products of transformation matrices is often referred to as a
concatenation, or composition, of matrices.


HOMOGENEOUS COORDINATES
 The term homogeneous is used in mathematics to refer to the effect of this
representation on Cartesian equations.
 When a Cartesian point (x, y) is converted to a homogeneous representation (xh, yh, h), equations containing x and y, such as f(x, y) = 0, become homogeneous equations in the three parameters xh, yh, and h.
 Expressing positions in homogeneous coordinates allows us to represent all
geometric transformation equations as matrix multiplications.
 Coordinates are represented with three-element column vectors, and
transformation operations are written as 3 by 3 matrices.
 For translation, we have

| x' |   | 1  0  tx |   | x |
| y' | = | 0  1  ty | · | y |        ... (2)
| 1  |   | 0  0  1  |   | 1 |

 which we can write in the abbreviated form

P' = T(tx, ty) · P        ... (3)

 with T(tx, ty) as the 3 by 3 translation matrix in Eqn. (2).


 Similarly, rotation transformation equations about the coordinate origin are now written as

| x' |   | cos θ  -sin θ  0 |   | x |
| y' | = | sin θ   cos θ  0 | · | y |        ... (4)
| 1  |   | 0       0      1 |   | 1 |

 or as

P' = R(θ) · P        ... (5)

 Finally, a scaling transformation relative to the coordinate origin is now expressed as the matrix multiplication

| x' |   | sx  0   0 |   | x |
| y' | = | 0   sy  0 | · | y |        ... (6)
| 1  |   | 0   0   1 |   | 1 |

 or as

P' = S(sx, sy) · P        ... (7)
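 To make the 3 by 3 representation concrete, here is a minimal C sketch (type and function names are illustrative assumptions) of matrix concatenation and of applying a matrix to a homogeneous point:

typedef float Mat3[3][3];   /* m[row][col], homogeneous 2D transform */

void matIdentity(Mat3 m)
{
    int r, c;
    for (r = 0; r < 3; r++)
        for (c = 0; c < 3; c++)
            m[r][c] = (r == c) ? 1.0f : 0.0f;
}

/* out = a · b : concatenation (composition) of two transformations */
void matMultiply(const Mat3 a, const Mat3 b, Mat3 out)
{
    int r, c, k;
    for (r = 0; r < 3; r++)
        for (c = 0; c < 3; c++) {
            out[r][c] = 0.0f;
            for (k = 0; k < 3; k++)
                out[r][c] += a[r][k] * b[k][c];
        }
}

/* apply m to the homogeneous column vector (x, y, 1) */
void transformPoint(const Mat3 m, float *x, float *y)
{
    float nx = m[0][0] * *x + m[0][1] * *y + m[0][2];
    float ny = m[1][0] * *x + m[1][1] * *y + m[1][2];
    *x = nx;  *y = ny;
}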

OTHER TRANSFORMATIONS

 Basic transformations such as translation, rotation, and scaling are included in most graphics packages.
 Some packages provide a few additional transformations that are useful in certain
applications.
 Two such transformations are
reflection and
shear.

REFLECTION
 A reflection is a transformation that produces a mirror image of an object.
 The mirror image for a two-dimensional reflection is generated relative to an axis of reflection by rotating the object 180° about the reflection axis.


 We can choose an axis of reflection in the xy plane or perpendicular to the xy plane.
 When the reflection axis is a line in the xy plane, the rotation path about this axis is
in a plane perpendicular to the xy plane.
 For reflection axes that are perpendicular to the xy plane, the rotation path is in the
xy plane.
 Following are examples of some common reflections.
 Reflection about the line y = 0 (the x axis) is accomplished with the transformation matrix

| 1   0   0 |
| 0  -1   0 |
| 0   0   1 |
 This transformation keeps x values the same, but "flips" the y values of coordinate
positions.
 The resulting orientation of an object after it has been reflected about the x axis is
shown in Fig.


 A reflection about the y axis flips x coordinates while keeping y coordinates the
same.
 The matrix for this transformation is

| -1   0   0 |
|  0   1   0 |
|  0   0   1 |
 The following figure illustrates the change in position of an object that has been reflected about the line x = 0.

 We flip both the x and y coordinates of a point by reflecting relative to an axis that
is perpendicular to the xy plane and that passes through the coordinate origin.
 This transformation, referred to as a reflection relative to the coordinate origin, has the matrix representation:

| -1   0   0 |
|  0  -1   0 |
|  0   0   1 |
 An example of reflection about the origin is shown in Fig.


SHEAR
 A transformation that distorts the shape of an object such that the transformed
shape appears as if the object were composed of internal layers that had been
caused to slide over each other is called a shear.
 Two common shearing transformations are those that shift coordinate x values and
those that shift y values.
 An x-direction shear relative to the x axis is produced with the transformation matrix

| 1   shx  0 |
| 0   1    0 |
| 0   0    1 |

 which transforms coordinate positions as

x' = x + shx · y,    y' = y
 Any real number can be assigned to the shear parameter shx.


 A coordinate position (x, y) is then shifted horizontally by an amount proportional
to its distance (y value) from the x axis (y = 0).
 Setting shx to 2, for example, changes the square in the following figure into a parallelogram.
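 For instance, with shx = 2 the unit square with vertices (0, 0), (1, 0), (1, 1), (0, 1) maps to (0, 0), (1, 0), (3, 1), (2, 1): each vertex moves right by 2y.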

 Negative values for shx shift coordinate positions to the left.


 We can generate x-direction shears relative to other reference lines (y = yref) with

| 1   shx  -shx · yref |
| 0   1     0          |
| 0   0     1          |

 A y-direction shear relative to the line x = xref is generated with the transformation matrix

| 1    0   0           |
| shy  1   -shy · xref |
| 0    0   1           |

TRANSFORMATION FUNCTIONS
 Separate functions are convenient for simple transformation operations, and a
composite function can provide method for specifying complex transformation
sequences.
 Individual commands for generating the basic transformation matrices are
translate (translateVector, matrixTranslate)
rotate (theta, matrixRotate)
scale (scaleVector, matrixScale)
composeMatrix (matrix2, matrix1, matrixOut)

 Each of these functions produces a 3 by 3 transformation matrix that can then be used to transform coordinate positions expressed as homogeneous column vectors.
 Parameter translateVector is a pointer to the pair of translation distances tx and ty.
 Similarly, parameter scaleVector specifies the pair of scaling values sx and sy.
 Rotate and scale matrices (matrixRotate and matrixScale) transform with respect to the coordinate origin.
 A composite transformation matrix to perform a combination of scaling, rotation, and translation is produced with the function
buildTransformationMatrix (referencePoint, translateVector, theta, scaleVector, matrix)


TWO DIMENSIONAL VIEWING


 A graphics package allows a user to specify which part of a defined picture is to be displayed and where that part is to be placed on the display device.
 Transformations from world to device coordinates involve translation, rotation,
and scaling operations, as well as procedures for deleting those parts of the picture
that are outside the limits of a selected display area.
 A world-coordinate area selected for display is called a window.
 An area on a display device to which a window is mapped is called a viewport.
 The window defines what is to be viewed; the viewport defines where it is to be
displayed.

 In general, the mapping of a part of a world-coordinate scene to device coordinates is referred to as a viewing transformation.
 Sometimes the two-dimensional viewing transformation is simply referred to as
the window-to-viewport transformation or the windowing transformation.

 Following figure illustrates the mapping of a picture section that falls within a
rectangular window onto a designated rectangular viewport.


Viewing-Transformation
 Some graphics packages that provide window and viewport operations allow only
standard rectangles.
 But a more general approach is to allow the rectangular window to have any orientation.
 In this case, we carry out the viewing transformation in several steps, as indicated
in Fig.

 First, we construct the scene in world coordinates using the output primitives and
attributes.


 Next, to obtain a particular orientation for the window, we can set up a two-dimensional viewing-coordinate system in the world-coordinate plane, and define a window in the viewing-coordinate system.
 The viewing-coordinate reference frame provides a method for setting up arbitrary orientations for rectangular windows.
 Once the viewing reference frame is established, we can transform descriptions in
world coordinates to viewing coordinates.
 We then define a viewport in normalized coordinates (in the range from 0 to 1 )
and map the viewing-coordinate description of the scene to normalized
coordinates.
 At the final step, all parts of the picture that lie outside the viewport are clipped, and the contents of the viewport are transferred to device coordinates.
 The following figure illustrates a rotated viewing-coordinate reference frame and the mapping to normalized coordinates.


WINDOW-TO-VIEWPORT COORDINATE TRANSFORMATION


 Once object descriptions have been transferred to the viewing reference frame, we
choose the window extents in viewing coordinates and select the viewport limits in
normalized coordinates.
 Object descriptions are then transferred to normalized device coordinates.
 We do this using a transformation that maintains the same relative placement of
objects in normalized space as they had in viewing coordinates.
 If a coordinate position is at the center of the viewing window, for instance, it will
be displayed at the center of the viewport.
 Following figure illustrates the window-to-viewport mapping.

 A point at position (xw, yw) in the window is mapped into position (xv, yv) in the
associated viewport.
 To maintain the same relative placement in the viewport as in the window, we require that

(xv - xvmin) / (xvmax - xvmin) = (xw - xwmin) / (xwmax - xwmin)
(yv - yvmin) / (yvmax - yvmin) = (yw - ywmin) / (ywmax - ywmin)

 Solving these expressions for the viewport position (xv, yv), we have

xv = xvmin + (xw - xwmin) · sx
yv = yvmin + (yw - ywmin) · sy

 where the scaling factors are

sx = (xvmax - xvmin) / (xwmax - xwmin)
sy = (yvmax - yvmin) / (ywmax - ywmin)
 Above equations can also be derived with a set of transformations that converts the
window area into the viewport area.
 This conversion is performed with the following sequence of transformations:
1. Perform a scaling transformation using a fixed-point position of (xwmin, ywmin) that scales the window area to the size of the viewport.

2. Translate the scaled window area to the position of the viewport.
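 The mapping can be sketched in C (the Rect type and names are illustrative assumptions):

typedef struct { float xmin, ymin, xmax, ymax; } Rect;

/* map world point (xw, yw) in window w to (xv, yv) in viewport v */
void windowToViewport(Rect w, Rect v, float xw, float yw,
                      float *xv, float *yv)
{
    /* scaling factors from the equations above */
    float sx = (v.xmax - v.xmin) / (w.xmax - w.xmin);
    float sy = (v.ymax - v.ymin) / (w.ymax - w.ymin);

    *xv = v.xmin + (xw - w.xmin) * sx;
    *yv = v.ymin + (yw - w.ymin) * sy;
}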


CLIPPING OPERATIONS
 Generally, any procedure that identifies those portions of a picture that are either
inside or outside of a specified region of space is referred to as a clipping
algorithm, or simply clipping.
 The region against which an object is to be clipped is called a clip window.
 For the viewing transformation, we want to display only those picture parts that are
within the window area.
 Everything outside the window is discarded.
 Clipping algorithms can be applied in world coordinates, so that only the contents
of the window interior are mapped to device coordinates.
 Alternatively, the complete world-coordinate picture can be mapped first to device
coordinates, or normalized device coordinates, then clipped against the viewport
boundaries.

 we consider algorithms for clipping the following primitive types


Point Clipping
Line Clipping (straight-line segments)
Area Clipping (polygons)
Curve Clipping
Text Clipping


LINE CLIPPING
 Following figure illustrates possible relationships between line positions and a
standard
rectangular clipping region.

 A line clipping procedure involves several parts.


 First, we can test a given line segment to determine whether it lies completely
inside the clipping window.
 If it does not, we try to determine whether it lies completely outside the window.
 Finally, if we cannot identify a line as completely inside or completely outside, we
must perform intersection calculations with one or more clipping boundaries.
 We process lines through the "inside-outside'' tests by checking the line endpoints.
 A line with both endpoints inside all clipping boundaries, such as the line from P1,
to P2, is saved.
 A line with both endpoints outside any one of the clip boundaries (line P3P4 in
above Fig.) is outside the window.


 All other lines cross one or more clipping boundaries, and may require calculation of
multiple intersection points.
 To minimize calculations, we try to devise clipping algorithms that can efficiently
identify outside lines and reduce intersection calculations.
 For a line segment with endpoints (x1, y1) and (x2, y2), with one or both endpoints outside the clipping rectangle, we can use the parametric representation

x = x1 + u(x2 - x1)
y = y1 + u(y2 - y1),        0 ≤ u ≤ 1

 The parametric representation could be used to determine values of parameter u for intersections with the clipping boundary coordinates.
 If the value of u for an intersection with a rectangle boundary edge is outside the
range 0 to 1, the line does not enter the interior of the window at that boundary.
 If the value of u is within the range from 0 to 1, the line segment does indeed cross
into the clipping area.
 This method can be applied to each clipping boundary edge in turn to determine
whether any part of the line segment is to be displayed.
 Line segments that are parallel to window edges can be handled as special cases.
 Clipping line segments with these parametric tests requires a good deal of computation,
and faster approaches to clipping are possible.
 A number of efficient line clippers have been developed.


COHEN-SUTHERLAND LINE CLIPPING


 This is one of the oldest and most popular line-clipping procedures.
 Generally, the method speeds up the processing of line segments by performing initial tests that reduce the number of intersections that must be calculated.
 Every line endpoint in a picture is assigned a four-digit binary code, called a region code, that identifies the location of the point relative to the boundaries of the clipping rectangle.
 Regions are set up in reference to the boundaries as shown in Following fig.

 Each bit position in the region code is used to indicate one of the four relative
coordinate
positions of the point with respect to the clip window:
to the left,
right,
top, or
bottom.
 By numbering the bit positions in the region code as 1 through 4 from right to left, the coordinate regions can be correlated with the bit positions as

bit 1: left
bit 2: right
bit 3: below
bit 4: above
 A value of 1 in any bit position indicates that the point is in that relative position;
 otherwise, the bit position is set to 0.
 If a point is within the clipping rectangle, the region code is 0000.
 A point that is below and to the left of the rectangle has a region code of 0101.
 Bit values in the region code are determined by comparing endpoint coordinate
values (x, y) to the clip boundaries.
 Bit 1 is set to 1 if x < xwmin.
 The other three bit values can be determined using similar comparisons.
 For languages in which bit manipulation is possible, region-code bit values can be
determined
with the following two steps:
(1) Calculate differences between endpoint coordinates and clipping boundaries.
(2) Use the resultant sign bit of each difference calculation to set the corresponding
value in
the region code.
Bit 1 is the sign bit of x - xwmin;
Bit 2 is the sign bit of xwmax - x;
Bit 3 is the sign bit of y - ywmin;
Bit 4 is the sign bit of ywmax - y.
 Once we have established region codes for all line endpoints, we can quickly
determine which lines are completely inside the clip window and which are clearly
outside.


 Any lines that are completely contained within the window boundaries have a
region code of 0000 for both endpoints, and we accept these lines.
 Any lines that have a 1 in the same bit position in the region codes for each
endpoint are completely outside the clipping rectangle, and we reject these lines.
 We would discard the line that has a region code of 1001 for one endpoint and a
code of 0101 for the other endpoint.
 Both endpoints of this line are left of the clipping rectangle, as indicated by the 1
in the first bit position of each region code.
 A method that can be used to test lines for total clipping is to perform the logical
and operation with both region codes.
 If the result is not 0000, the line is completely outside the clipping region.
 Lines that cannot be identified as completely inside or completely outside a clip
window by these tests are checked for intersection with the window boundaries.
 As shown in figure, such lines may or may not cross into the window interior.


 We begin the clipping process for a line by comparing an outside endpoint to a clipping boundary to determine how much of the line can be discarded.
 Then the remaining part of the Line is checked against the other boundaries, and
we continue until either the line is totally discarded or a section is found inside the
window.
 We set up our algorithm to check line endpoints against clipping boundaries in the
order left, right, bottom, top.
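 The region-code computation and the trivial accept/reject tests can be sketched in C (names are illustrative assumptions):

#define LEFT_EDGE   0x1   /* bit 1 */
#define RIGHT_EDGE  0x2   /* bit 2 */
#define BOTTOM_EDGE 0x4   /* bit 3 */
#define TOP_EDGE    0x8   /* bit 4 */

/* compute the region code of (x, y) against the clip window */
unsigned char encode(float x, float y,
                     float xwmin, float ywmin, float xwmax, float ywmax)
{
    unsigned char code = 0x0;
    if (x < xwmin) code |= LEFT_EDGE;
    if (x > xwmax) code |= RIGHT_EDGE;
    if (y < ywmin) code |= BOTTOM_EDGE;
    if (y > ywmax) code |= TOP_EDGE;
    return code;
}

/* trivial accept: both codes are 0000; trivial reject: logical AND is nonzero */
int trivialAccept(unsigned char c1, unsigned char c2) { return (c1 | c2) == 0; }
int trivialReject(unsigned char c1, unsigned char c2) { return (c1 & c2) != 0; }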

POLYGON CLIPPING
 To clip polygons, we need to modify the line-clipping procedures.
 A polygon boundary processed with a line clipper may be displayed as a series of
unconnected line segments depending on the orientation of the polygon to the
clipping window.

 What we really want to display is a bounded area after clipping, as in Fig.


 For polygon clipping, we require an algorithm that will generate one or more
closed areas that are then scan converted for the appropriate area fill.
 The output of a polygon clipper should be a sequence of vertices that defines the
clipped polygon boundaries.

Sutherland-Hodgeman Polygon Clipping


 We can correctly clip a polygon by processing the polygon boundary as a whole
against each window edge.
 This could be accomplished by processing all polygon vertices against each clip
rectangle boundary in turn.
 Beginning with the initial set of polygon vertices, we could first clip the polygon
against the left rectangle boundary to produce a new sequence of vertices.
 The new set of vertices could then be successively passed to a right boundary clipper, a bottom boundary clipper, and a top boundary clipper, as in the figure.


 At each step, a new sequence of output vertices is generated and passed to the next
window boundary clipper.
 There are four possible cases when processing vertices in sequence around the
perimeter of a polygon.
 As each pair of adjacent polygon vertices is passed to a window boundary clipper,
we make the following tests:
1. If the first vertex is outside the window boundary and the second vertex is
inside, both the intersection point of the polygon edge with the window
boundary and the second vertex are added to the output vertex list.
2. If both input vertices are inside the window boundary, only the second vertex is
added to the output vertex list.
3. If the first vertex is inside the window boundary and the second vertex is
outside, only the edge intersection with the window boundary is added to the
output vertex list.
4. If both input vertices are outside the window boundary, nothing is added to the
output list.
 These four cases are illustrated in following figure for successive pairs of polygon
vertices.
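 One pass of this per-boundary processing can be sketched in C; here inside() and intersect() stand for the edge-specific containment test and intersection calculation, and are assumptions for illustration:

typedef struct { float x, y; } Point;

int   inside(Point p);                 /* is p on the window side of this edge? */
Point intersect(Point p1, Point p2);   /* intersection of edge p1-p2 with the boundary */

/* clip polygon 'in' (nIn vertices) against one window boundary; returns
   the number of output vertices written to 'out' */
int clipAgainstBoundary(const Point *in, int nIn, Point *out)
{
    int k, nOut = 0;
    for (k = 0; k < nIn; k++) {
        Point v1 = in[k];
        Point v2 = in[(k + 1) % nIn];          /* next vertex, wrapping around */

        if (inside(v2)) {
            if (!inside(v1))                   /* case 1: outside -> inside */
                out[nOut++] = intersect(v1, v2);
            out[nOut++] = v2;                  /* case 2: inside -> inside */
        } else if (inside(v1)) {
            out[nOut++] = intersect(v1, v2);   /* case 3: inside -> outside */
        }
        /* case 4: outside -> outside adds nothing */
    }
    return nOut;
}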


 Once all vertices have been processed for one clip window boundary, the output list of vertices is clipped against the next window boundary.
 We illustrate this method by processing the area in the following figure against the left window boundary.

 Vertices 1 and 2 are found to be on the outside of the boundary.


 Moving along to vertex 3, which is inside, we calculate the intersection and save both the intersection point and vertex 3.
 Vertices 4 and 5 are determined to be inside, and they also are saved.
 Moving along to vertex 6 from 5, we need to find the intersection and it is saved.
 Using the five saved points, we would repeat the process for the next window
boundary.


CURVE CLIPPING
 Areas with curved boundaries can be clipped with methods similar to those
discussed in the line clipping.
 Curve-clipping procedures will involve nonlinear equations, however, and this
requires more processing than for objects with linear boundaries.
 The bounding rectangle for a circle or other curved object can be used first to test
for overlap with a rectangular clip window.
 If the bounding rectangle for the object is completely inside the window, we save
the object.
 If the rectangle is determined to be completely outside the window, we discard the
object.
 In either case, there is no further computation necessary.
 But if the bounding rectangle test fails, we can look for other computation-saving
approaches.
 For a circle, we can use the coordinate extents of individual quadrants and then
octants for preliminary testing before calculating curve-window intersections.
 For an ellipse, we can test the coordinate extents of individual quadrants.
 Following figure illustrates circle clipping against a rectangular window.


 Similar procedures can be applied when clipping a curved object against a general
polygon clip region.
 On the first pass, we can clip the bounding rectangle of the object against the
bounding rectangle of the clip region.
 If the two regions overlap, we will need to solve the simultaneous line-curve
equations to obtain the clipping intersection points.

TEXT CLIPPING
There are several techniques that can be used to provide text clipping in a graphics
package.
The clipping technique used will depend on the methods used to generate characters
and the requirements of a particular application.
The simplest method for processing character strings relative to a window boundary is
to use the all-or-none string-clipping strategy shown in Fig.

 If all of the string is inside a clip window, we keep it.


 Otherwise, the string is discarded.
 This procedure is implemented by considering a bounding rectangle around the
text pattern.


 The boundary positions of the rectangle are then compared to the window
boundaries, and the string is rejected if there is any overlap.
 This method produces the fastest text clipping.
 An alternative to rejecting an entire character string that overlaps a window
boundary is to use the all-or-none character-clipping strategy.
 Here we discard only those characters that are not completely inside the window.

In this case, the boundary limits of individual characters are compared to the window.
 Any character that either overlaps or is outside a window boundary is clipped.
 A final method for handling text clipping is to clip the components of individual
characters.
 We now treat characters in much the same way that we treated lines.
 If an individual character overlaps a clip window boundary, we clip off the parts of
the character that are outside the window.


 Outline character fonts formed with line segments can be processed in this way
using a line clipping algorithm.
 Characters defined with bit maps would be clipped by comparing the relative
position of the individual pixels in the character grid patterns to the clipping
boundaries.
EXTERIOR CLIPPING
 we have considered only procedures for clipping a picture to the interior of a region
by eliminating everything outside the clipping region.
 What is saved by these procedures is inside the region.
 In some cases, we want to do the reverse, that is, we want to clip a picture to the
exterior of a specified region.
 The picture parts to be saved are those that are outside the region.
 This is referred to as exterior clipping.
 A typical example of the application of exterior clipping is in multiple window
systems.
 To correctly display the screen windows, we often need to apply both internal and
external clipping.
 Following figure illustrates a multiple window display.


 Objects within a window are clipped to the interior of that window.


 When other higher-priority windows overlap these objects, the objects are also
clipped to the exterior of the overlapping windows.
 Exterior clipping is used also in other applications that require overlapping
pictures.
 Examples here include the design of page layouts in advertising or publishing
applications or for adding labels or design patterns to a picture.
 The technique can also be used for combining graphs, maps, or schematics.
 For these applications, we can use exterior clipping to provide a space for an insert
into a larger picture.


UNIT II

3D CONCEPTS
 To obtain a display of a three-dimensional scene that has been modeled in world
coordinates, we must first set up a coordinate reference for the "camera".
 This coordinate reference defines the position and orientation for the plane of the
camera film.

This is the plane we want to use to display a view of the objects in the scene.
 Object descriptions are then transferred to the camera reference coordinates and
projected onto the selected display plane.
 We can then display the objects in wireframe (outline) form, as in Fig.


 Or we can apply lighting and surface-rendering techniques to shade the visible
surfaces.

Parallel Projection
 One method for generating a view of a solid object is to project points on the
object surface along parallel lines onto the display plane.
 By selecting different viewing positions, we can project visible points on the
object onto the display plane to obtain different two-dimensional views of the
object, as in Fig.

 In a parallel projection, parallel lines in the world-coordinate scene project into
parallel lines on the two-dimensional display plane.


 This technique is used in engineering and architectural drawings to represent an
object with a set of views that maintain relative proportions of the object.
 The appearance of the solid object can then be reconstructed from the major
views.

Perspective Projection
Perspective : The appearance of things relative to one another as determined by their
distance from the viewer

 Another method for generating a view of a three-dimensional scene is to project points


to the display plane along converging paths.
 This causes objects farther from the viewing position to be displayed smaller than
objects of the same size that are nearer to the viewing position.
 In a perspective projection, parallel lines in a scene that are not parallel to the display
plane are projected into converging lines.
 Scenes displayed using perspective projections appear more realistic, since this is the
way that our eyes and a camera lens form images.
 In the perspective projection view shown in Fig.


 Parallel lines appear to converge to a distant point in the background, and distant objects
appear smaller than objects closer to the viewing position.

Depth Cueing
 Depth information is important so that we can easily identify, for a particular
viewing direction, which is the front and which is the back of displayed objects.
 Following figure illustrates the ambiguity that can result when a wireframe object
is displayed without depth information.

 The wireframe representation of the pyramid in (a) contains no depth information
to indicate whether the viewing direction is (b) downward from a position above
the apex or (c) upward from a position below the base.
 There are several ways in which we can include depth information in the
two-dimensional representation of solid objects.
 A simple method for indicating depth with wireframe displays is to vary the intensity of
objects according to their distance from the viewing position.
 Following figure shows a wireframe object displayed with depth cueing.


 The lines closest to the viewing position are displayed with the highest intensities,
and lines farther away are displayed with decreasing intensities.

PROJECTIONS
 Once world-coordinate descriptions of the objects in a scene are converted to
viewing coordinates, we can project the three-dimensional objects onto the two
dimensional view plane.
 There are two basic projection methods.
Parallel Projection
Perspective Projection

Parallel Projection
 In a parallel projection, coordinate positions are transformed to the view plane
along parallel lines.


 A parallel projection preserves relative proportions of objects.


 This is the method used in drafting to produce scale drawings of three-dimensional
objects.
 Accurate views of the various sides of an object are obtained with a parallel projection.
 But this does not give us a realistic representation of the appearance of a
three-dimensional object.

Orthographic parallel projection.


 We can specify a parallel projection with a projection vector that defines the
direction for the projection lines.
 When the projection is perpendicular to the view plane, we have an orthographic
parallel projection.
 Otherwise, we have an oblique parallel projection.
 Following figure illustrates the two types of parallel projections.


 Some graphics packages, such as GL on Silicon Graphics workstations, do not
provide for oblique projections.
 In this package, for example, a parallel projection is specified by simply giving the
boundary edges of a rectangular parallelepiped.
 Orthographic projections are most often used to produce the front, side, and
top view of an object, as shown in Fig.


 Front, side, and rear orthographic projections of an object are called elevations.
 And a top orthographic projection is called a plan view.
 Engineering and architectural drawings commonly employ these orthographic
projections, because lengths and angles are accurately depicted and can be
measured from the drawings.

Perspective Projection
 For a perspective projection, object positions are transformed to the view plane
along lines that converge to a point called the projection reference point (or center
of projection).

 A perspective projection, on the other hand, produces realistic views but does not
preserve relative proportions.
 Projections of distant objects are smaller than the projections of objects of the
same size that are closer to the projection plane


 The projected view of an object is determined by calculating the intersection of the
projection lines with the view plane.

 To obtain a perspective projection of a three-dimensional object, we transform
points along projection lines that meet at the projection reference point.
 Suppose we set the projection reference point at position zprp along the zv axis, and
we place the view plane at z_vp, as shown in Fig.


 We can write equations describing coordinate positions along this perspective
projection line in parametric form as
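In standard form (the displayed equations are not reproduced in this copy):

$$x' = x - xu, \qquad y' = y - yu, \qquad z' = z - (z - z_{prp})\,u$$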

 Parameter u takes values from 0 to 1.


 Coordinate position (x', y', z') represents any point along the projection line.
 When u = 0, we are at position P = (x , y, z).
 At the other end of the line, u = 1 and we have the projection reference point
coordinates (0, 0, zprp).
 On the view plane, z' = z_vp and we can solve the z' equation for parameter u at this
position along the projection line:
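In standard form:

$$u = \frac{z_{vp} - z}{z_{prp} - z}$$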


3D REPRESENTATION
 Representation schemes for solid objects are often divided into two broad
categories,
1. Boundary representations
2. Space-partitioning representation

Boundary representations
Boundary representations (B-reps) describe a three-dimensional object as a set of
surfaces that separate the object interior from the environment.
Typical examples of boundary representations are polygon facets and spline patches.

Space-partitioning representation
Space-partitioning representations are used to describe interior properties, by
partitioning the spatial region containing an object into a set of small,
nonoverlapping, contiguous solids (usually cubes).
A common space-partitioning description for a three-dimensional object is an octree
representation.

POLYGON SURFACES
 The most commonly used boundary representation for a three-dimensional
graphics object is a set of surface polygons that enclose the object interior.
 Many graphics systems store all object descriptions as sets of surface polygons.
 This simplifies and speeds up the surface rendering and display of objects, since all
surfaces are described with linear equations.
 For this reason, polygon descriptions are often referred to as "standard graphics
objects."


 In some cases, a polygonal representation is the only one available, but many
packages allow objects to be described with other schemes, such as spline surfaces,
that are then converted to polygonal representations for processing.
 A polygon representation for a polyhedron precisely defines the surface features of
the object.
 But for other objects, surfaces are tesselated (or tiled) to produce the polygon-mesh approximation.
 Following figure shows a wireframe representation of a cylinder with back (hidden)
lines removed.

 Such representations are common in design and solid-modeling applications,
since the wireframe outline can be displayed quickly to give a general
indication of the surface structure.
 Realistic renderings are produced by interpolating shading patterns across the
polygon surfaces to eliminate or reduce the presence of polygon edge
boundaries.
 And the polygon-mesh approximation to a curved surface can be improved by
dividing the surface into smaller polygon facets.


Polygon Tables
 We specify a polygon surface with a set of vertex coordinates and associated
attribute parameters.
 The information for each polygon is placed into tables that are used in the
subsequent processing, display, and manipulation of the objects in a scene.
 Polygon data tables can be organized into two groups:
1. geometric tables and
2. attribute tables.

Geometric tables
 It contain vertex coordinates and parameters to identify the spatial orientation of
the polygon surfaces.

Attribute tables
 It includes parameters specifying the degree of transparency of the object and its
surface reflectivity and texture characteristics.
 A convenient organization for storing geometric data is to create three lists:
1. a vertex table,
2. an edge table, and
3. a polygon table.

Vertex table
 Coordinate values for each vertex in the object are stored in the vertex table.


Edge table
 The edge table contains pointers back into the vertex table to identify the vertices
for each polygon edge.

Polygon table
 The polygon table contains pointers back into the edge table to identify the edges
for each polygon.
 This scheme is illustrated in Fig for two adjacent polygons on an object surface.

tesselated : Fit together exactly, of identical shapes


 In addition, individual objects and their component polygon faces can be assigned
object and facet identifiers for easy reference.

Plane Equations
 To produce a display of a three-dimensional object, we must process the input data
representation for the object through several procedures.
 These processing steps include transformation of the modeling and world-coordinate
descriptions to viewing coordinates, then to device coordinates; identification of visible
surfaces; and the application of surface-rendering procedures.
 For some of these processes, we need information about the spatial orientation of
the individual surface components or the object.
 This information is obtained from the vertex coordinate values and the equations
that describe the polygon planes.
 The equation for a plane surface can be expressed in the form

AX + BY + CZ + D = 0

 where (x, y, z) is any point on the plane, and the coefficients A, B, C, and D are
constants describing the spatial properties of the plane.
 We can obtain the values of A, B, C, and D by solving a set of three plane
equations.
 To solve the following set of simultaneous linear plane equations for the ratios
A/D, B/D,and C/D:


 The solution for this set of equations can be obtained in determinant form, using
Cramer's rule, as

 Expanding the determinants, we can write the calculations for the plane
coefficients in the form
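The displayed equations are not reproduced in this copy; following the standard derivation, the system and the expanded Cramer's-rule solution read:

$$\frac{A}{D}x_k + \frac{B}{D}y_k + \frac{C}{D}z_k = -1, \qquad k = 1, 2, 3$$

$$\begin{aligned}
A &= y_1(z_2 - z_3) + y_2(z_3 - z_1) + y_3(z_1 - z_2) \\
B &= z_1(x_2 - x_3) + z_2(x_3 - x_1) + z_3(x_1 - x_2) \\
C &= x_1(y_2 - y_3) + x_2(y_3 - y_1) + x_3(y_1 - y_2) \\
D &= -x_1(y_2 z_3 - y_3 z_2) - x_2(y_3 z_1 - y_1 z_3) - x_3(y_1 z_2 - y_2 z_1)
\end{aligned}$$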

POLYGON MESHES
 Some graphics packages provide several polygon functions for modeling objects.
 A single plane surface can be specified with a function such as fillArea.
 But when object surfaces are to be tiled, it is more convenient to specify the
surface facets with a mesh function.


Triangle strip
 One type of polygon mesh is the triangle strip.

 This function produces n - 2 connected triangles, as shown in the above figure.


Quadrilateral mesh
 Another similar function is the quadrilateral mesh.

 which generates a mesh of (n - 1) by (m - 1) quadrilaterals, given the coordinates
for an n by m array of vertices.
 Above figure shows 20 vertices forming a mesh of 12 quadrilaterals.
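 Using the OPENGL primitives covered in Unit III, a triangle strip can be sketched as follows (the coordinate values are illustrative only):

glBegin (GL_TRIANGLE_STRIP);   /* n vertices produce n - 2 connected triangles */
    glVertex2i (10, 10);
    glVertex2i (10, 40);
    glVertex2i (40, 10);
    glVertex2i (40, 40);       /* four vertices give two triangles */
glEnd ();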
Problem
 When polygons are specified with more than three vertices, it is possible that the
vertices may not all lie in one plane.


 This can be due to numerical errors or errors in selecting coordinate positions for
the vertices.

Solution
 One way to handle this situation is simply to divide the polygons into triangles.
 Another approach that is sometimes taken is to approximate the plane parameters
A, B, and C.
 We can do this with averaging methods or we can project the polygon onto the
coordinate planes.
 Using the projection method, we take
A proportional to the area of the polygon projection on the yz plane,
B proportional to the projection area on the xz plane, and
C proportional to the projection area on the xy plane.

CURVED LINES AND SURFACES


 Displays of three-dimensional curved lines and surfaces can be generated from an
input set of mathematical functions defining the objects or from a set of
user-specified data points.
 When functions are specified, a package can project the defining equations for a
curve to the display plane and plot pixel positions along the path of the projected
function.

 For surfaces, a functional description is often tesselated to produce a polygon-mesh approximation to the surface.


 Usually, this is done with triangular polygon patches to ensure that all vertices of
any polygon are in one plane.
 Polygons specified with four or more vertices may not have all vertices in a single
plane.
 Curve and surface equations can be expressed in either a parametric or a
nonparametric form.

QUADRIC SURFACES
 A frequently used class of objects are the quadric surfaces, which are described
with second-degree equations (quadratics).
 They include
spheres,
ellipsoids,
tori,
paraboloids, and
hyperboloids.
 Quadric surfaces, particularly spheres and ellipsoids, are common elements of
graphics scenes, and they are often available in graphics packages as primitives
from which more complex objects can be constructed.

SPHERE
 In Cartesian coordinates, a spherical surface with radius r centered on the
coordinate origin is defined as the set of points (x, y, z) that satisfy the equation


 We can also describe the spherical surface in parametric form, using latitude and
longitude angles.

 The above figure shows the parametric coordinate position (r, θ, φ) on the surface
of a sphere with radius r.
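In standard form, the Cartesian equation and the parametric equations are:

$$x^2 + y^2 + z^2 = r^2$$

$$x = r\cos\phi\cos\theta, \qquad y = r\cos\phi\sin\theta, \qquad z = r\sin\phi, \qquad -\pi/2 \le \phi \le \pi/2, \;\; -\pi \le \theta \le \pi$$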

ELLIPSOID
 An ellipsoidal surface can be described as an extension of a spherical surface,
where the radii in three mutually perpendicular directions can have different
values.


 The Cartesian representation for points over the surface of an ellipsoid centered on
the origin is

 And a parametric representation for the ellipsoid in terms of the latitude angle φ
and the longitude angle θ is given in Eqn 2.
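In standard form (Eqs. 1 and 2):

$$\left(\frac{x}{r_x}\right)^2 + \left(\frac{y}{r_y}\right)^2 + \left(\frac{z}{r_z}\right)^2 = 1$$

$$x = r_x\cos\phi\cos\theta, \qquad y = r_y\cos\phi\sin\theta, \qquad z = r_z\sin\phi$$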

SPLINE
 A spline is a flexible strip used to produce a smooth curve through a designated set
of points.
 Several small weights are distributed along the length of the strip to hold it in
position on the drafting table as the curve is drawn.
 The term spline curve originally referred to a curve drawn in this manner.
 In computer graphics, the term spline curve refers to any composite curve formed
with polynomial sections satisfying specified continuity conditions at the boundary
of the pieces.


 Splines are used in graphics applications to design curve and surface shapes, to
digitize drawings for computer storage, and to specify the animation paths for the
objects or the camera in a scene.
 Typical CAD applications for splines include the design of automobile bodies,
aircraft and spacecraft surfaces, and ship hulls.

 The above figure shows the set of six control points interpolated with piecewise
continuous polynomial.

Spline Specifications
 There are three equivalent methods for specifying a particular spline
representation:
1. We can state the set of boundary conditions that are imposed on the spline; or
2. We can state the matrix that characterizes the spline; or
3. We can state the set of blending functions (or basis functions) that determine
how specified geometric constraints on the curve are combined to calculate
positions along the curve path.


VISUALIZATION OF DATA SETS


 The use of graphical methods as an aid in scientific and engineering analysis is
commonly referred to as scientific visualization.
 This involves the visualization of data sets and processes that may be difficult or
impossible to analyze without graphical methods.
 For example, visualization techniques are needed to deal with the output of high-volume data sources such as
supercomputers,
satellite, and spacecraft scanners,
radio-astronomy telescopes, and
medical scanners.
 Similar methods employed by commerce, industry, and other nonscientific areas
are sometimes referred to as business visualization.
 Data sets are classified according to their spatial distribution and according to data
type.
 Two-dimensional data sets have values distributed over a surface, and three-dimensional data sets have values distributed over the interior of
a cube,
a sphere, or
some other region of space.
 Data types include
scalars,
vectors,
tensors, and
multivariate data.


Visual Representations for Scalar Fields


 A scalar quantity is one that has a single value.
 Scalar data sets contain values that may be distributed in time, as well as over
spatial positions.
 Also, the data values may be functions of other scalar parameters.
 Some examples of physical scalar quantities are
energy,
density,
mass,
temperature,
pressure,
charge,
resistance,
reflectivity, and
frequency.
 A common method for visualizing a scalar data set is to use graphs or charts that
show the distribution of data values.
 If the data are distributed over a surface, we could plot the data values as vertical
bars rising up from the surface, or we can interpolate the data values to display a
smooth surface.

Pseudo-color methods
 Pseudo-color methods are also used to distinguish different values in a scalar data
set, and color-coding techniques can be combined with graph and chart methods.
 To color code a scalar data set, we choose a range of color and map the range of
data values to the color range.


 For example, blue could be assigned to the lowest scalar value, and red could be
assigned to the highest value.
 Following figure gives an example of a color-coded surface plot.

 Color coding a data set can be tricky, because some color combinations can lead to
misinterpretations of the data.
 Contour plots are used to display isolines (lines of constant scalar value) for a
data set distributed over a surface.
 The isolines are spaced at some convenient interval to show the range and
variation of the data values over the region of space.
 The isolines are usually plotted as straight-line sections across each cell, as
illustrated in Fig.


Visual Representations for Vector Fields


 A vector quantity V in three-dimensional space has three scalar values ( Vx , Vy,
Vz) one for each coordinate direction, and a two-dimensional vector has two
components (Vx, Vy).
 Another way to describe a vector quantity is by giving its magnitude |V| and its
direction as a unit vector u.
 As with scalars, vector quantities may be functions of position, time, and other
parameters.
 Some examples of physical vector quantities are
velocity,
acceleration,
force,
electric fields,
magnetic fields,

gravitational fields, and

electric current.

 One way to visualize a vector field is to plot each data point as a small arrow that
shows the magnitude and direction of the vector.
 This method is most often used with cross-sectional slices, as in Fig.


 Magnitudes for the vector values can be shown by varying the lengths of the
arrows, or we can make all arrows the same size, but make the arrows different
colors according to a selected color coding for the vector magnitudes.
 We can also represent vector values by plotting field lines or streamlines.
 Field lines are commonly used for electric, magnetic, and gravitational fields.
 The magnitude of the vector values is indicated by the spacing between field lines, and
the direction is the tangent to the field, as shown in Fig

 Streamlines can be displayed as wide arrows.


Visual Representations for Tensor Fields


 A tensor quantity in three-dimensional space has nine components and can be
represented with a 3 by 3 matrix.
 Actually, this representation is used for a second-order tensor, and higher-order
tensors do occur in some applications, particularly general relativity.
 Some examples of physical, second-order tensors are stress and strain in a material
subjected to external forces, conductivity (or resistivity) of an electrical conductor,
and the metric tensor, which gives the properties of a particular coordinate space.
 The stress tensor in Cartesian coordinates, for example, can be represented as

 Tensor quantities are frequently encountered in anisotropic materials, which have
different properties in different directions.

Visual Representations for Multivariate Data Fields


 In some applications, at each grid position over some region of space, we may
have multiple data values.
 Which can be a mixture of scalar, vector, and even tensor values.
 As an example, for a fluid-flow problem, we may have fluid velocity, temperature,
and density values at each three-dimensional position.
 Thus, we have five scalar values to display at each position, and the situation is
similar to displaying a tensor field.


 A method for displaying multivariate data fields is to construct graphical objects,
sometimes referred to as glyphs, with multiple parts.
 Each part of a glyph represents a physical quantity.
 The size and color of each part can be used to display information about scalar
magnitudes.
 To give directional information for a vector field, we can use a wedge, a cone, or
some other pointing shape for the glyph part representing the vector.
 An example of the visualization of a multivariate data field using a glyph structure
at selected grid positions is shown in Fig.


3D TRANSFORMATION
 Methods for geometric transformations and object modeling in three dimensions
are extended from two-dimensional methods by including considerations for the z
coordinate.

TRANSLATION
 In a three-dimensional homogeneous coordinate representation, a point is
translated from position P = (x, y, z) to position P' = (x', y', z') with the matrix
operation P' = T · P (Eq. 1).
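The translation matrix T (its display is not reproduced here) has the standard homogeneous form:

$$T = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$$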


 Parameters tx, ty, and tz, specifying translation distances for the coordinate directions x,
y, and z, are assigned any real values.
 The matrix representation in Eq.1 is equivalent to the three equations
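In standard form, these are:

$$x' = x + t_x, \qquad y' = y + t_y, \qquad z' = z + t_z$$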

 An object is translated in three dimensions by transforming each of the defining
points of the object.
 For an object represented as a set of polygon surfaces, we translate each vertex of
each surface and redraw the polygon facets in the new position.
 We obtain the inverse of the translation matrix in Eq.1 by negating the translation
distances tx, ty, and tz.
 This produces a translation in the opposite direction, and the product of a translation
matrix and its inverse produces the identity matrix.

ROTATION
 To generate a rotation transformation for an object, we must designate an axis of
rotation (about which the object is to be rotated) and the amount of angular
rotation.
 Unlike two-dimensional applications, where all transformations are carried out in
the xy plane, a three-dimensional rotation can be specified around any line in
space.
 Following figures illustrate that positive rotation directions about the coordinate
axes are counterclockwise, when looking toward the origin from a positive
coordinate position on each axis.


Coordinate-Axes Rotations
 The two-dimensional z-axis rotation equations are easily extended to three
dimensions:
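In standard form (the display is not reproduced here):

$$x' = x\cos\theta - y\sin\theta, \qquad y' = x\sin\theta + y\cos\theta, \qquad z' = z \qquad (1)$$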

 Parameter θ specifies the rotation angle.


 In homogeneous coordinate form, the three-dimensional z-axis rotation equations are
expressed as

 which we can write more compactly as
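The standard homogeneous and compact forms are:

$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} =
\begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
\qquad\text{or}\qquad P' = R_z(\theta)\cdot P$$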

 Following figure illustrates rotation of an object about the z axis.


 Transformation equations for rotations about the other two coordinate axes can be obtained
with a cyclic permutation of the coordinate parameters x, y, and z in Eqs. 1.
 That is, we use the replacements x → y → z → x (Eq. 3)

 as illustrated in following fig.

 Substituting permutations 3 in Eqs. 1, we get the equations for an x-axis rotation:
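In standard form:

$$y' = y\cos\theta - z\sin\theta, \qquad z' = y\sin\theta + z\cos\theta, \qquad x' = x$$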

 Which can be written in the homogeneous coordinate form
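In standard form, with P' = R_x(θ) · P:

$$R_x(\theta) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$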


 Cyclic permutation of the Cartesian-coordinate axes produces the three sets of
coordinate-axis rotation equations.

SCALING
 The matrix expression for the scaling transformation of a position P = (x, y, z)
relative to the coordinate origin can be written as
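In standard form (Eq. 1):

$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} =
\begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$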

Or, in compact form (Eq. 2), P' = S · P.

 Where scaling parameters sx, sy, and sz are assigned any positive values.
 Explicit expressions for the coordinate transformations for scaling relative to the
origin are
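In standard form:

$$x' = x \cdot s_x, \qquad y' = y \cdot s_y, \qquad z' = z \cdot s_z$$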

 Scaling an object with transformation Eqn1 changes the size of the object and
repositions the object relative to the coordinate origin.
 Also, if the transformation parameters are not all equal, relative dimensions in the
object are changed.


 We preserve the original shape of an object with a uniform scaling (sx =sy = sz).
 The result of scaling an object uniformly with each scaling parameter set to 2 is
shown in Fig.

 Scaling with respect to a selected fixed position (xf, yf, zf) can be represented with
the following transformation sequence:

1. Translate the fixed point to the origin.


2. Scale the object relative to the coordinate origin using Eq1.
3. Translate the fixed point back to its original position.
 This sequence of transformations is demonstrated in following fig.


 The matrix representation for an arbitrary fixed-point scaling can then be
expressed as the concatenation of these translate-scale-translate transformations as
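In standard form (Eq. 3):

$$T(x_f, y_f, z_f)\cdot S(s_x, s_y, s_z)\cdot T(-x_f, -y_f, -z_f) =
\begin{bmatrix} s_x & 0 & 0 & (1 - s_x)x_f \\ 0 & s_y & 0 & (1 - s_y)y_f \\ 0 & 0 & s_z & (1 - s_z)z_f \\ 0 & 0 & 0 & 1 \end{bmatrix}$$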

 We form the inverse scaling matrix for either Eqn 1 or Eqn 3 by replacing the
scaling parameters sx, sy, and sz with their reciprocals.
 The inverse matrix generates an opposite scaling transformation, so the
concatenation of any scaling matrix and its inverse produces the identity matrix.

OTHER TRANSFORMATIONS
 In addition to translation, rotation, and scaling, there are various additional
transformations
that are often useful in three-dimensional graphics applications.
 Two of these are
reflection and
shear.


REFLECTIONS
 A three-dimensional reflection can be performed relative to a selected reflection
axis or with respect to a selected reflection plane.
 In general, three-dimensional reflection matrices are set up similarly to those for
two dimensions.
 Reflections relative to a given axis are equivalent to 180° rotations about that axis.
 Reflections with respect to a plane are equivalent to 180° rotations in four-dimensional space.
 When the reflection plane is a coordinate plane (either xy, xz, or yz), we can think
of the transformation as a conversion between Left-handed and right-handed
systems.
 An example of a reflection that converts coordinate specifications from a right-handed
system to a left-handed system (or vice versa) is shown in Fig.

 This transformation changes the sign of the z coordinates, leaving the x- and
y-coordinate values unchanged.
 The matrix representation for this reflection of points relative to the xy plane is
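In standard form:

$$RF_{xy} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$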


 Transformation matrices for inverting x and y values are defined similarly, as
reflections relative to the yz plane and xz plane, respectively.
 Reflections about other planes can be obtained as a combination of rotations and
coordinate-plane reflections.

SHEARS
 Shearing transformations can he used to modify object shapes.
 They are also useful in three-dimensional viewing for obtaining general projection
transformations.
 In two dimensions, we discussed transformations relative to the x or y axes to
produce distortions in the shapes of objects.
 In three dimensions, we can also generate shears relative to the z axis.
 As an example of three-dimensional shearing. the following transformation
produces a z-axis shear:
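In standard form:

$$SH_z = \begin{bmatrix} 1 & 0 & a & 0 \\ 0 & 1 & b & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

so that x' = x + a·z, y' = y + b·z, and z' = z.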

 Parameters a and b can be assigned any real values.


 The effect of this transformation matrix is to alter x- and y-coordinate values by an
amount that is proportional to the z value, while leaving the z coordinate unchanged.


 Boundaries of planes that are perpendicular to the z axis are thus shifted by an
amount proportional to z.
 An example of the effect of this shearing matrix on a unit cube is shown in Fig, for
shearing values a = b =1.

 Shearing matrices for the x axis and y axis are defined similarly.

VIEWING PIPELINE
 The steps for computer generation of a view of a three-dimensional scene are
somewhat analogous to the processes involved in taking a photograph.
 To take a snapshot, we first need to position the camera at a particular point in
space.
 Then we need to decide on the camera orientation (in Fig).


 Finally, when we snap the shutter, the scene is cropped to the size of the "window"
(aperture) of the camera, and light from the visible surfaces is projected onto the
camera film.
 Following figure shows the general processing steps for modeling and converting a
world-coordinate description of a scene to device coordinates.

 Once the scene has been modeled, world-coordinate positions are converted to
viewing coordinates.
 The viewing-coordinate system is used in graphics packages as a reference for
specifying the observer viewing position and the position of the projection plane,
which we can think of in analogy with the camera film plane.
 Next, projection operations are performed to convert the viewing-coordinate
description of the scene to coordinate positions on the projection plane, which will
then be mapped to the output device.


 Objects outside the specified viewing limits are clipped from further consideration,
and the remaining objects are processed through visible-surface identification and
surface-rendering procedures to produce the display within the device viewport.

VIEWING COORDINATES
 Generating a view of an object in three dimensions is similar to photographing the
object.
 We can walk around and take its picture from any angle, at various distances, and
with varying camera orientations.
 Whatever appears in the viewfinder is projected onto the flat film surface.
 The type and size of the camera lens determines which parts of the scene appear in
the final picture.
 These ideas are incorporated into three dimensional graphics packages so that
views of
a scene can be generated, given the spatial position, orientation, and aperture size
of the "camera".
 To obtain a series of views of a scene, we can keep the view reference point fixed
and change the direction of N, as shown in Fig.


 This corresponds to generating views as we move around the viewing-coordinate
origin.
 In interactive applications, the normal vector N is the viewing parameter that is
most often changed.
 By changing only the direction of N, we can view a scene from any direction
except along the line of V.
 To obtain either of the two possible views along the line of V, we would need to
change the direction of V.
Transformation from World to Viewing Coordinates

1. Translate the view reference point to the origin of the world-coordinate system.


2. Apply rotations to align the xv, yv, and zv axes with the world xw, yw, and zw axes,
respectively.

 If the view reference point is specified at world position (x0, y0, z0), this point is
translated to the world origin with the matrix transformation
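In standard form:

$$T = \begin{bmatrix} 1 & 0 & 0 & -x_0 \\ 0 & 1 & 0 & -y_0 \\ 0 & 0 & 1 & -z_0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$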


VISIBLE SURFACE DETECTION


 A major consideration in the generation of realistic graphics displays is identifying
those parts of a scene that are visible from a chosen viewing position.
 There are many approaches we can take to solve this problem, and numerous
algorithms have used for different types of applications.
 Some methods require more memory, some involve more processing time, and
some apply only to special types of objects.
 The various algorithms are referred to as visible-surface detection methods.
 Sometimes these methods are also referred to as hidden-surface elimination
methods.

1. Back-face detection
2. Depth-buffer method
3. A-buffer method
4. Scan-line method
5. Depth-sorting method
6. BSP-tree method
7. Area-subdivision method
8. Octree methods
9. Ray-casting method
10. Curved surfaces
11. Wireframe methods


BACK-FACE DETECTION
 A fast and simple object-space method for identifying the back faces of a
polyhedron is based on the "inside-outside" tests.
 A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and
D if
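In standard form, the test is:

$$Ax + By + Cz + D < 0$$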

 When an inside point is along the line of sight to the surface, the polygon must be
a back face (we are inside that face and cannot see the front of it from our viewing
position).
 We can simplify this test by considering the normal vector N to a polygon surface,
which has Cartesian components (A, B, C).
 In general, if V is a vector in the viewing direction from the eye (or "camera")
position, as shown in Fig,

 then this polygon is a back face if
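In standard form:

$$V \cdot N > 0$$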


DEPTH-BUFFER METHOD
 A commonly used image-space approach to detecting visible surfaces is the depth-buffer method, which compares surface depths at each pixel position on the projection plane.
 This procedure is also referred to as the z-buffer method, since object depth is
usually measured from the view plane along the z axis of a viewing system.
 Each surface of a scene is processed separately, one point at a time across the
surface.
 The method is usually applied to scenes containing only polygon surfaces, because
depth values can be computed very quickly and the method is easy to implement.
 But the method can be applied to nonplanar surfaces.
 With object descriptions converted to projection coordinates, each (x, y, z )
position on a polygon surface corresponds to the orthographic projection point (x,
y) on the view plane.
 Therefore, for each pixel position (x, y) on the view plane, object depths can be
compared by comparing z values.
 Following figure shows three surfaces at varying distances along the orthographic
projection line from position (x, y) in a view plane taken as the xv, yv plane.
 Surface S1 is closest at this position, so its surface intensity value at (x, y) is saved.
 As implied by the name of this method, two buffer areas are required.
 A depth buffer is used to store depth values for each (x, y) position as surfaces are
processed, and the refresh buffer stores the intensity values for each position.


 Initially, all positions in the depth buffer are set to 0 (minimum depth), and the
refresh buffer is initialized to the background intensity.
 Each surface listed in the polygon tables is then processed, one scan line at a time,
calculating the depth (z value) at each (x, y) pixel position.
 The calculated depth is compared to the value previously stored in the depth buffer
at that position.
 If the calculated depth is greater than the value stored in the depth buffer, the new
depth value is stored, and the surface intensity at that position is determined and
placed in the same xy location in the refresh buffer.
 We summarize the steps of a depth-buffer algorithm as follows:
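 A compact C sketch of this summary (XMAX, YMAX and the helpers surfDepth() and surfIntensity() are assumed names here, not from the source):

#define XMAX 640
#define YMAX 480

float depthBuff[XMAX][YMAX];    /* depth value stored for each pixel     */
int   refreshBuff[XMAX][YMAX];  /* intensity value stored for each pixel */

/* Assumed helpers: depth and intensity of surface s at pixel (x, y);
   surfDepth() returns a value <= 0 when the surface does not cover (x, y). */
extern float surfDepth(int s, int x, int y);
extern int   surfIntensity(int s, int x, int y);

void depthBufferMethod(int numSurfaces, int background)
{
    int x, y, s;
    for (x = 0; x < XMAX; x++)              /* 1. initialize the buffers   */
        for (y = 0; y < YMAX; y++) {
            depthBuff[x][y]   = 0.0f;       /* 0 = minimum depth           */
            refreshBuff[x][y] = background;
        }
    for (s = 0; s < numSurfaces; s++)       /* 2. process each surface     */
        for (x = 0; x < XMAX; x++)
            for (y = 0; y < YMAX; y++) {
                float z = surfDepth(s, x, y);
                if (z > depthBuff[x][y]) {  /* closer than stored depth    */
                    depthBuff[x][y]   = z;
                    refreshBuff[x][y] = surfIntensity(s, x, y);
                }
            }
}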


A-Buffer
 An extension of the ideas in the depth-buffer method is the A-buffer method.
 The A-buffer method represents an antialiased, area-averaged, accumulation-buffer method.
 A drawback of the depth-buffer method is that it can only find one visible surface
at each pixel position.
 In other words, it deals only with opaque surfaces and cannot accumulate intensity
values for more than one surface, as is necessary if transparent surfaces are to be
displayed .
 The A-buffer method expands the depth buffer so that each position in the buffer
can reference a linked list of surfaces.
 Thus, more than one surface intensity can be taken into consideration at each pixel
position, and object edges can be antialiased.


UNIT III

COLOR MODELS
 A color model is a method for explaining the properties or behavior of color within
some particular context.

Chromaticity diagram:
 The chromaticity diagram is a convenient coordinate-space representation of all
the colors and mixtures of colors.

How colors are represented here:


 The various colors are represented along the perimeter of the curve.
 The corner representing the 3 primary colors.

Uses of Chromaticity Diagram


 Comparing color gamuts for different sets of primaries.
 Identifying complementary colors.
 Determining dominant wavelength and purity of a given color.


Hue
 This is the predominant spectral color of the received light.
 The color itself is its hue or tint.
 Green leaves have a green hue, red apple has a red hue.
Saturation:
 This is the spectral purity of the color light.
 Saturated colors are vivid, intense, and deep.
RGB COLOR MODEL
 In this color model, the three primaries Red, Green and Blue are used.
 Here color is expressed as
C = RR + GG + BB
 We can represent this model in unit cube as shown in following figure,


 The origin represents black


 The vertex with coordinates (1, 1, 1) is white.
 The magenta vertex is obtained by adding red and blue.
 The yellow vertex is obtained by adding green and red and so on.
Additive model
 Intensities of the primary colors are added to produce other colors.
 Each color point within the bounds of the cube can be represented as the triple
(R, G, B)
 Where values for R, G, and B are assigned in the range from 0 to 1.
 Shades of gray are represented along the main diagonal of the cube from the
origin (black) to the white vertex.

YIQ COLOR MODEL


 In the YIQ color model, luminance (brightness) information is contained in the Y
parameter, while chromaticity information (hue and purity) is incorporated into the
I and Q parameters.
 A combination of red, green, and blue intensities are chosen for the Y parameter to
yield the standard luminosity curve.
 Since Y contains the luminance information, black-and-white television monitors
use only the Y signal.
 The largest bandwidth in the NTSC video signal (about 4 MHz) is assigned to the
Y information.
 Parameter I contains orange-cyan hue information that provides the flesh-tone
shading, and occupies a bandwidth of approximately 1.5 MHz.


 Parameter Q carries green-magenta hue information in a bandwidth of about
0.6 MHz.
 An RGB signal can be converted to a television signal using an NTSC encoder,
which converts RGB values to YIQ values.
 Then modulates and superimposes the I and Q information on the Y signal.
 The conversion from RGB values to YIQ values is accomplished with the
transformation.

RGB into YIQ
 An RGB signal can be converted to a television signal using an NTSC encoder
which converts RGB values to YIQ values.
 This conversion from RGB values to YIQ values is accomplished with the
transformation.
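Using the standard NTSC matrix values (the displayed matrix is not reproduced in this copy):

$$\begin{bmatrix} Y \\ I \\ Q \end{bmatrix} =
\begin{bmatrix} 0.299 & 0.587 & 0.114 \\ 0.596 & -0.275 & -0.321 \\ 0.212 & -0.528 & 0.311 \end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix}$$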

YIQ into RGB


 An NTSC video signal can be converted to an RGB signal using an NTSC
decoder.


 Which separates the video signal into the YIQ components, then converts to
RGB values.
 We can convert from YIQ space to RGB space with the inverse matrix
transformation.
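Using the standard inverse values:

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} =
\begin{bmatrix} 1.000 & 0.956 & 0.620 \\ 1.000 & -0.272 & -0.647 \\ 1.000 & -1.108 & 1.705 \end{bmatrix}
\begin{bmatrix} Y \\ I \\ Q \end{bmatrix}$$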

CMY COLOR MODEL


 In this model cyan, magenta, and yellow (CMY) are used as the primary colors.
 This model is useful for describing color output to hard-copy devices.
Video Monitors Vs Printers, Plotters:
 Video monitors produce a color pattern by combining light from the screen
phosphors.
 Whereas, hard-copy devices such as plotters produce a color picture by coating a
paper with color pigments.

Subtractive process
 It is a subtractive process.
 As we have noted, cyan can be formed by adding green and blue light.
 Therefore, when white light is reflected from cyan-colored ink, the reflected light
must have no red component.
 That is, red light is absorbed, or subtracted, by the ink.
 Similarly, magenta ink subtracts the green component from incident light, and
yellow subtracts the blue component.


 A unit cube representation for the CMY model is illustrated in Fig.

 In the CMY model, point (1, 1, 1) represents black, because all components of the
incident light are subtracted.
 The origin represents white light.
 Equal amounts of each of the primary colors produce grays, along the main
diagonal of the cube.
 A combination of cyan and magenta ink produces blue light, because the red and
green components of the incident light are absorbed.
 Other color combinations are obtained by a similar subtractive process.
Printing Process
 The printing process often used with the CMY model generates a color point with
a collection of four ink dots, (like RGB monitor uses a collection of three phosphor
dots).
Three dots are used for the primary colors, one each for cyan, magenta, and
yellow, and one dot is black.


 A black dot is included because the combination of cyan, magenta, and yellow
inks typically produce dark gray instead of black.

Conversion of RGB into CMY


 We can express the conversion from an RGB representation to a CMY
representation
with the matrix transformation
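In standard form, each CMY component is the complement of the corresponding RGB component:

$$\begin{bmatrix} C \\ M \\ Y \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} - \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$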

 Where the white is represented in the RGB system as the unit column vector.
Conversion of CMY into RGB
 Similarly, we convert from a CMY color representation to an RGB representation
with the matrix transformation
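In standard form:

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} - \begin{bmatrix} C \\ M \\ Y \end{bmatrix}$$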

 Where black is represented In the CMY system as the unit column vector.


HSV COLOR MODEL


 Instead of a set of color primaries, the HSV model uses color descriptions that
have a more intuitive appeal to a user.
 To give a color specification, a user selects a spectral color and the amounts of
white and black that are to be added to obtain different shades, tints, and tones.
 Color parameters in this model are hue (H), saturation (S), and value (V).
 The three-dimensional representation of the HSV model is derived from the RGB
cube.
 If we imagine viewing the cube along the diagonal from the white vertex to the
origin (black), we see an outline of the cube that has the hexagon shape shown in Fig.

 The boundary of the hexagon represents the various hues, and it is used as the top
of the HSV hexcone.


 In the hexcone, saturation is measured along a horizontal axis.


 Value is along a vertical axis through the center of the hexcone.
 Hue is represented as an angle about the vertical axis, ranging from 0° at red
through 360°.
 Vertices of the hexcone are separated by 60° intervals.
 Yellow is at 60°, green at 120°, and cyan opposite red at H = 180°.
 Complementary colors are 180° apart.


ANIMATION
 Computer animation generally refers to any time sequence of visual changes in a
scene.
 In addition to changing object position with translations or rotations, a
computer-generated animation could display time variations in object size, color,
transparency, or surface texture.
 Computer animations can also be generated by changing camera parameters, such
as position, orientation, and focal length.
 And we can produce computer animations by changing lighting effects or other
parameters and procedures associated with illumination and rendering.

DESIGN OF ANIMATION SEQUENCES


 In general, an animation sequence is designed with the following steps:
1. Storyboard layout
2. Object definitions
3. Key-frame specifications
4. Generation of in-between frames

Storyboard Layout
 The storyboard is an outline of the action.
 It defines the motion sequence as a set of basic events that are to take place.
 Depending on the type of animation to be produced, the storyboard could consist
of a set of rough sketches or it could be a list of the basic ideas for the motion.


Object Definition
 An object definition is given for each participant in the action.
 Objects can be defined in terms of basic shapes, such as polygons or splines.
 In addition, the associated movements for each object are specified along with the
shape.

Keyframe
 A keyframe is a detailed drawing of the scene at a certain time in the animation
sequence.
 Within each key frame, each object is positioned according to the time for that
frame.
 Some key frames are chosen at extreme positions in the action.
 Others are spaced so that the time interval between key frames is not too great.
 More key frames are specified for intricate motions than for simple, slowly
varying motions.

Generation of in-between frames


 In-betweens are the intermediate frames between the key frames.
 The number of in-betweens needed is determined by the media to be used to
display the animation.
 Film requires 24 frames per second, and graphics terminals are refreshed at the rate
of 30 to 60 frames per second.
 Typically, time intervals for the motion are set up so that there are from three to
five in-betweens for each pair of key frames.


RASTER ANIMATIONS
 On raster systems, we can generate real-time animation in limited applications
using raster operations.
 Two-dimensional rotations in multiples of 90° are also simple to perform,
although we can rotate rectangular blocks of pixels through arbitrary angles using
antialiasing procedures.
 To rotate a block of pixels, we need to determine the percent of area coverage for
those pixels that overlap the rotated block.
 Sequences of raster operations can be executed to produce real-time animation of
either two-dimensional or three-dimensional objects, as long as we restrict the
animation to motions in the projection plane.
 Then no viewing or visible surface algorithms need be invoked.
 We can also animate objects along two-dimensional motion paths using
color-table transformations.
 Here we predefine the object at successive positions along the motion path, and set
the successive blocks of pixel values to color-table entries.
 We set the pixels at the first position of the object to "on" values, and we set the
pixels at the other object positions to the background color.
 The animation is then accomplished by changing the color-table values so that the
object is "on" at successive positions along the animation path as the preceding
position is set to the background intensity (Fig).
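 A minimal C sketch of this color-table trick (NPOS and setColorTableEntry() are assumed names here, not from the source):

#define NPOS 8   /* number of predefined object positions along the path */

/* Assumed lookup-table routine: sets the RGB value of one table entry.  */
extern void setColorTableEntry(int index, float r, float g, float b);

/* One animation step: the entry for the current position takes the
   object color and the previous entry is reset to the background color. */
void animateStep(int frame, float bgR, float bgG, float bgB,
                 float objR, float objG, float objB)
{
    int current  = (frame % NPOS) + 1;
    int previous = ((frame + NPOS - 1) % NPOS) + 1;
    setColorTableEntry(previous, bgR, bgG, bgB);
    setColorTableEntry(current,  objR, objG, objB);
}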


KEY-FRAME SYSTEMS
 We generate each set of in-betweens from the specification of two (or more) key
frames.
 Motion paths can be given with a kinematic description as a set of spline curves,
or the motions can be physically based by specifying the forces acting on the
objects to be animated.
 For complex scenes, we can separate the frames into individual components or
objects called cels (celluloid transparencies), an acronym from cartoon animation.
 Given the animation paths, we can interpolate the positions of individual objects
between any two times.
 With complex object transformations, the shapes of objects may change over time.
 Examples are clothes, facial features, magnified detail, evolving shapes, exploding
or disintegrating objects, and transforming one object into another object.
 If all surfaces are described with polygon meshes, then the number of edges per
polygon can change from one frame to the next.
 Thus, the total number of line segments can be different in different frames.

MORPHING
 Transformation of object shapes from one form to another is called morphing,
which is a shortened form of metamorphosis.
 Morphing methods can he applied to any motion or transition involving a change
in shape.
 Given two key frames for an object transformation, we first adjust the object
specification in one of the frames so that the number of polygon edges (or the
number of vertices) is the same for the two frames.


 This preprocessing step is illustrated in Fig.

 A straight-line segment in key frame k is transformed into two line segments in
key frame k + 1.
 Since key frame k + 1 has an extra vertex, we add a vertex between vertices 1 and 2
in key frame k to balance the number of vertices (and edges) in the two key frames.
 Using linear interpolation to generate the in-betweens, we transition the added
vertex in key frame k into vertex 3' along the straight-line path shown in Fig. 16-7.
 An example of a triangle linearly expanding into a quadrilateral is given in Fig. 16-8.
 Figures 16-9 and 16-10 show examples of morphing in television advertising.


OPENGL
(Open Graphics Library)

OpenGL is the premier environment for developing portable, interactive 2D
and 3D graphics applications.

Advantages:
 OPENGL is a truly open, vendor-neutral, multiplatform graphics standard.
 Stable.
 Reliable and portable
 Scalable
 Easy to use.
 Well documented.


Features:
 It supports 3D transformation.
 It supports different color model
 It supports lighting (flat shading, Gouraud shading, Pong shading)
 It supports rendering.
 It supports different modeling
 It supports other special effects (atmospheric fog, alpha-blending, motion blur)

OPENGL OPERATION:

GLUT (OpenGL Utility Toolkit):


 It is a window-system-independent toolkit for writing OPENGL programs.
 It implements a simple windowing application programming interface. [API]

for OPENGL.
 GLUT provides a portable API as one can write a single OPENGL program that

works across all PC and workstation OS platforms.


Sample Program:
void main (int argc, char **argv)
{
    glutInit (&argc, argv);
    glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize (640, 480);
    glutInitWindowPosition (100, 150);
    glutCreateWindow ("my first attempt");

    glutDisplayFunc (myDisplay);
    glutReshapeFunc (myReshape);
    glutMouseFunc (myMouse);
    glutKeyboardFunc (myKeyboard);

    myInit ();
    glutMainLoop ();
}

 glutInit (&argc, argv):
	 It initializes the OpenGL Utility Toolkit. Its arguments are the standard ones for parsing information from the command line.

 glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB):
	 This function specifies how the display should be initialized.
	 The argument indicates a single display buffer with the RGB color model.


 glutInitWindowSize (640, 480):
	 This function specifies that the screen window should initially be 640 pixels wide by 480 pixels high.

 glutInitWindowPosition (100, 150):
	 This function specifies that the window's upper-left corner should be positioned on the screen 100 pixels over from the left edge and 150 pixels down from the top.

 glutCreateWindow ("my first attempt"):
	 This function actually opens and displays the screen window, putting the title "my first attempt" in the title bar.

 glutDisplayFunc (myDisplay):
	 Registers myDisplay() to be called whenever the system determines that a window should be redrawn on the screen.

 glutReshapeFunc (myReshape):
	 Screen windows can be reshaped by the user, usually by dragging a corner of the window to a new position with the mouse; myReshape() is called when this happens.

 glutMouseFunc (myMouse):
	 When one of the mouse buttons is pressed or released, a mouse event is issued.
	 myMouse() is registered as the function to be called when a mouse event occurs.


 glutKeyboardFunc (myKeyboard):
	 This command registers the function myKeyboard() with the event of pressing or releasing a key on the keyboard.

BASIC GRAPHICS PRIMITIVES


 OPENGL provides tools for drawing all of the output primitives such as points, lines, polylines, and polygons.
 They are defined by one or more vertices.
 To draw objects in OPENGL, you pass it a list of vertices.
 The list occurs between the two OPENGL function calls glBegin() and glEnd().

glBegin (GL_POINTS);
	glVertex2i (100, 50);
	glVertex2i (100, 130);
	glVertex2i (150, 130);
glEnd ();

Format of OPENGL commands:


OPENGL Data Types:

Suffix  Data type                Typical C/C++ type              OpenGL type name
b       8-bit integer            signed char                     GLbyte
s       16-bit integer           short                           GLshort
i       32-bit integer           int or long                     GLint, GLsizei
f       32-bit floating point    float                           GLfloat, GLclampf
d       64-bit floating point    double                          GLdouble, GLclampd
ub      8-bit unsigned number    unsigned char                   GLubyte, GLboolean
us      16-bit unsigned number   unsigned short                  GLushort
ui      32-bit unsigned number   unsigned int or unsigned long   GLuint, GLenum, GLbitfield

 The size of a point can be set with glPointSize(), which takes one floating-point argument.
 The color of a drawing can be specified using
	glColor3f (red, green, blue);
 where the values of red, green and blue vary between 0.0 and 1.0.
 To draw a line between (40, 100) and (202, 96) we use

glBegin (GL_LINES);
	glVertex2i (40, 100);
	glVertex2i (202, 96);
glEnd ();

 A polyline is a collection of line segments joined end to end.
 In OPENGL, a polyline is called a line strip and is drawn by specifying the vertices in turn between glBegin (GL_LINE_STRIP) and glEnd ().


glBegin (GL_LINE_STRIP);
	glVertex2i (20, 10);
	glVertex2i (50, 10);
	glVertex2i (20, 80);
	glVertex2i (50, 80);
glEnd ();
glFlush ();

 Lines can also be drawn using the moveTo() and lineTo() style of a canvas class.
 To draw an aligned rectangle:
	glRecti (GLint x1, GLint y1, GLint x2, GLint y2);

Other Graphics Primitives in OPENGL:


GL_TRIANGLES: Takes the listed vertices three at a time and draws a separate triangle for each.

GL_QUADS: Takes the vertices four at a time and draws a separate quadrilateral for each.

GL_TRIANGLE_STRIP: Draws a series of triangles based on triples of vertices: V0, V1, V2, then V2, V1, V3, then V2, V3, V4, etc.

GL_TRIANGLE_FAN: Draws a series of connected triangles based on triples of vertices: V0, V1, V2, then V0, V2, V3, then V0, V3, V4, etc.

GL_QUAD_STRIP: Draws a series of quadrilaterals based on foursomes of vertices: first V0, V1, V3, V2, then V2, V3, V5, V4, then V4, V5, V7, V6, etc.

Example:
 The following code fragment specifies a 3D polygon to be drawn, in this case a simple square.
 Note that the same square could have been drawn using the GL_QUADS and GL_QUAD_STRIP primitives.

GLfloat p1[3] = {0, 0, 1};
GLfloat p2[3] = {1, 0, 1};
GLfloat p3[3] = {1, 1, 1};
GLfloat p4[3] = {0, 1, 1};


glBegin (GL_POLYGON);
	glVertex3fv (p1);
	glVertex3fv (p2);
	glVertex3fv (p3);
	glVertex3fv (p4);
glEnd ();

DRAWING 3D SCENES WITH OPENGL

Viewing process & Graphics Pipeline


 All of our 2D drawing so far has actually used a special case of 3D viewing, based on a simple parallel projection.
 We have been using a camera as shown below:
 The eye that is viewing the scene looks along the z-axis at the window.


 The view volume of the camera is a rectangular parallelepiped.
 Its four side walls are determined by the border of the window.
 The other two walls are determined by a near plane and a far plane.
 Points lying inside the view volume are projected onto the window along lines parallel to the z-axis.
 This amounts to ignoring the z-component of each point, so that the 3D point (x1, y1, z1) projects to (x1, y1, 0).
 Points lying outside the view volume are clipped off.
 A separate viewport transformation maps the projected points from the window to the viewport on the display device.
 The following figure shows a camera immersed in a scene.
 The scene consists of a block.
 The image produced by the camera is also shown.
 The graphics pipeline implemented by OpenGL does its major work through matrix transformations.
 The three important matrices are
	i. Modelview matrix
	ii. Projection matrix
	iii. Viewport matrix


Modelview matrix:
 It basically provides what we have been calling the CT (current transformation).
 It combines two effects:
	the sequence of modelling transformations applied to objects;
	the transformation that orients and positions the camera in space.
 The modelview matrix is a single matrix in the actual pipeline.
 The modelling matrix is applied first and then the viewing matrix, so the modelview matrix is in fact the product VM, where
	V = viewing matrix,
	M = modelling matrix.

Projection matrix:
 It scales and shifts each vertex in a particular way, so that all vertices that lie inside the view volume will lie inside a standard cube.
 The projection matrix effectively squashes the view volume into the cube centred at the origin.
 The projection matrix also reverses the sense of the z-axis, so that increasing values of z correspond to increasing depth of a point from the eye.
 The following figure shows how the block is transformed into a different block.


 Clipping is now performed, which eliminates the portion of the block that lies outside the standard cube.

Viewport matrix:
 Finally, the viewport matrix maps the surviving portion of the block into a 3D viewport.
 This matrix maps the standard cube into a block shape
	whose x and y values extend across the viewport, and
	whose z-component extends from 0 to 1,
 as described in the following figure.


DRAWING THREE DIMENSIONAL OBJECTS


1. 3D Viewing Pipeline

 The area of the world coordinate scene selected for display is called a window.
 An area on a display device to which a window is mapped is called a viewport.
 The window defines what is to be viewed; the viewport defines where it is to be displayed.
 Viewports are typically defined within the unit square.
 This provides a means for separating the viewing and other transformations from specific output-device requirements, so that the graphics package is largely device-independent.
OPENGL functions for setting up transformations

Modeling Transformation (Modelview Matrix)
	glTranslatef ()
	glRotatef ()
	glScalef ()

Viewing Transformation (Modelview Matrix)
	gluLookAt ()

Projection Transformation (Projection Matrix)
	glFrustum ()
	gluPerspective ()
	glOrtho ()
	gluOrtho2D ()

Viewport Transformation
	glViewport ()
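For example, a typical reshape handler might combine the projection and viewport calls as follows (a minimal sketch; the 60-degree view angle and the near/far distances are illustrative values, not fixed by OpenGL):

void myReshape (int width, int height)
{
	glViewport (0, 0, width, height);	// use the whole screen window
	glMatrixMode (GL_PROJECTION);
	glLoadIdentity ();
	gluPerspective (60.0, (double)width / height, 0.1, 100.0);	// view angle, aspect, near, far
	glMatrixMode (GL_MODELVIEW);
}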

To apply transformations in the 2D case, OPENGL uses:

 glScaled (sx, sy, 1.0):
	Postmultiply CT by a matrix that performs a scaling by sx in x and by sy in y;
	put the result back into CT (the current transformation). No scaling in z is done.

 glTranslated (dx, dy, 0):
	Postmultiply CT by a matrix that performs a translation by dx in x and by dy in y;
	put the result back into CT. No translation in z is done.

 glRotated (angle, 0, 0, 1):
	Postmultiply CT by a matrix that performs a rotation through angle degrees about the z-axis;
	put the result back into CT.


To initialize the CT to the identity transformation, OPENGL provides glLoadIdentity().

3D Viewing Model View Matrix

Code:
glMatrixMode (GL_MODELVIEW);
glLoadIdentity ();
// Viewing transform
gluLookAt (eye_x, eye_y, eye_z, look_x, look_y, look_z, up_x, up_y, up_z);
// Model transform
glTranslatef (del_x, del_y, del_z);
glRotatef (angle, i, j, k);
glScalef (mult_x, mult_y, mult_z);


UNIT IV

INTRODUCTION TO SHADING MODEL


 A shading model dictates how light is scattered or reflected from a surface.
 A shading model frequently used in graphics has two types of light sources:
	point light sources
	ambient light
 These light sources shine on the various surfaces of the objects.
 The incident light interacts with the surface in three different ways:
	Some is absorbed by the surface and converted into heat.
	Some is reflected from the surface.
	Some is transmitted into the interior of the object, as in the case of a piece of glass.

Black body
 If all the incident light is absorbed, the object appears black and is known as black
body.
 We focus on the part of the light that is reflected or scattered form the surface.
 Some amount of these reflected light travels and reach the eyes, causing the object
to be seen.
 There are two types of reflection of incident light.
Diffuse scattering
Specular reflection


DIFFUSE SCATTERING
 It occurs when some of the incident light penetrates the surface slightly and is reradiated uniformly in all directions.
 Scattered light interacts strongly with the surface, so its color is usually affected by
the nature of the material out of which the surface is made.

Computing the diffuse component


 Suppose that light falls from a point source onto one side of the facet.
 A fraction of light is reradiated diffusely in all directions from that side.
 Some fraction of the reradiated part reaches the eye, with the intensity denoted by Id.
 The following figure (a) shows the cross section: a point source illuminating a facet S.
 In figure (b), the facet is turned partially away from the light source through an angle θ.

Lambert's Law
 The area subtended is now only the fraction cos(θ), so the brightness of S is reduced by that same fraction.


 This relationship between brightness and surface orientation is often called Lambert's law.
 For the intensity of the diffuse component, we can adopt the expression
	Id = Is pd (s·m) / (|s||m|)
 where Is is the intensity of the light source, pd is the diffuse reflection coefficient, s is the vector from the surface to the light source, and m is the normal to the surface.
 The following figure shows how a sphere appears when it reflects diffuse light, for six different reflection coefficients.
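 A direct translation of this expression into code might look like the following sketch (Vector3, dot() and length() are assumed helper types and functions, not part of the text above; clamping at 0 keeps facets that face away from the source black):

// Diffuse (Lambertian) component: Id = Is * pd * (s.m) / (|s||m|)
float diffuse (Vector3 s, Vector3 m, float Is, float pd)
{
	float lambert = dot (s, m) / (length (s) * length (m));
	if (lambert < 0) lambert = 0;	// facet faces away from the light
	return Is * pd * lambert;
}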

SPECULAR REFLECTION
 Real objects do not scatter light uniformly in all directions, so a specular component is added to the shading model.
 Specular reflection causes highlights, which can add significantly to the realism of a picture when objects are shiny.
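 In the standard Phong model (the usual choice, though not spelled out here), the specular component is computed as
	Isp = Is ps (r·v / (|r||v|))^f
 where r is the direction of mirror reflection of the light about the surface normal, v points toward the eye, ps is the specular reflection coefficient, and the exponent f controls the size of the highlight: the larger f is, the smaller and shinier the highlight.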


FLAT SHADING
 When a face is flat (like the roof of a barn) and the light sources are quite distant, the diffuse light component varies little over different points on the roof.
 In such cases it is reasonable to use the same color for every pixel covered by the face.
 Flat shading is established in OpenGL by using the command
	glShadeModel (GL_FLAT);
 The following figure shows a buckyball and sphere rendered by means of flat
shading.

 The individual faces are clearly visible on both objects.


 Edges between faces actually appear more pronounced than they would be on an actual physical object, due to the phenomenon in the eye known as lateral inhibition.
 Specular highlights are rendered poorly with flat shading.


 Because an entire face is filled with a color that was computed at only one vertex.

SMOOTH SHADING
 Smooth shading attempts to de-emphasize edges between faces by computing
colors at more points on each face.
 The two principle types of smooth shadings are
1. Gouraud Shading
2. Phong Shading
 OpenGL does only Gouraud shading.
GOURAUD SHADING
 Computationally speaking, Gouraud shading is modestly more expensive than flat
shading.
 Gouraud shading is established in OpenGL with the use of the function
	glShadeModel (GL_SMOOTH);
 The following figure shows a buckyball and a sphere rendered by means of
Gouraud shading.

 The buckyball looks the same as when it was rendered with flat shading.


 Because the same color is associated with each vertex of a face, interpolation changes nothing.

 But the sphere looks much smoother.


 The edges of the faces are replaced by a smoothly varying color across the object.
 The following figure suggests how Gouraud Shading reveals the underlying
surface approximated by the mesh.

 The polygonal surface is shown in cross section, with vertices V1, V2, etc.
 The imaginary smooth surface is suggested as well.
 Properly computed vertex normals m1, m2, etc., are perpendicular to this imaginary surface, so that the normals for correct shading will be used.

 The color is then made to vary smoothly between vertices.


 Gouraud shading does not picture highlights well.
 Highlights are better reproduced by using Phong shading.


PHONG SHADING

 Greater realism can be achieved, particularly with regard to highlights on shiny objects.
 This is done by a better approximation of the normal vector to the face at each pixel.
 This type of shading is called Phong shading.
 When computing Phong shading, we find the normal vector at each point on the face of the object, and we apply the shading model there to find the color.


 We compute the normal vector at each pixel by interpolating the normal vectors at
the vertices of the polygon.

 Following figure shows a projected face, with normal vectors m1, m2, m3 and m4
indicated at the four vertices.

 For the scan line at ys, the vectors m_left and m_right are found by linear interpolation.
 For instance, m_left is formed as a weighted average of the normals at the two vertices that bracket the left edge at the scan line, weighted by the fraction of the distance of ys between their y-coordinates.


 This interpolated vector must be normalized to unit length before it is used in the
shading formula
 Once mleft and mright are known, they are interpolated to form a normal vector at
each x along the scan line.
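 A sketch of the per-pixel step (assuming mLeft and mRight have already been found for scan line y, and that Vector3, normalize(), shade() and setPixel() are helper names introduced here for illustration):

// Interpolate the normal across the scan line and shade each pixel.
for (int x = xLeft; x <= xRight; x++)
{
	float f = (float)(x - xLeft) / (xRight - xLeft);	// fraction across the span
	Vector3 m;	// interpolated normal at this pixel
	m.x = (1 - f) * mLeft.x + f * mRight.x;
	m.y = (1 - f) * mLeft.y + f * mRight.y;
	m.z = (1 - f) * mLeft.z + f * mRight.z;
	normalize (m);	// unit length before use in the shading formula
	setPixel (x, y, shade (m));	// apply the full shading model here
}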
 The following figure shows an object rendered by using Gouraud shading and the same object rendered by using Phong shading.

 In Phong shading, the direction of the normal vector varies smoothly from point to point and more closely approximates that of an underlying smooth surface.
 The production of specular highlights is much more faithful than with Gouraud shading.

 It produces more realistic rendering.


Drawback:
 Phong shading is relatively slow.
 More computation is required per pixel.
 Phong shading can take six to eight times longer than Gouraud shading.


Why is OpenGL not set up to do Phong shading?
 Because OpenGL applies the shading model once per vertex, right after the modelview transformation.
 Normal vector information is not passed to the rendering stage following the perspective transformation and division.

ADDING TEXTURE TO FACES:
 The realism of an image is greatly enhanced by adding surface texture to the various faces of a mesh object.
 The basic function is
	texture (s, t)
 This function produces a color or intensity value for each value of s and t between 0 and 1.


TYPES:
 There are numerous sources of textures.
 The most common textures are
Bitmap textures
Procedural texture

BITMAP TEXTURES:
 Textures are often formed from bitmap representations of images, such as a digitized photo, clip art, or an image computed previously in some program.

TEXELS:
 A texture formed from a bitmap consists of an array, say txtr[c][r], of color values, often called texels.
 If the array has C columns and R rows, the indices c and r vary from 0 to C-1 and 0 to R-1, respectively.

PROCEDURAL TEXTURE:
 Alternatively, we can define a texture by a mathematical function or procedure; for example, the following spherical shape can be generated by the function below.

float fakeSphere (float s, float t)
{
	float r = sqrt ((s - 0.5) * (s - 0.5) + (t - 0.5) * (t - 0.5));
	if (r < 0.3)
		return 1 - r / 0.3;	// sphere intensity
	else
		return 0.2;		// dark background
}

PASTING THE TEXTURE ONTO A FLAT SURFACE:
 Since texture space itself is flat, it is simplest to paste texture onto a flat surface.
 Example: to define a quadrilateral face and to position a texture on it, four texture coordinates and four 3D points are passed to OpenGL:

glBegin (GL_QUADS);
	glTexCoord2f (0.0, 0.0); glVertex3f (1.0, 2.5, 1.5);
	glTexCoord2f (0.0, 0.6); glVertex3f (1.0, 3.7, 1.5);
	glTexCoord2f (0.8, 0.6); glVertex3f (2.0, 3.7, 1.5);
	glTexCoord2f (0.8, 0.0); glVertex3f (2.0, 2.5, 1.5);
glEnd ();


MAPPING A SQUARE TO A RECTANGLE:
 The figure above shows the common case in which the four corners of the texture square are associated with the four corners of a rectangle.

Producing a repeated texture


 The above figure shows the use of texture co-ordinates that tile the texture,

making it repeat.

ADDING SHADOWS OF OBJECTS:
 Shadows make an image much more realistic; they show how the objects are positioned with respect to each other.
 The following figure shows a cube and sphere with and without shadows.


 Shadows are absent in figure A, so it is impossible to see how far above the plane the cube and the sphere are floating.
 The shadows in figure B give useful hints as to the positions of the objects.
 Generally, a shadow conveys a lot of information.


 Computing the shape of the shadow has a cost.
 In the figure above, the shape of the shadow is determined by the projections of each of the faces of the box onto the plane of the floor.
 This provides the key for drawing the shadows.

SHADOW BUFFER:
 A different method for drawing shadows uses a variant of the depth buffer that performs the removal of hidden surfaces.
 In this method, an auxiliary second depth buffer, called a shadow buffer, is employed for each light source. This requires a lot of memory.
 The rendering of shadows is done in two stages (see the sketch after this list):
	i. Loading the shadow buffer.
	ii. Rendering the scene.
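 In outline, the test made for each hit point P during the second stage might read as follows (a pseudocode-style sketch; toLightCoords(), shadowBuf and the shading helpers are assumed names introduced here, not a fixed API):

// Stage 2: while rendering from the eye, test each hit point P against
// the shadow buffer that was built from the light's viewpoint in stage 1.
LightPoint q = toLightCoords (P);	// P as seen from the light source
if (q.depth > shadowBuf[q.x][q.y] + eps)	// something nearer blocks the light
	color = ambientOnly (P);	// P is in shadow
else
	color = fullShade (P);	// P is lit: add diffuse and specular terms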

BUILDING A CAMERA IN A PROGRAM

 In order to have fine control over camera movements, we create and manipulate our own camera in a program.
 We create a camera class that knows how to do all the things a camera does.
 Doing this is very simple and the payoff is high.
 In a program, we create a camera object called, say, cam, and adjust it with functions such as the following:

cam.set (eye, look, up);	// initialize the camera
cam.slide (-1, 0, -2);		// slide the camera forward and to the left
cam.roll (30);			// roll it through 30 degrees
cam.yaw (20);			// yaw it through 20 degrees
...and so on.



 The following program shows the basic definition of the camera class.

class camera
{
  private:
	point3 eye;
	vector3 u, v, n;
	double viewAngle, aspect, nearDist, farDist;
	void setModelViewMatrix ();
  public:
	camera ();
	void set (point3 eye, point3 look, vector3 up);
	void roll (float angle);
	void pitch (float angle);
	void yaw (float angle);
	void slide (float delU, float delV, float delN);
	void setShape (float vAng, float asp, float nearD, float farD);
};

 Here point3 and vector3 are the basic data types.
 The utility routine setModelViewMatrix() communicates the modelview matrix to OpenGL.


 It is used only by member functions of the class and needs to be called after each change is made to the camera's position.
 The following program shows a possible implementation of this routine.

void camera :: setModelViewMatrix (void)
{
	// load the modelview matrix with the camera values
	float m[16];
	vector3 eVec (eye.x, eye.y, eye.z);	// a vector version of eye
	m[0] = u.x;  m[4] = u.y;  m[8]  = u.z;  m[12] = -eVec.dot (u);
	m[1] = v.x;  m[5] = v.y;  m[9]  = v.z;  m[13] = -eVec.dot (v);
	m[2] = n.x;  m[6] = n.y;  m[10] = n.z;  m[14] = -eVec.dot (n);
	m[3] = 0;    m[7] = 0;    m[11] = 0;    m[15] = 1.0;
	glMatrixMode (GL_MODELVIEW);
	glLoadMatrixf (m);	// load the matrix into OpenGL
}

 A camera can be moved in two ways:
	i. It can slide in three dimensions.
	ii. It can be rotated about any of its three coordinate axes.

Sliding the Camera:
 Sliding a camera means to move it along one of its own axes, that is, in the u, v or n direction, without rotating it:
	movement along n : forward or backward
	movement along u : left or right
	movement along v : up or down
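 One possible implementation (a sketch consistent with the class definition above): the eye is moved along the camera's own axes, and the modelview matrix is then rebuilt.

void camera :: slide (float delU, float delV, float delN)
{
	eye.x += delU * u.x + delV * v.x + delN * n.x;
	eye.y += delU * u.y + delV * v.y + delN * n.y;
	eye.z += delU * u.z + delV * v.z + delN * n.z;
	setModelViewMatrix ();	// tell OpenGL about the new camera position
}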


Rotating the Camera:
 We want to roll, pitch or yaw the camera.
 Each of these involves a rotation of the camera about one of its own axes.
 To roll the camera is to rotate it about its own n-axis.
 This means that both the directions u and v must be rotated, as shown in the figure.
 We form two new axes u' and v' that lie in the same plane as u and v.
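 For example, roll() can be implemented by rotating u and v about n (a sketch consistent with the class above; it assumes vector3 provides a set() method):

void camera :: roll (float angle)
{
	float cs = cos (3.14159265 / 180 * angle);	// degrees to radians
	float sn = sin (3.14159265 / 180 * angle);
	vector3 t = u;	// remember the old u
	u.set (cs * t.x - sn * v.x, cs * t.y - sn * v.y, cs * t.z - sn * v.z);
	v.set (sn * t.x + cs * v.x, sn * t.y + cs * v.y, sn * t.z + cs * v.z);
	setModelViewMatrix ();
}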
 The functions pitch() and yaw() are implemented in a similar fashion.

CREATING SHADED OBJECTS


 Shading is a process used in drawing for depicting levels of darkness on paper by
applying media more densely or with a darker shade for darker areas, and less
densely or with a lighter shade for lighter areas.


 There are various techniques of shading, including cross hatching, where perpendicular lines of varying closeness are drawn in a grid pattern to shade an area.
 The closer the lines are together, the darker the area appears.

 Likewise, the farther apart the lines are, the lighter the area appears.
 Light patterns, such as objects having light and shaded areas, help when creating
the illusion of depth on paper.


 Fly a camera through space looking at various polygonal mesh objects.
 Include ambient, diffuse and specular light components.
 Provide a keystroke that switches between flat and smooth shading.

SHADING METHODS

1. Circulism:

This is a very popular shading method among artists.


The idea is to draw very tiny circles that overlap and intertwine.
Building up tone can be tedious but the results are worth it.
This shading method is great for rendering a realistic skin texture.
Use a light touch and build up tone.
2. Blended circulism:

Graphite is scribbled onto the paper just as in the last method.


Using a blending stump, the graphite is blended in small circular motions.
This shading method is also great for skin textures.
3. Dark Blacks:

If a user wants dark blacks, try using charcoal.
For a dark tone, apply the graphite heavily to the paper.
Any time a user is dealing with dark tones and graphite, there will be a shine that results.


This happens because the tooth of the paper absorbs the graphite quickly
and there are extra layers left on top. Glare/shine is a reality when working
with graphite.

4. Loose Cross Hatching:

It is simple and effective.
It is very loose-looking, too.
The basic idea of crosshatching is to overlap lines.
Start by drawing a set of diagonal lines next to each other.
Then rotate the drawing 90 degrees and draw another set of diagonal lines
that overlap the first set.

This can be repeated numerous times to build up tone. Crosshatching can be as tight or as loose as desired.

5. Tight Cross Hatching:

Using the ideas from loose crosshatching, this shading method takes it a
little further.

Tone is built up through repetition and a soft touch.
This shading method works really well for animals.
It's not perfect, and some paper tooth will show through.
6. Powder Shading:

Powder shading is a sketching shading method.


In this style, the stumping powder and paper stumps are used to draw a
picture.

The stumping powder is smooth and doesn't have any shiny particles.


The poster created with powder shading looks more beautiful than the
original. The paper to be used should have small grains on it so that the
powder remains on the paper.

RENDERING THE TEXTURE:
 Rendering the texture in a face F is similar to Gouraud shading: it proceeds across the face pixel by pixel.
 For each pixel it must determine the corresponding texture coordinates (s, t) and set the pixel to the proper color.
 The following figure shows the camera taking a snapshot of a face F with texture pasted on it, and the rendering in progress.
 The scan line y is being filled from x_left to x_right.


 For each x along this scan line, it must compute the correct position on the face, p(xs, ys), and from that obtain the correct position (s*, t*) within the texture.
 The following diagram shows the incremental calculation of texture coordinates.

DRAWING SHADOWS
 Make one of the objects in the scene a flat planar surface, on which is seen
shadows of other objects.


 A simple way of drawing a drop shadow of a rectangular object is to draw a gray or black area underneath, and offset from, the object.
 In general, a drop shadow is a copy in black or gray of the object, drawn in a slightly different position. Realism may be increased by:
	i. Darkening the colors of the pixels where the shadow is cast instead of making them gray. This can be done by alpha blending the shadow with the area it is cast on.
	ii. Softening the edges of the shadow. This can be done by adding Gaussian blur to the shadow's alpha channel before blending.

 Shadows are one of the most important visual cues that we have for understanding
the spatial relationships between objects.
 Unfortunately, even modern computer graphics technology has a difficult time
drawing realistic shadows at an interactive frame rate.
 One trick that you can use is to pre-render shadows and then apply them to the
scene as a textured polygon.
 This allows the creation of soft shadows and allows the computer to maintain a
high frame rate while drawing shadows.
Step 1: Activate and position the shadows

First, activate the shadows and position them using SketchUp's Shadows toolbar.
Step 2: Draw the Shadows Only

Next, render the shadows without the geometry.
To do this, create two pages in SketchUp.
Put the objects in the scene in a different layer than Layer0,


so that the visibility of the layer containing the objects can be toggled.
Have the first page draw the shadows and show the layer containing the objects in the scene.
Have the second page hide the layer containing the objects in the scene.
When the user moves from the first page to the second page, the objects will disappear, leaving only the shadows.

Step 3: Draw the Shadows from Above

Next, position the camera to view the shadows from directly above so that we can
use the resulting image to draw the shadows onto a ground plane polygon.

Step 4: Soften the Shadows

The shadows that are rendered by SketchUp always have hard edges.
In order to make the shadows look more realistic, we can soften the shadows
using software such as Photoshop or Gimp that includes an image blur tool.

When you create the shadow image, you can use the alpha channel of the
image to make portions of the image transparent.

Step 5: Create a new Shadow Material

Next is to create a new material that uses the soft shadow image from the
previous step as a texture.

If the image that we created in the previous step has an alpha channel, then the
alpha channel will be used to carve out transparent areas in the shadow
material.


Step 6: Apply the material to a ground polygon

Last, create a ground polygon that underlies the objects in the scene and apply
the shadow material to it.

This will create a semi-transparent polygon where the dark patches are the shadow areas. Since the shadows are pre-computed, you should turn off the Shadow option in SketchUp.

In computer graphics, shading refers to the process of altering a color based on a surface's angle to lights and its distance from lights, to create a photorealistic effect.
Shading is performed during the rendering process.

Shadow Mapping:
 Shadow mapping is just one of many different ways of producing shadows in our
graphics applications.
 Shadow mapping is an image space technique, working automatically with objects
created.

Advantages:
No knowledge or processing of the scene geometry is required.
Only a single texture is required to hold shadowing information for each light.
Avoids the high fill requirement of shadow volumes.

Disadvantages:
Aliasing, especially when using small shadow maps.
The scene geometry must be rendered once per light in order to generate the
shadow map for a spotlight.


UNIT V
FRACTALS & SELF SIMILARITY

Fractal:
 A fractal is a rough or fragmented geometric shape that can be split into parts, each of which is a reduced copy of the whole.
 Such a property is called self-similarity.
Self Similarity:
 A self similar object is exactly or approximately similar to a part of itself.
 Self Similarity is a typical property of fractals.
 Computers are particularly good at repetition.
 They will do something again and again without complaint.
 Recursion often makes a difficult geometric task extremely simple.
 Among other things, it lets one decompose or refine shapes into ever smaller ones, conceptually ad infinitum.

[ ad infinitum = infinity; continue forever, without limit ]

Self-Similar Curves:
 Many curves and pictures have the property called self-similarity.
 Some curves are exactly self-similar,
 and some curves are statistically self-similar.
Exactly Self Similar:
 If a region is enlarged, the enlargement looks exactly like the original.
Statistically Self Similar:
 The wiggles and irregularities in the curve are the same on the average.


Example:
 Nature provides examples that mimic statistical self-similarity.
 The classic example is a coastline.

Mandelbrot:
 The mathematician Benoit Mandelbrot brought together and popularized investigations into the nature of self-similarity.
 He called the various forms of self-similar curves fractals.
 A line is one-dimensional and a plane is two-dimensional, but there are creatures in between them.
 We shall define curves that are infinite in length yet lie inside a finite rectangle.

[Stirred: being excited or provoked to the expression of an emotion.]

Koch Curve:
 Very complex curves can be furnished recursively by repeatedly refining a simple curve.
 The simplest example perhaps is the Koch curve, discovered by the mathematician Helge von Koch.
 This curve stirred great interest in the mathematical world because it produces an infinitely long line within a region of finite area.
 Successive generations of Koch curves are denoted by K0, K1, K2, ...

TWO GENERATIONS OF THE KOCH CURVE

 The Zeroth generation shape K0 is just a horizontal line of length unity.


 The curve K1 is shown in the above figure.
 To create K1, divide the line K0 into three equal parts and replace the middle section with a triangular bump having sides of length 1/3.
 The total length of the line is evidently 4/3.
 The second-order curve K2 is formed by building a bump on each of the four line segments of K1.
 In this process, each segment is increased in length by a factor of 4/3, so the total length of the curve is 4/3 larger than that of the previous generation.
 Thus Ki has total length (4/3)^i, which increases as i increases.
 As i tends to infinity, the length of the curve becomes infinite.
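 A recursive routine illustrates the refinement from one generation to the next (a sketch: Point2 is a simple struct with float x, y, and drawLine() is an assumed drawing helper; the constants 0.5 and 0.866 are cos 60° and sin 60°, which place the apex of the bump):

// Draw the Koch curve of generation n from point A to point B.
void koch (Point2 A, Point2 B, int n)
{
	if (n == 0) { drawLine (A, B); return; }
	Point2 d  = { (B.x - A.x) / 3, (B.y - A.y) / 3 };	// one third of the segment
	Point2 p1 = { A.x + d.x, A.y + d.y };			// the 1/3 point
	Point2 p3 = { A.x + 2 * d.x, A.y + 2 * d.y };		// the 2/3 point
	Point2 p2 = { p1.x + 0.5f * d.x + 0.866f * d.y,		// apex of the bump:
	              p1.y + 0.5f * d.y - 0.866f * d.x };	// d rotated by -60 degrees
	koch (A, p1, n - 1);  koch (p1, p2, n - 1);
	koch (p2, p3, n - 1); koch (p3, B, n - 1);
}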
Koch Snowflake:

THE FIRST FEW GENERATIONS OF THE KOCH SNOWFLAKE
 It is formed out of three Koch curves joined together.
 The perimeter of the ith-generation shape Si is three times the length of a simple Koch curve, and so is 3(4/3)^i.
 The following figure shows the third, fourth and fifth generations of the Koch snowflake.


KOCH SNOWFLAKE: S3, S4, AND S5

PEANO CURVES (OR) SPACE-FILLING CURVES
 Peano curves are fractal-like structures that are drawn through a recursive process.
 Some of the curves shown below are space-filling curves, or Peano curves.


 Such curves have a fractal dimension of 2.
 They completely fill a region of space.
Example:
 The two most famous Peano curves are the Hilbert and Sierpinski curves.
 Some low-order Hilbert curves are shown below.

CREATING AN IMAGE BY ITERATED FUNCTIONS
 Another way to approach infinity is to apply a transformation to a picture again and again and examine what results.
 This technique provides another fascinating way to create fractal shapes.
 This idea was developed by Barnsley, in which an image can be represented by a handful of numbers.

Experimental Copier
 We take an initial image I0 and put it through a special photocopier that produces a new image I1, as shown in the figure.


Making New Copies From Old
 I1 is not just a simple copy of I0; rather, it is a superposition of several reduced versions of I0.
 We then take I1 and feed it back into the copier again, to produce image I2.
 We repeat this process forever, obtaining a sequence of images I0, I1, I2, ... called the orbit of I0.

Sierpinski Copier
 Consider a specific example of a copier that we might call the super copier, or S-copier.
 It superimposes three smaller versions of whatever image is fed into it.


 The figure shows what one pass through the S-copier produces when the input is the letter F.
 The three smaller images could just as well overlap.
 The following figure shows the first few iterates that the S-copier produces.

The first part of the orbit of I0 for the S-copier

 The figure suggests that the iterates converge to the Sierpinski triangle.
 At each iteration the individual component F's become one-half as large and they triple in number.
 As more and more iterations are made, the F's approach dots in size, and these dots are arranged in a Sierpinski triangle.


 The final image does not depend on the shape of the F at all, but only on the nature of the copier.

How does the S-copier make the images?
 It contains three lenses.
 Each of them reduces the input image to one-half its size and moves it to a new position.
 These three reduced and shifted images are superposed on the printed output.
 Scaling and shifting are easily done by affine transformations.

MANDELBROT SETS
 The Mandelbrot set is a mathematical set of points, whose boundary generates a
distinctive and easily recognisable two dimensional fractal shape.

 The set is closely related to the Julia Set.


 It generates similarly complex shapes.
 This is named after the mathematician Benoit Mandelbrot.
Iteration Theory

 Julia and Mandelbrot sets arises from a branch of analysis known as iteration
theory (or dynamical systems theory)

 This theory asks what happens when one iterates a function endlessly.
Mandelbrot Sets and Iterated Function Systems

 A view of the Mandelbrot Set is shown in following figure.


 It is the black inner portion.
 It appears to consist of a cardioid along with a number of wart-like circles glued to it.
 In actuality, its border is astoundingly complicated.
 Its complexity can be explored by zooming in on a portion of the border and computing a close-up view.
 In theory, the zooming can be repeated forever.
 The border is infinitely complex; in fact, it is a fractal curve.
 Each point in the figure is shaded or colored according to the outcome of an experiment run on an IFS.
 The IFS of interest is shown in the figure.


The iterated function system for Julia and Mandelbrot sets
 It uses the particularly simple function
	f(z) = z^2 + C
 where C is some constant.
 That is, the system produces each output by squaring its input and adding C.
 We assume that the process begins with the starting value S.
 So the system generates the sequence of values, or orbit:
	d1 = S^2 + C
	d2 = (S^2 + C)^2 + C
	d3 = ((S^2 + C)^2 + C)^2 + C
	d4 = (((S^2 + C)^2 + C)^2 + C)^2 + C
 The orbit depends on two ingredients:
	i. the starting point S;
	ii. the given value of C.
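 In practice, one tests for each value of C (starting from S = 0) how quickly the orbit "explodes" past magnitude 2; the iteration count is then mapped to a shade or color. A minimal sketch, treating complex numbers as (x, y) pairs:

// Return the number of iterations before the orbit of 0 under z -> z*z + c
// exceeds magnitude 2 (points whose orbits never escape are taken to be
// inside the Mandelbrot set).
int dwell (double cx, double cy, int maxIter)
{
	double x = 0, y = 0;
	for (int k = 0; k < maxIter; k++)
	{
		double x2 = x * x - y * y + cx;	// real part of z*z + c
		y = 2 * x * y + cy;		// imaginary part of z*z + c
		x = x2;
		if (x * x + y * y > 4.0) return k;	// |z| > 2: the orbit explodes
	}
	return maxIter;	// presumed to be inside the set
}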


JULIA SETS
 The Mandelbrot set and Julia sets are extremely complicated sets of points in the complex plane.
 There is a different Julia set, denoted Jc, for each value of C.
 A closely related variation is the filled-in Julia set, denoted by Kc, which is easier to define.

The Filled-in Julia Set Kc
 Consider the same iterated function system.
 Now C is set to a fixed, chosen value, and we examine what happens for different starting points.
Drawing Filled-in Julia Sets
 The process of drawing a filled-in Julia set is almost identical to that for the Mandelbrot set.


 We again choose a window in the complex plane and associate pixels with points in the window.
 However, the pixels now correspond to different starting points S (for the Mandelbrot set they correspond to different values of C).
 A single value of C is chosen, and then the orbit for each pixel position is examined to see whether it explodes.

RANDOM FRACTALS
 Fractal shapes are completely deterministic.
 They are completely predictable (even though they are very complicated)
 In graphics, the term fractal has become widely associated with randomly generated curves and surfaces that exhibit a degree of self-similarity.
 These curves are used to produce naturalistic shapes for representing objects such as ragged mountains, grass and fire.

Fractalizing a segment
 The simplest random fractal is formed by recursively roughening, or fractalizing, a line segment.
 At each step, each line segment is replaced with an elbow: its midpoint is displaced by a random amount.

[Figure: segment S from A to B, with midpoint M; point C is displaced by a random amount along the perpendicular bisector L, and S is replaced by the elbow A-C-B.]


 The above figure shows this process applied to the line segment S having the endpoints A and B.
 S is replaced by two line segments, from A to C and from C to B.
 For a fractal curve, point C is randomly chosen along the perpendicular bisector L of S.
Stages of Fractalization

 There are three stages of fractalization.


First Stage:
 The midpoint of AB is perturbed to form point C.

Note:
[Perturbed: displaced slightly. In mathematics, perturbation methods give approximate solutions to problems that cannot be solved exactly.]

Second Stage:
 Each of the two segments has its midpoint perturbed to form points D and E.


Third Stage:
 At the final stage, new points F, ..., I are added.

Calculation of fractalization in a program




 Line L passes through the midpoint M of segment S and is perpendicular to it.
 A point C along L has the parametric form
	C(t) = M + (B - A)⊥ t,   where M = (A + B)/2
 and (B - A)⊥ denotes the vector B - A rotated 90 degrees (this perpendicular appears in the code below).
 For most fractal curves, t is modelled as a Gaussian random variable with zero mean and some standard deviation.

 The routine fract() is shown below.
 It generates curves that are approximations of actual fractals.
 This routine recursively replaces each segment with a random elbow.

void fract (Point2 A, Point2 B, double stdDev)
{
	// generate a fractal curve from A to B
	double xDiff = A.x - B.x, yDiff = A.y - B.y;
	Point2 C;
	if (xDiff * xDiff + yDiff * yDiff < minLenSq)
		cvs.lineTo (B.x, B.y);
	else
	{
		stdDev *= factor;	// scale stdDev by the factor
		double t = 0;
		// make an approximately Gaussian random value
		for (int i = 0; i < 12; i++)
			t += rand () / 32768.0;
		t = (t - 6) * stdDev;	// shift the mean to 0 and scale
		C.x = 0.5 * (A.x + B.x) - t * (B.y - A.y);
		C.y = 0.5 * (A.y + B.y) + t * (B.x - A.x);
		fract (A, C, stdDev);
		fract (C, B, stdDev);
	}
}

Drawing a Fractal Curve

double minLenSq, factor;	// global variables
void drawFractal (Point2 A, Point2 B)
{
	double beta, stdDev;
	// ... set beta (which controls roughness) and stdDev ...
	factor = pow (2.0, (1.0 - beta) / 2.0);
	cvs.moveTo (A);
	fract (A, B, stdDev);
}

OVERVIEW OF RAY TRACING

 Ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane and simulating the effects of its encounters with virtual objects.
 This technique is used to produce a very high degree of visual realism, usually higher than that of scanline rendering, but at a greater computational cost.


Introduction:
 Ray tracing (ray casting) provides a related but even more powerful approach to rendering scenes.

 The following figure gives the basic idea

 Think of the frame buffer as a simple array of pixels positioned in space, with the eye looking through it into the scene.
 The general question is: what does the eye see through each pixel?
 A ray of light arrives at the eye through the pixel from some point P in the scene.
 The colour of the pixel is determined by the light that emanates along the ray from point P in the scene.


Reverse Process
 In reality the process is reversed.
 A ray is cast from the eye through the centre of the pixel and out into the scene.
 Its path is traced to see what object it hits first, and at what point.
 This process automatically solves the hidden-surface problem: the first surface hit by the ray is the closest object to the eye.
 For each light source described in the scene, the shading model is applied to the point first hit, and the components of light are computed.
 The resulting colour is then displayed in the pixel.
Features of Ray Tracing:
 Some interesting visual effects are easily incorporated:
	shadowing
	reflection
	refraction
 These provide dazzling realism that is difficult to create by any other method.
 Ray tracing also has the ability to work comfortably with richer geometric primitives such as
	spheres,
	cones and
	cylinders.


OVERVIEW OF THE RAY-TRACING PROCESS

 The following code segment shows the basic steps in a ray tracer.

define the objects and light sources in the scene
set up the camera
for (int r = 0; r < nRows; r++)
	for (int c = 0; c < nCols; c++)
	{
		1. Build the rc-th ray.
		2. Find all intersections of the rc-th ray with objects in the scene.
		3. Identify the intersection that lies closest to, and in front of, the eye.
		4. Compute the hit point where the ray hits this object, and the normal vector at that point.
		5. Find the color of the light returning to the eye along the ray from the point of intersection.
		6. Place the color in the rc-th pixel.
	}

 The scene to be ray traced contains geometric objects and light sources.
 A typical scene may contain
	spheres,
	cones,
	boxes,
	cylinders, etc.
 The objects are described in some fashion and stored in an object list.


Computing the hit point:
 When all objects have been tested, the object with the smallest hit time is the closest to the eye, and the location of the hit point on that object is found.

Computing the color:
 The colour of the light that the object sends back in the direction of the eye is computed and stored in the pixel.
 The following figure shows a simple scene consisting of some cylinders, spheres and cones.

 The snowman consists mainly of spheres.
 Two light sources are also shown.

Object List:
 Descriptions of all the objects are stored in an object list.
 This is a linked list of descriptive records, as shown below.
 The ray that is shown intersects a sphere, a cylinder and two cones.
 All the other objects are missed.


 The object with the smallest hit time, a cylinder in this scene, is identified.
 The hit spot Phit is easily found from the ray equation:
	Phit = eye + dir_rc * t_hit	(hit spot)
INTERSECTING OF A RAY WITH AN OBJECT

 Consider the following code:

Scene scn;			// create a scene
scn.read ("myScene.dat");	// read the SDL scene file

 Objects in the scene are created and placed in a list.
 Each object is an instance of a generic shape, such as a sphere or cone.
 The following figure shows some of the generic shapes we shall be ray tracing.

Some common generic shapes used in ray tracing

 The implicit form of the generic sphere is
	F(x, y, z) = x^2 + y^2 + z^2 - 1
 For convenience, we use the notation F(p):
	F(p) = |p|^2 - 1
Generic Cylinder
	F(x, y, z) = x^2 + y^2 - 1   for 0 < z < 1
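 Substituting the ray p(t) = eye + dir·t into F(p) = |p|^2 - 1 = 0 gives a quadratic in t. A sketch of the resulting hit test (Vector3 and dot() are assumed helpers):

// Intersect the ray eye + dir*t with the generic sphere |p|^2 = 1.
// On a hit, returns true and the smallest positive hit time in tHit.
bool hitSphere (Vector3 eye, Vector3 dir, float &tHit)
{
	float A = dot (dir, dir);
	float B = dot (eye, dir);
	float C = dot (eye, eye) - 1.0f;
	float disc = B * B - A * C;	// discriminant of A t^2 + 2B t + C = 0
	if (disc < 0) return false;	// the ray misses the sphere
	float t = (-B - sqrt (disc)) / A;	// try the nearer root first
	if (t <= 0) t = (-B + sqrt (disc)) / A;	// the eye may be inside the sphere
	if (t <= 0) return false;
	tHit = t;
	return true;
}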

ADDING SURFACE TEXTURE

 Computer generated images can be made much more lively and realistic by
painting textures on various surfaces.

 The following figure shows a ray-traced scene with several examples of textures.
 OpenGL is used to render each face.
 For each face F, a pair of texture co-ordinates is attached to each vertex of face.
 Then openGL painted each pixel inside the face by using the colour of the
corresponding point within a texture image.

 Two principal types of texture can be used:
	i. solid texture
	ii. image texture

Solid Texture

 Solid texture is sometimes called 3D texture.


 The object is considered to be carved out of a block of solid material that itself has
texturing.

 The ray tracer reveals the colour of the texture at each point on the surface of the
object.

 The texture is represented by a function texture (x,y,z) that produces an (r,g,b)


colour value at every point in space.


Example:
 Imagine a 3D checkerboard made up of alternating red and black cubes stacked up throughout all of space.
 We position one of the cubelets with a vertex at (0, 0, 0) and give it the size S = (S.x, S.y, S.z).
 All other cubes have this same size (a width of S.x, a height of S.y, etc.) and are placed adjacent to one another in all three dimensions.
 It is easy to write an expression for such a checkerboard texture:
	jump(x, y, z) = ((int)(A + x/S.x) + (int)(A + y/S.y) + (int)(A + z/S.z)) % 2
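 As code, the jump function might be written like this (a sketch: S is the global cubelet size from above, and A is a large positive constant chosen so that the integer casts truncate consistently for negative coordinates):

// Alternating 3D checkerboard: returns 0 or 1 depending on which cubelet
// the point (x, y, z) lies in.
int jump (float x, float y, float z)
{
	const float A = 100000.0f;	// large offset to keep the arguments positive
	return ((int)(A + x / S.x) + (int)(A + y / S.y) + (int)(A + z / S.z)) % 2;
}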

 The following figure shows a generic sphere and generic cube composed of
material with this solid texture

Ray tracing of some objects with checkerboard solid texture


 The colour of the material is the colour of the texture.
 Notice that the sphere and the cube are clearly made up of solid cubelets.

REFLECTIONS & TRANSPARENCY

 One of the great strength of the ray-tracing method is the ease with which it can
handle both reflection and refraction of light.

 This allows us to build scenes of exquisite realism, containing
	mirrors,
	fishbowls,
	lenses, etc.
 There can be multiple reflections in which light bounces off several shiny
surfaces before reaching the eye.

 These processes require the spawning and tracing of additional rays.


 The following figure shows a ray emanating from the eye,
In the direction dir
And hitting a surface at the point Ph.


 When the surface is mirror-like or transparent (or both), the light I that reaches the eye may have five components:
	I = Iamb + Idiff + Ispec + Irefl + Itran
where
	Iamb : ambient component
	Idiff : diffuse component
	Ispec : specular component
	Irefl : reflected light component, arising from the light IR
	Itran : transmitted light component, arising from the light IT
 The first three are the familiar ambient, diffuse and specular contributions.
 The diffuse and specular parts arise from light sources in the environment that are visible at Ph.
 The following figure shows how the number of contributions of light grows at
each contact point.


 I is the sum of three components


Reflected component R1
Transmitted component T1
Local component L1.

Local component:
 It is simply the sum of the usual ambient, diffuse and specular reflections at Ph.
 Local components depend only on actual light sources.
 They are not computed on the basis of casting secondary rays.


 Figure (b) abstracts the various light components into a tree of light contributions.
 The transmitted components arrive on the left branches.
 The reflected components arrive on the right branches.
 At each node a local component must also be added, but for simplicity it is not shown.

The refraction of light:
 When a ray of light strikes a transparent object, a portion of the ray penetrates the object, as shown in the figure.
 The ray will change direction from dir to t if the speed of light is different in medium 1 than in medium 2.
 If the angle of incidence of the ray is θ1, Snell's law states that the angle of refraction θ2 satisfies
	sin(θ2) / c2 = sin(θ1) / c1


 Here c1 is the speed of light in medium 1, and c2 is the speed of light in medium 2.
 Only the ratio c2/c1 is important.
 It is often called the index of refraction of medium 2 with respect to medium 1.
 If θ1 equals zero, then so does θ2: light hitting an interface at right angles is not bent.

BOOLEAN OPERATIONS ON OBJECTS

 According to CSG, complex shapes are defined by set operations (also called Boolean operations) on simpler shapes.
 Objects such as lenses and hollow fishbowls are easily formed by combining the generic shapes.
 Such objects are variously called compound objects, Boolean objects or CSG objects.
 The ray-tracing method extends in a very organized way to compound objects.
 It is one of the great strengths of ray tracing that it fits so naturally with CSG models.
 We look at examples of three Boolean operators:
	union
	intersection
	difference
 The following figures show compound objects built from spheres.


 Fig (a) is a lens shape constructed as the intersection of two spheres.
 That is, a point is in the lens if and only if it lies in both spheres.
 Symbolically, L is the intersection of the spheres S1 and S2, written as
	L = S1 ∩ S2
 Fig (b) shows a bowl, constructed using the difference operation.
 Applying the difference operation is analogous to removing material, to cutting or carving.
 The bowl is specified by
	B = (S1 - S2) - C
 The solid globe S1 is hollowed out by removing all the points of the inner sphere S2.
 The top is then opened by removing all points in the cone C.


UNION OF FOUR PRIMITIVES
 A point is in the union of two sets A and B, denoted A ∪ B, if it is in A or in B or in both.
 The following figure shows a rocket constructed as the union of two cones and two cylinders.
 That is,
	R = C1 ∪ C2 ∪ C3 ∪ C4
 Cone C1 rests on cylinder C2.
 Cone C3 is partially embedded in C2 and rests on the fatter cylinder C4.


QUESTION BANK
UNIT I
2D PRIMITIVES

PART A

1.Define Output Primitives


 Graphics programming packages provide functions to describe a scene in terms of
these basic geometric structures, referred to as output primitives.

2.What are Simple geometric components?


 Points and straight line segments are the simplest geometric components of
pictures.

3.What are Additional output primitives?


 That can be used to construct a picture include
 circles and other conic sections,
 quadric surfaces,
 spline curves and surfaces,
 polygon color areas, and
 character strings.
4.Define Random-scan system or Vector System.
 It stores point-plotting instructions in the display list, and coordinate values in
these instructions are converted to deflection voltages that position the electron
beam at the screen locations to be plotted during each refresh cycle.

6.How is a straight line drawn on analog display devices?


 For analog devices, such as a vector pen plotter or a random-scan display, a


straight line can be drawn smoothly from one endpoint to the other.
 Linearly varying horizontal and vertical deflection voltages are generated that are
proportional to the required changes in the x and y directions to produce the
smooth line.

7.How is a straight line drawn on digital display devices?


 Digital devices display a straight line segment by plotting discrete points between
the two endpoints.
 Discrete coordinate positions along the line path are calculated from the equation
of the line.

8.What is Stair step Effect (jaggies)?


 For a raster video display, the line color (intensity) is then loaded into the frame
buffer at the corresponding pixel coordinates.
 Reading from the frame buffer, the video controller then "plots" the screen pixels.
 Screen locations are referenced with integer values.
 So plotted positions may only approximate actual line positions between two specified endpoints.


9.How are pixel positions referenced?
 Pixel positions are referenced by scan-line number and column number.

10.What is getpixel ( ) function?


 Sometimes we want to be able to retrieve the current frame buffer intensity setting
for a specified location.
 We accomplish this with the low-level function
getpixel (x, y )

11.What are Line Equations?
The slope-intercept equation:
	y = m·x + b
where m is the slope of the line and b is the y-intercept.

12.What are Circle Equations?
General form:
	(x - xc)^2 + (y - yc)^2 = r^2
Circle equation in polar form:
	x = xc + r cos θ
	y = yc + r sin θ
Circle midpoint-method function:
	f(x, y) = x^2 + y^2 - r^2

13.What are Ellipse equations?
General ellipse equation:
	((x - xc)/rx)^2 + ((y - yc)/ry)^2 = 1
Ellipse equation in polar form:
	x = xc + rx cos θ
	y = yc + ry sin θ
Ellipse midpoint-method function:
	f(x, y) = ry^2 x^2 + rx^2 y^2 - rx^2 ry^2

14.Define Ellipse, or state the properties of an ellipse.
 An ellipse is an elongated circle.
 An ellipse is defined as the set of points such that the sum of the distances from two fixed positions (foci) is the same for all points.


15.What are Major and Minor axes in Ellipse?


Major Axes
 The major axis is the straight line segment extending from one side of the ellipse
to the other through the foci.

Minor Axes
 The minor axis spans the shorter dimension of the ellipse, bisecting the major axis
at the halfway position (ellipse center) between the two foci.

16.What is Attribute parameter?


 Any parameter that affects the way a primitive is to be displayed is referred to as
an attribute parameter.
 Some attribute parameters, such as
color and
size

17.What are the basic attributes of line?


 Basic attributes of a straight line segment are its
type,
its width, and
its color.

18.What are the Line type attribute?


 Line-type attributes are
Solid Line
Dotted Line
Dashed Line
Dash-Dotted Line


19.Define Direct storage scheme.


 With the direct storage scheme, whenever a particular color code is specified in an
application program, the corresponding binary value is placed in the frame buffer
for each-component pixel in the output primitives to be displayed in that color.
 A minimum number of colors can be provided in this scheme with 3 bits of storage
per pixel

20.Define Grayscale.
 With monitors that have no color capability, color functions can be used in an
application program to set the shades of gray, or grayscale, for displayed
primitives.
 Numeric values over the range from 0 to 1 can be used to specify grayscale levels,
which are then converted to appropriate binary codes for storage in the raster.
 This allows the intensity settings to be easily adapted to systems with differing
grayscale capabilities.

21.Tabulate the four-level grayscale system.

Intensity code   Stored binary value   Displayed grayscale
0                00                    0.0 (black)
1                01                    0.33 (dark gray)
2                10                    0.67 (light gray)
3                11                    1.0 (white)


22. What are Area-Fill Attributes?


 Options for filling a defined region include a choice between a solid color or a
patterned fill.
 These fill options can be applied to polygon regions or to areas defined with
curved boundaries.
 In addition, areas can be painted using various brush styles, colors, and
transparency parameters.

23.What are the types of fill styles?


Fill Styles
 Areas are displayed with three basic fill styles:
hollow with a color border,
filled with a solid color, or
filled with a specified pattern or design.

24.What are Character Attributes?


 The appearance of displayed characters is controlled by attributes such as
font,
size,
color, and orientation.

25.List out the styles of the characters.


The characters in a selected font can also be displayed with assorted styles,
such as:


Bold face
Underline
Italics

26.What are the transformations available in 2D?


 The basic geometric transformations are
translation,
rotation, and
scaling.
 Other transformations that are often applied to objects include
reflection and
shear.

27.What is Translation?
 A translation is applied to an object by repositioning it along a straight-line path
from one coordinate location to another.


28.What is Rotation?
 A two-dimensional rotation is applied to an object by repositioning it along a
circular path in the xy plane.
 To generate a rotation, we specify a rotation angle and the position (x1,y1) of the
rotation point (or pivot point) about which the object is to be rotated.
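
A minimal sketch of this rotation about a pivot point (a hypothetical helper built
from the standard 2D rotation equations):

#include <math.h>
// Rotate point (*x, *y) by angle theta (radians) about pivot (xr, yr).
void rotatePoint(float *x, float *y, float xr, float yr, float theta)
{
    float dx = *x - xr, dy = *y - yr;              // offsets from the pivot
    *x = xr + dx * cosf(theta) - dy * sinf(theta);
    *y = yr + dx * sinf(theta) + dy * cosf(theta);
}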

29.What is Scaling?
A scaling transformation alters the size of an object.
This operation can be carried out for polygons by multiplying the coordinate
values (x, y) of each vertex by scaling factors sx and sy to produce the
transformed coordinates (x', y'):
    x' = x . sx,    y' = y . sy

30.What is differential scaling?


 When sx, and sy, are assigned the same value, a uniform scaling is produced that
maintains relative object proportions.
 Unique values for sx, and sy, result in a differential scaling.


31.What is Reflection?
 A reflection is a transformation that produces a mirror image of an object.
The mirror image for a two-dimensional reflection is generated relative to an axis
of reflection by rotating the object 180° about the reflection axis.

32.What is Shear?
 A transformation that distorts the shape of an object such that the transformed
shape appears as if the object were composed of internal layers that had been
caused to slide over each other is called a shear.
 Two common shearing transformations are those that shift coordinate x values and
those that shift y values.

PART B
1.Explain in detail Line Drawing algorithms with example.
2.Explain in detail Circle Drawing algorithms with example.
3.Explain in detail Ellipse Drawing algorithms .


4.Explain in detail about attributes of output primitives.


5.What are the 2D transformations available? Explain any two.
6.Explain in detail about 2D viewing.
7.What is Line Clipping? Explain in detail Cohen-Sutherland line clipping.
8.What is polygon clipping? Explain in detail about Sutherland-Hodgman polygon
clipping.

UNIT II
3D CONCEPTS
PART A

1.What is mean by Perspective?

Perspective: The appearance of things relative to one another as determined by
their distance from the viewer.

2.What is Depth Cueing?


 Depth information is important so that we can easily identify, for a particular
viewing direction, which is the front and which is the back of displayed objects.

3.What are the types of Projections?


 There are two basic projection methods.
Parallel Projection
Perspective Projection

4.What is Parallel Projection?


In a parallel projection, coordinate positions are transformed to the view plane
along parallel lines.
A parallel projection preserves relative proportions of objects.

5.What is Perspective Projection?


 For a perspective projection, object positions are transformed to the view plane
along lines that converge to a point called the projection reference point (or center
of projection).
 A perspective projection, on the other hand, produces realistic views but does not
preserve relative proportions.

6.What are the types of Parallel projections?


There are two types of parallel projections,
1. Orthographic parallel projection.
2. Oblique parallel projection


7.Differentiate Parallel projection and Perspective projection.

Sn  Parallel Projection                        Perspective Projection
1.  It preserves relative proportions of       It does not preserve relative
    objects.                                   proportions of objects.
2.  It does not give us a realistic            It gives us a realistic representation
    representation of 3D objects.              of 3D objects.
3.  Coordinate positions are transformed       Object positions are transformed to
    to the view plane along parallel lines.    the view plane along lines that
                                               converge to the projection
                                               reference point.
4.  All objects appear the same size.          Projections of distant objects are
                                               smaller; closer objects appear
                                               larger.

8.What is projection reference point?


 For a perspective projection, object positions are transformed to the view plane
along lines that converge to a point called the projection reference point (or center
of projection).

9.What are the types of 3D representations?


 Representation schemes for solid objects are often divided into two broad
categories,
1. Boundary representations
2. Space-partitioning representations

10.What are Boundary representations?


 Boundary representations (B-reps) describe a three-dimensional object as a set of
surfaces that separate the object interior from the environment.


 Typical examples of boundary representations are polygon facets and spline


patches.

11. What is Space-partitioning representation?


 Space-partitioning representations are used to describe interior properties, by
partitioning the spatial region containing an object into a set of small,
non-overlapping, contiguous solids (usually cubes).
 A common space-partitioning description for a three-dimensional object is an octree
representation.

12.What is Polygon Table?


 We specify a polygon surface with a set of vertex coordinates and associated
attribute parameters.
Information for each polygon is placed into tables that are used in the
subsequent processing, display, and manipulation of the objects in a scene.

13.What are the types of Polygon tables?


 Polygon data tables can be organized into two groups:
1. geometric tables and
2. attribute tables.

14.What is Geometric table?


It contains vertex coordinates and parameters to identify the spatial orientation of
the polygon surfaces.


15.What is Attribute table?


 It includes parameters specifying the degree of transparency of the object and its
surface reflectivity and texture characteristics.

16.What are the lists created by the Geometric table?


 A convenient organization for storing geometric data is to create three lists:
1. a vertex table,
2. an edge table, and
3. a polygon table.

17.What is Polygon Mesh?


 Some graphics packages provide several polygon functions for modeling objects.
 A single plane surface can be specified with a function such as fillArea.
 But when object surfaces are to be tiled, it is more convenient to specify the
surface facets with a mesh function.

18.What is Triangle strip?


 One type of polygon mesh is the triangle strip.

This function produces n - 2 connected triangles (see the Triangle strip diagram
in the DIAGRAMS section).
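
A minimal OpenGL sketch of a triangle strip (the vertex data is illustrative only;
5 vertices yield 5 - 2 = 3 connected triangles):

// Five illustrative vertices -> 3 connected triangles.
GLfloat v[5][2] = { {0,0}, {1,1}, {2,0}, {3,1}, {4,0} };
glBegin(GL_TRIANGLE_STRIP);
for (int i = 0; i < 5; i++)
    glVertex2fv(v[i]);
glEnd();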


19.What is Quadrilateral mesh?


 Another similar function is the quadrilateral mesh.

It generates a mesh of (n - 1) by (m - 1) quadrilaterals, given the coordinates
for an n by m array of vertices.
For example, a 5 by 4 array of 20 vertices forms a mesh of 12 quadrilaterals
(see the Quadrilateral mesh diagram in the DIAGRAMS section).

20.What is Spline?
 A spline is a flexible strip used to produce a smooth curve through a designated set
of points.
 Several small weights are distributed along the length of the strip to hold it in
position on the drafting table as the curve is drawn.
 The term spline curve originally referred to a curve drawn in this manner.


21.What are the Spline specifications?


Spline Specifications
 There are three equivalent methods for specifying a particular spline representation:
1. We can state the set of boundary conditions that are imposed on the spline; or
2. We can state the matrix that characterizes the spline; or
3. We can state the set of blending functions (or basis functions) that determine
how specified geometric constraints on the curve are combined to calculate
positions along the curve path.

22.Give some examples of scalar quantities.


energy,
temperature,
pressure,
frequency.

23.Give some examples of vector quantities.

velocity,
force,
electric fields,
electric current.

24. Define Pseudo-color method.


 Pseudo-color methods are also used to distinguish different values in a scalar data
set, and color-coding techniques can be combined with graph and chart methods.


 To color code a scalar data set, we choose a range of color and map the range of
data values to the color range.
 For example, blue could be assigned to the lowest scalar value, and red could be
assigned to the highest value.

25.What are the translations available in 3D?


The basic transformations are
1. Translation
2. Scaling
3. Rotation
And two other transformations are
1. Shear
2. Reflection

26.What is 3D Translation?
In a three-dimensional homogeneous coordinate representation, a point is
translated from position P = (x, y, z) to position P' = (x', y', z') with the matrix
operation P' = T(tx, ty, tz) . P, i.e.

    [x']   [1  0  0  tx] [x]
    [y'] = [0  1  0  ty] [y]
    [z']   [0  0  1  tz] [z]
    [1 ]   [0  0  0  1 ] [1]

27.What is 3D Rotation?
 To generate a rotation transformation for an object, we must designate an axis of
rotation (about which the object is to be rotated) and the amount of angular
rotation.

28.What is 3D Shear?
Shearing transformations can be used to modify object shapes.
 They are also useful in three-dimensional viewing for obtaining general projection
transformations.
 In two dimensions, we discussed transformations relative to the x or y axes to
produce
distortions in the shapes of objects.
 In three dimensions, we can also generate shears relative to the z axis.

29.What are Visible Surface Detection Methods?


A major consideration in the generation of realistic graphics displays is identifying
those parts of a scene that are visible from a chosen viewing position.
 Some methods require more memory, some involve more processing time, and
some apply only to special types of objects.
 The various algorithms are referred to as visible-surface detection methods.


 Sometimes these methods are also referred to as hidden-surface elimination


methods.

30.What are the visible-surface detection methods? <write any 4>


1. Back-face detection
2. Depth-buffer method
3. A-buffer method
4. Scan-line method
5. Depth-sorting method
6. BSP-tree method
7. Area-subdivision method
8. Octree methods
9. Ray-casting method
10. Curved surfaces
11. Wireframe methods

31.What is z-buffer method?


A commonly used image-space approach to detecting visible surfaces is the
depth-buffer method.
It compares surface depths at each pixel position on the projection plane.
This procedure is also referred to as the z-buffer method.
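
A minimal sketch of the per-pixel depth test (array sizes and routine names are
illustrative assumptions; here the convention is that a smaller z is closer):

#define XMAX 640               // illustrative raster width
#define YMAX 480               // illustrative raster height
float depthBuff[XMAX][YMAX];   // initialized to the farthest depth
int   frameBuff[XMAX][YMAX];   // initialized to the background color

// Called for each projected point (x, y, z) of each surface.
void testPoint(int x, int y, float z, int color)
{
    if (z < depthBuff[x][y]) {     // smaller z = closer, by assumption
        depthBuff[x][y] = z;       // record the nearer depth
        frameBuff[x][y] = color;   // and its surface color
    }
}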

32.What is A-Buffer method?


 An extension of the ideas in the depth-buffer method is the A-buffer method.
The A-buffer method represents an antialiased, area-averaged, accumulation-buffer method.


PART B

1.What is projection? Explain it with its types.


2.How to represent Polygon surfaces in 3D?
3.How to represent Curved surfaces in 3D?
4.How to visualize data sets?
5. What are the 3D transformations available? Explain any two.
6.What is viewing pipeline and viewing coordinate?
7.What are the types of visible-surface detection methods? Explain any two.


UNIT III
GRAPHICS PROGRAMMING

PART A
1.What is Color Model?

 A color model is a method for explaining the properties or behavior of color within some
particular context.

2.What are the uses of Chromaticity diagram?

 Comparing color gamuts for different sets of primaries.


 Identifying complementary colors.
 Determining dominant wavelength and purity of a given color.
3.Draw the RGB Unit cube.

4.Draw the CMY color model unit cube.


5. What is mean by Subtractive Process?

 In CMY color model, cyan can be formed by adding green and blue light.
 Therefore, when white light is reflected from cyan-colored ink, the reflected light
must have no red component.
 That is, red light is absorbed, or subtracted, by the ink.
6. What are the dots used in Printing Processes?
 The printing process often used with the CMY model generates a color point with a
collection of four ink dots, (like RGB monitor uses a collection of three phosphor dots).

Three of the dots are for the primary colors (cyan, magenta, and yellow),

and one dot is black.

7. Why is the black dot included in the Printing Process?


 A black dot is included because the combination of cyan, magenta, and yellow inks
typically produces dark gray instead of black.

8. How to convert RGB into CMY?


We can express the conversion from an RGB representation to a CMY
representation with the matrix transformation

    [C]   [1]   [R]
    [M] = [1] - [G]
    [Y]   [1]   [B]


 Where the white is represented in the RGB system as the unit column vector.
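
A small sketch of this conversion in code (components assumed normalized to [0.0, 1.0]):

// C = 1 - R, M = 1 - G, Y = 1 - B.
void rgbToCmy(float r, float g, float b, float *c, float *m, float *y)
{
    *c = 1.0f - r;   // cyan absorbs red
    *m = 1.0f - g;   // magenta absorbs green
    *y = 1.0f - b;   // yellow absorbs blue
}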
9. How to convert CMY into RGB?
We convert from a CMY color representation to an RGB representation
with the matrix transformation

    [R]   [1]   [C]
    [G] = [1] - [M]
    [B]   [1]   [Y]

Where black is represented in the CMY system as the unit column vector.

10.Draw the HSV hexcone.

11. What is Computer Animation?


Computer animation generally refers to any time sequence of visual changes in a
scene.
In addition to changing object position with translations or rotations, a
computer-generated animation could display time variations in object size, color,
transparency, or surface texture.

12. What are the steps for designing animation sequences?

 In general, an animation sequence is designed with the following steps:


1. Storyboard layout
2. Object definitions
3. Key-frame specifications
4. Generation of in-between frames

13. What is Storyboard Layout?

 The storyboard is an outline of the action.


 It defines the motion sequence as a set of basic events that are to take place.
 Depending on the type of animation to be produced, the storyboard could consist
of a set of rough sketches or it could be a list of the basic ideas for the motion.

14. Object Definition


 An object definition is given for each participant in the action.
 Objects can be defined in terms of basic shapes, such as polygons or splines.
 In addition, the associated movements for each object are specified along with the
shape.


15. Keyframe
 A keyframe is a detailed drawing of the scene at a certain time in the animation
sequence.
 Within each key frame, each object is positioned according to the time for that
frame.
 Some key frames are chosen at extreme positions in the action.
16. Generation of in-between frames
 In-betweens are the intermediate frames between the key frames.
 The number of in-betweens needed is determined by the media to be used to
display the animation.
 Film requires 24 frames per second, and graphics terminals are refreshed at the rate
of 30 to 60 frames per second.

17.What is Morphing?
 Transformation of object shapes from one form to another is called morphing,
 Which is a shortened form of metamorphosis.
Morphing methods can be applied to any motion or transition involving a change
in shape.
18. What are the matrices in Graphics pipeline of OpenGL?
The three important matrices are
1. Modelview matrix
2. Projection matrix
3. Viewport matrix

19.What is ModelView matrix?

 The model view matrix is a single matrix in the actual pipeline.


 It combines two effects.


1. the sequence of modelling transformations applied to objects, and
2. the transformation that orients and positions the camera in space.
The modelview matrix is in fact the product VM, where
V = viewing matrix and
M = modelling matrix.

20.What is Projection matrix?


It scales and shifts each vertex in a particular way, so that all vertices lying
inside the view volume end up inside a standard cube.
The projection matrix effectively squashes the view volume into the cube centred
at the origin.
The projection matrix also reverses the sense of the z-axis.
21.What is Viewport matrix?

The viewport matrix maps the surviving portion of the block into a 3D viewport.
This matrix maps the standard cube into a block shape whose x and y values
extend across the viewport, and whose z component extends from 0 to 1.

22.List out the model view matrix transformation functions.


glTranslatef ()
glRotatef ()
glScalef ()


23.List out the Projection matrix transformation functions.


glFrustum ()
gluPerspective ()
glOrtho ()
gluOrtho2D ()

24.What are the 2D transformation functions in OpenGL?


glScaled(sx, sy, 1.0);
glTranslated(dx, dy, 0);
glRotated(angle, 0, 0, 1);
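
A minimal sketch combining these genuine OpenGL calls in the usual order
(the numeric values are illustrative only):

glMatrixMode(GL_PROJECTION);          // select the projection matrix
glLoadIdentity();
gluOrtho2D(0.0, 640.0, 0.0, 480.0);   // illustrative window extents

glMatrixMode(GL_MODELVIEW);           // select the modelview matrix
glLoadIdentity();
glTranslated(100.0, 50.0, 0.0);       // position the object,
glRotated(45.0, 0, 0, 1);             // rotate it 45 degrees about z,
glScaled(2.0, 2.0, 1.0);              // and scale it
// ... draw the object here ...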
PART B

1.What is color model? Explain any two.


2.What is animation? Explain design animation sequences.
3.Explain in detail about OpenGL programming.
4.How basic graphics primitives are achieved in OpenGL?
5.How to draw 3D scenes in OpenGL?
6.How to draw 3D objecsts in OpenGL?


UNIT IV
RENDERING

PART A
1.What is Shading Model?
 A shading model dictates how light is scattered or reflected from a surface.
A shading model frequently used in graphics has two types of light sources:
Point light sources
Ambient light

2.In how many ways does incident light interact with a surface?
Incident light interacts with a surface in three different ways:
Some is absorbed by the surface and converted into heat.
Some is reflected from the surface.
Some is transmitted into the interior of the object, as in the case of a piece of
glass.

3.What is Black body?


 If all the incident light is absorbed, the object appears black and is known as black
body.

4.What are the types of reflection of incident light?


 There are two types of reflection of incident light.
Diffuse scattering
Specular reflection


5.What is Diffuse Scattering?


 It occurs when some of the incident light penetrates the surface slightly and is reradiated uniformly in all directions.
 Scattered light interacts strongly with the surface, so its color is usually affected by
the nature of the material out of which the surface is made.

6.What is Lambert's law?


The area subtended is now only the fraction cos(θ), so the brightness of S is
reduced by that same fraction.
This relationship between brightness and surface orientation is often called
Lambert's law.

7.What is diffuse reflection coefficient?


For the intensity of the diffuse component, we can adopt the expression
    Id = Is * pd * (s · m) / (|s| |m|)
where Is is the intensity of the light source, and
pd is the diffuse reflection coefficient.
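
A minimal sketch of this diffuse term (Vec3, dot() and length() are assumed helper
types and functions; the clamp handles faces turned away from the light):

// Diffuse term Id = Is * pd * (s.m) / (|s||m|).
float diffuse(float Is, float pd, Vec3 s, Vec3 m)
{
    float lambert = dot(s, m) / (length(s) * length(m));
    if (lambert < 0.0f) lambert = 0.0f;   // face turned away from the light
    return Is * pd * lambert;
}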
8.What is Specular reflection?
Real objects do not scatter light uniformly in all directions.
So a specular component is added to the shading model.
 Specular reflection causes highlights, which can add significantly to the realism of
a picture when objects are shiny.


9.What are the commands used in OpenGL for shadings?


 Flat shading is established in OpenGL by using the command.
glShadeModel(GL_FLAT);
 Gouraud shading is established in OpenGL with the use of the function.
glShadeModel(GL_SMOOTH);
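
A minimal sketch of selecting the mode before drawing (glShadeModel() is genuine
OpenGL; drawMeshObject() is a hypothetical drawing routine):

glShadeModel(GL_FLAT);      // one color per face
drawMeshObject();           // hypothetical drawing routine
glShadeModel(GL_SMOOTH);    // colors interpolated across each face
drawMeshObject();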

10.What are the types of Shading Models?


There are two main types:
Flat Shading
Smooth Shading
Smooth shading is further divided into
Gouraud Shading
Phong Shading

11.What is lateral inhibition?


Edges between faces actually appear more pronounced than they would be on an
actual physical object, due to a phenomenon in the eye known as lateral
inhibition.

12.What is Gouraud Shading?


 Computationally speaking, Gouraud shading is modestly more expensive than flat
shading.
 Gouraud shading is established in OpenGL with the use of the function.

o glShadeModel(GL_SMOOTH);


13. What is Phong Shading?

 Greater realism can be achieved.


 Particularly with regard to highlights on shiny objects.
 This is done by approximations of the normal vector to the face at each pixel.
 This type of shading is called Phong Shading.
14.What are the drawbacks of Phong shading?

Phong Shading is relatively slow.


 More computation is required per pixel.
 Phong shading can take six to eight times longer than Gouraud Shading.
15.Why OpenGL is not setup to do Phong Shading?

Because OpenGL applies the shading model once per vertex, right after the
modelview transformation.
Normal vector information is not passed to the rendering stage following the
perspective transformation and division.

15.What is Texture?
A texture can be uniform, such as a brick wall, or irregular, such as wood grain or
marble.
 The realism of an image is greatly enhanced by adding surface texture to the
various faces of a mesh object.
16.What are the types of Texture?
 There are numerous sources of textures.
 The most common textures are
Bitmap textures
Procedural texture


17.What is the basic function used in Texture?


 Basic function is
texture (s, t)
 This function produces a color or intensity value for each value of s and t
between 0 and 1.

18.What is Bitmap textures?


Textures are often formed from bitmap representations of images, such as a
digitized photo, clip art, or an image computed previously in some program.

19.What are Texels?


A texture formed from a bitmap consists of an array, say txtr[c][r], of color
values, often called texels.
If the array has C columns and R rows, the indices c and r vary from 0 to C - 1
and 0 to R - 1, respectively.
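
A minimal sketch of a nearest-neighbor texel lookup for the texture(s, t) function
(Color3, txtr, C and R are assumed from the discussion above):

// Nearest-neighbor lookup in the texel array.
Color3 texture(float s, float t)
{
    int c = (int)(s * (C - 1) + 0.5);   // column index
    int r = (int)(t * (R - 1) + 0.5);   // row index
    return txtr[c][r];
}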

20. What is Procedural Texture?


Alternatively, we can define a texture by a mathematical function or procedure;
for example, the following fake-sphere shape.


21.Write an OpenGL program to generate a Procedural Texture.


float fakeshape (float s, float t)
{
    float r = sqrt((s - 0.5) * (s - 0.5) + (t - 0.5) * (t - 0.5));
    if (r < 0.3)
        return 1 - r / 0.3;   // sphere intensity
    else
        return 0.2;           // dark background
}

22.What are the advantages of adding shadows?


 Shadows make an image much more realistic.
 It shows how the objects are positioned with respect to each other.
 Using it we can identify the position of light source.
23. What is Shadow Buffer?
 Different methods for drawing shadow uses a variant of the depth buffer, that
performs the removal of hidden surfaces.
In this method, an auxiliary second depth buffer, called the shadow buffer, is
employed for each light source. This requires a lot of memory.

24.What is meant by Sliding the Camera?

Sliding a camera means to move it along one of its own axes.

That is, in the u, v, or n direction, without rotating it.
Movement along n : forward or backward
Movement along u : left or right
Movement along v : up or down
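
A minimal sketch of a slide() routine for such a camera (eye, u, v, n and
setModelViewMatrix() are assumed members of a Camera class, as discussed in
Part B question 4):

// Slide the camera delU, delV, delN along its own u, v, n axes.
void Camera::slide(float delU, float delV, float delN)
{
    eye.x += delU * u.x + delV * v.x + delN * n.x;
    eye.y += delU * u.y + delV * v.y + delN * n.y;
    eye.z += delU * u.z + delV * v.z + delN * n.z;
    setModelViewMatrix();   // rebuild the view matrix from eye, u, v, n
}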


25. What is meant by Rotating the Camera?

 We want to roll, pitch or yaw the camera.


 Each of these involves a rotation of the camera about one of its own axes.
To roll the camera is to rotate it about its own n-axis.
26.What are the shading methods available?
1. Circulism
2. Blended circulism
3. Dark blacks
4. Loose cross hatching
5. Tight cross hatching
6. Powder shading

PART B

1. What is shading model? Explain it with types.


2. What is texture? How to add texture to faces?
3. Explain about the adding shadows to objects.
4. How to build a camera in your program?
5. What are the methods to create shaded objects?
6. Explain about drawing shadows.


UNIT V
FRACTALS
1.What is Fractal?

A fractal is a rough or fragmented geometric shape that can be split into parts.


 Each of which is a reduced copy of whole.
 Such a property is called Self Similarity.
2.What is Self Similarity?

 Self Similarity is a typical property of fractals.


 A self similar object is exactly or approximately similar to a part of itself.

3.What is Koch Curve?

Very complex curves can be fashioned recursively by repeatedly refining a simple
curve.

The simplest example perhaps is the Koch curve, discovered by the
mathematician Helge von Koch.
This curve stirred great interest in the mathematical world because it produces an
infinitely long line within a region of finite area.
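
A minimal recursive sketch of the Koch refinement (forward() and turn() are assumed
turtle-style helpers; n is the generation number):

// Recursive Koch refinement; forward(len) draws a segment and
// turn(angleDeg) changes the heading.
void drawKoch(double len, int n)
{
    if (n == 0) { forward(len); return; }  // generation 0: plain segment
    drawKoch(len / 3, n - 1);  turn(60);   // turn left 60 degrees
    drawKoch(len / 3, n - 1);  turn(-120); // turn right 120 degrees
    drawKoch(len / 3, n - 1);  turn(60);
    drawKoch(len / 3, n - 1);
}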

4.What is Koch Snowflake?

 It is formed out of three Koch Curves joined together.


The perimeter of the ith generation snowflake Si is three times the length of the
ith generation Koch curve; for a unit initial side this is 3(4/3)^i, which grows
without bound as i increases.

5.What are peano curves?


Peano curves are fractal-like structures that are drawn through a recursive process.


6.What are the types of peano curves?

The two most famous Peano curves are the Hilbert and Sierpinski curves.
Some low-order Hilbert curves are shown in the DIAGRAMS section.

7. How does the S-Copier make the images?

 It contains 3 lenses.
 Each of which reduces the input image to one-half its size.


And moves it to a new position.
These three reduced and shifted images are superposed to form the output.
Scaling and shifting are easily done by affine transformations.
8.What are Mandelbrot sets?

 The Mandelbrot set is a mathematical set of points, whose boundary generates a


distinctive and easily recognisable two dimensional fractal shape.

 The set is closely related to the Julia Set.


 It generates similarly complex shapes.
 This is named after the mathematician Benoit Mandelbrot.
9.What is Iteration Theory?

 Julia and Mandelbrot sets arises from a branch of analysis known as iteration
theory (or dynamical systems theory)

 This theory asks what happens when one iterates a function endlessly.
10.What are Random Fractals?

 Fractal shapes are completely deterministic.


 They are completely predictable (even though they are very complicated)
In graphics, the term fractal has become widely associated with randomly
generated curves and surfaces that exhibit a degree of self-similarity.
 These curves are used to produce Naturalistic shapes for representing objects
such as ragged mountains, grass and fire.

11.What are the stages of Fractalization?

 There are three stages of fractalization.


First Stage:
 The midpoint of AB is perturbed to form point C.
Second Stage:
 Each of the two segments has its midpoint perturbed to form points D and E.
Third Stage:
At the final stage, new points F, ..., I are added.

12. How to Calculate fractalization in a program?

Line L passes through the midpoint M of segment S and is perpendicular to it.
A point C along L has the parametric form
    C(t) = M + (B - A)⊥ t,   where M = (A + B)/2
and (B - A)⊥ denotes a vector perpendicular to B - A.
For most fractal curves, t is modelled as a Gaussian random variable with zero
mean and some standard deviation.

13. Write the program for drawing a Fractal Curve.


double MinLenSq, factor; //global variables
void drawfractal(Point2 A, Point2 B)
{
double beta, stdDev;
factor = pow(2.0,(1.0-beta).2.0);
cvs.moveTo(A);
fract(A, B, stdDev);
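
A minimal sketch of the fract() routine called above, assuming the perturbed-midpoint
scheme of question 12, a gauss() zero-mean random source, and a cvs.lineTo() drawing
routine:

// Uses the globals MinLenSq and factor set in drawfractal().
void fract(Point2 A, Point2 B, double stdDev)
{
    double dx = B.x - A.x, dy = B.y - A.y;
    if (dx * dx + dy * dy < MinLenSq) { cvs.lineTo(B); return; }
    stdDev *= factor;               // scale the deviation for this level
    double t = gauss() * stdDev;    // random perpendicular offset
    Point2 C;
    C.x = (A.x + B.x) / 2 - dy * t; // perturbed midpoint, offset
    C.y = (A.y + B.y) / 2 + dx * t; // along the perpendicular
    fract(A, C, stdDev);
    fract(C, B, stdDev);
}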


14.What is Ray Tracing?

 Ray Tracing is a technique for generating an image by tracing the path of light
through pixels in an image plane.

 And simulating the effect of its encounters with virtual objects.


This technique is used to produce a very high degree of visual realism, usually
higher than that of scanline rendering, but at a greater computational cost.

15.What are the features of Ray Tracing?

 Some of the interesting visual effects are easily incorporated


Shadowing
Reflection
Refraction

16.How to compute hit point and color in Ray Tracing?


Computing the hit point:

After all objects have been tested, the object with the smallest hit time is the
closest, and the location of the hit point on that object is found.

Computing the color:

The color of the light that the hit object sends back in the direction of the eye
is computed and stored in the pixel.
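
A minimal sketch of the overall ray-tracing loop (Ray, HitInfo, makeRay(), closestHit(),
shade() and setpixel() are assumed helpers, not a fixed API; nRows/nCols give the
raster size):

// Trace one ray per pixel, shade the closest hit.
for (int r = 0; r < nRows; r++)
    for (int c = 0; c < nCols; c++)
    {
        Ray ray = makeRay(eye, r, c);   // ray from eye through pixel (c, r)
        HitInfo hit = closestHit(ray);  // object with the smallest hit time
        setpixel(c, r, shade(hit));     // color seen along this ray
    }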

17.What is object list in Ray Tracing method?


 Descriptions of all the objects are stored in an object list.


This is a linked list of descriptive records (see the Object list diagram in the
DIAGRAMS section).
In that figure, the ray shown intersects a sphere, a cylinder, and two cones; all
the other objects are missed.
18.What is Solid Texture?

 Solid texture is sometimes called 3D texture.


 The object is considered to be carved out of a block of solid material that itself has
texturing.

 The ray tracer reveals the colour of the texture at each point on the surface of the
object.

19.What are Compound Objects? Or


What are Boolean Objects?
What are CSG Objects?

 Complex shapes are defined by set operations (also called Boolean operations) on
simpler shapes.

Objects such as lenses and hollow fishbowls are easily formed by combining the
generic shapes.

 Such objects are variously called compound objects (or) Boolean objects (or) CSG
objects.


20.What are the three Boolean operators?


Union
Intersection
Difference
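
A minimal sketch of point-membership tests for these three operators (insideA() and
insideB() are assumed shape tests for the two operands):

// Point-membership classification for compound objects.
bool insideUnion(Point3 p)        { return insideA(p) || insideB(p); }
bool insideIntersection(Point3 p) { return insideA(p) && insideB(p); }
bool insideDifference(Point3 p)   { return insideA(p) && !insideB(p); }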

PART B

1. How to create an image by iterated functions?


2. What are Mandelbrot sets and Julia sets?
3. Explain about Random fractals.
4. Explain in detail about ray tracing.
5. Explain about reflections and transparency.
6. How are the Boolean operations applied to objects?


ROAD MAP
UNIT I
Output primitives
Definition
Simple geometric components
Additional output primitives

Line
Stair step effect (jaggies)

Line-drawing algorithms
Slope-intercept equation
DDA algorithm
Bresenham algorithm
derivations
algorithm
problem
Circle algorithm
General form
Polar form
Midpoint circle algorithm
Theory & derivation
Algorithm
Problem
Ellipse algorithm
General form
Polar form


Midpoint ellipse algorithm


Theory & derivation
Algorithm
Problem

Attributes of output primitives


Attribute parameter
Line attributes
Type,

o Solid lines,
o Dashed lines,
o And dotted lines
Width

color.

Pen and brush options


Diagram
Line color
Curve attributes
Color and grayscale levels
Direct storage scheme
Table (8 color codes)
Grayscale
Table (4-level grayscale)
Area-fill attributes
Fill styles
Character attributes
Font


Size
Color
Diagram

2D transformation
Translation,
Diagram
Equation
Matrix format
Rotation
Diagram
Equation
Matrix format
Scaling
Diagram
Equation
Differential scaling.
Matrix format
Reflection
Diagrams
Definition
Matrix format

Shear
Diagram
Equation


Matrix format
Transformation functions
translate (translateVector, matrixTranslate)
rotate (theta, matrixRotate)
scale (scaleVector, matrixScale)
composeMatrix (matrix2, matrix1, matrixOut)

UNIT II

3D Concepts
Depth cueing

Projections
Parallel projection
Diagram
Orthographic parallel projection
oblique parallel projection
diagrams
Perspective projection
Diagrams
Equations

3D Representation
Boundary representations
Space-partitioning representation
Polygon surfaces


Polygon tables
Geometric tables
 Attribute tables
1. A vertex table,
2. An edge table, and
3. A polygon table.
Plane equations
Polygon meshes
Triangle strip
Quadrilateral mesh
Problem
Solution

Curved lines and surfaces


Quadric surfaces
Sphere
Ellipsoid
Spline
 Spline specifications
Visualization of data sets
Visual representations for scalar fields
diagrams
Pseudo-color methods
Visual representations for vector fields
Diagrams
Visual representations for tensor fields


Diagrams

3D Transformation
Translation
Diagram
Equation
Matrix form

Rotation
Diagram
Coordinate-axes rotations
Equation
Matrix form

Scaling
Diagram
Equation
Matrix form

Shear
Diagram
Matrix form

Reflection
Diagram
Matrix


Viewing pipeline
Diagram
Viewing coordinates
Diagram
Matrix

Visible surface detection


[ totally 11 methods ]
1. Back-face detection
Equations
Diagrams
2. Depth-buffer method
Algorithm
3. A-buffer method

UNIT III
Color models
Chromaticity diagram
Colors representation
Diagram
Uses of chromaticity diagram


RGB color model


Equation
Unit cube diagram
Explanation
Additive model

YIQ color model


Explanation
NTSC signals
RGB into YIQ
YIQ into RGB

CMY color model


Video monitors vs printers, plotters:
Subtractive process
Unit cube diagram
Explanation
Printing process
Conversion of RGB into CMY
Conversion of CMY into RGB
HSV color model
Explanation
Diagrams


Animation
Design of animation sequences
1. Storyboard layout
2. Object definitions
3. Key-frame specifications
4. Generation of in-between frames

Raster animations
Explanation
Diagrams

Key-frame systems
Morphing
Diagrams

OPENGL
Advantages:
Features:
OpenGL operation
Diagram
Glut
Sample program
Glut functions
Basic graphics primitives
Sample code
Format of OpenGL commands
OpenGL data types


Sample code
Other graphics primitives in OpenGL
Example

Drawing 3d scenes with OpenGL


Viewing process & graphics pipeline
Diagrams
Important matrices
i. Model view matrix
ii. Projection matrix
Diagram
iii. Viewport matrix
Diagram

Drawing three dimensional objects


3d viewing pipeline
Diagram
OpenGL functions for setting up transformations
3d viewing model view matrix
Sample code


UNIT IV
Introduction to Shading Model
Light sources
Black body
types of reflection
Diffuse scattering
 Computing the diffuse component
 Diagram
Lambert's Law
 diffuse reflection coefficient
Specular reflection

Flat Shading
OpenGL function
Diagram
lateral inhibition

Smooth Shading
Types
1. Gouraud Shading
i. OpenGL function
ii. Diagrams
2. Phong Shading
i. Diagrams
ii. Drawback


Adding texture to faces


Diagram
Functions
Types
Bitmap textures
Texels
Procedural texture
Fakeshape() function

Pasting The Texture On To A Flat Surface


Sample code
Diagram
Mapping a square to rectangle:
Diagrams

Adding Shadows Of Objects


Diagrams
advantages
Shadow Buffer

Building a Camera in a Program


Camera functions
Camera class
setModelViewMatrix() function
Sliding the Camera
Rotating the Camera


Creating Shaded Objects


Diagrams
Shading Methods
1. Circulism
2. Blended circulism
3. Dark Blacks
4. Loose Cross Hatching
5. Tight Cross Hatching
6. Powder Shading

Rendering The Texture


Diagram
Explanation

Drawing Shadows
Diagram
Explanation
Steps
Step 1: Activate and position the shadows
Step 2: Draw the Shadows Only
Step 3: Draw the Shadows from Above
Step 4: Soften the Shadows
Step 5: Create a new Shadow Material
Step 6: Apply the material to a ground polygon
Shadow Mapping:

Advantages:
Disadvantages


UNIT V

Fractals & self similarity


Fractal:
Self similarity:
Self-similar curves:
Exactly self similar:
Statistically self similar:
Mandelbrot:
Koch curve:

o Two generations of the Koch curve


o Koch snowflake

Peano curves (or) space-filling curves


Diagram
Example

Creating an images iterated functions


Experimental copier
Sierpinski copier

o Diagrams

Mandelbrot sets
Iteration theory
Mandelbrot sets and iterated function systems
Diagram


Julia sets
Diagrams
Drawing filled-in Julia sets

Random fractals
Fractalizing a segment

o Diagram
Stages of fractalization

o First stage:
 Diagram

o Second stage:
 Diagram

o Third stage:
 Diagram
Calculation of fractalization in a program
Diagram
Equation
Fract() function
Drawing a fractal curve
Drawfractal() function

Ray tracing
Introduction
Diagram
Reverse process
Features of ray tracing:


Overview of the ray-tracing process


Sample code
Computing hit point:
Computing color:

o Diagram
Object list:

o Diagram

Adding surface texture


Solid texture
Example:
Diagram

Reflections & transparency


Diagrams
Equation
Local component
The refraction of light:

o Diagram
o Equation

Boolean operations on objects


CSG objects.
Boolean operators
Union
Intersection
Difference


Diagram
Equations
Union of four primitives
Diagram
Equations


DIAGRAMS

UNIT I
Stair step Effect (jaggies)

Line

y= m.x + b

Circle


Ellipse

Line Types


Pen Brush Options

Fill Styles

Hatch Fill


Character attribute

UNIT II
Parallel Projection


Perspective Projection


Polygon Table


Triangle strip

Quadrilateral mesh

Sphere


ELLIPSOID

SPLINE

Translation


Rotation

Scaling


This sequence of transformations is demonstrated in the following figure.

Reflection


Shear

Viewing Pipeline

Viewing Coordinates


Transformation from World to Viewing Coordinates


UNIT III

Chromaticity diagram

RGB color Model


CMY COLOR MODEL

HSV COLOR MODEL


Morphing


OpenGL operation:


Other Graphics Primitives in OPENGL

DRAWING 3D SCENES WITH OPENGL


Projection matrix

Viewport matrix


DRAWING THREE DIMENSIONAL OBJECTS

3D Viewing Model View Matrix


UNIT IV

Flat Shading

Gouraud Shading


Phong Shading

ADDING TEXTURE TO FACES:


PROCEDURAL TEXTURE

PASTING THE TEXTURE ON TO A FLAT SURFACE:

MAPPING A SQUARE TO RECTANGLE


ADDING SHADOWS OF OBJECTS


Rotating the Camera

CREATING SHADED OBJECTS


RENDERING THE TEXTURE


DRAWING SHADOWS


UNIT V

TWO GENERATIONS OF THE KOCH CURVE

THE FIRST FEW GENERATIONS OF THE KOCH SNOWFLAKE

KOCH SNOWFLAKE, S3, S4, AND S5


PEANO CURVES (OR) SPACE-FILLING CURVES


Low order Hilbert curve

Experimental Copier


Sierpinski Copier

S-Copier Iterated output


MANDELBROT SETS

IFS System


JULIA SETS

Random Fractal

First Stage


Second Stage

Third Stage

Ray Tracing


Compute color in Ray Tracing

Object list

INTERSECTION OF A RAY WITH AN OBJECT


ADDING SURFACE TEXTURE

Reflection and Transparency


Contributions of light at each part and its tree structure

The refraction of light:


Compound Objects built from sphere

Union of 4 primitives

THE END
