
UNIT- I

2D PRIMITIVES

Line and Curve Drawing Algorithms

Line Drawing

y = m·x + b

m = (yend − y0) / (xend − x0)
b = y0 − m·x0

DDA Algorithm
If |m| ≤ 1:  xk+1 = xk + 1,  yk+1 = yk + m
If |m| > 1:  yk+1 = yk + 1,  xk+1 = xk + 1/m

DDA Algorithm
#include <stdlib.h>
#include <math.h>

inline int round (const float a) { return int (a + 0.5); }

void lineDDA (int x0, int y0, int xEnd, int yEnd)
{
    int dx = xEnd - x0, dy = yEnd - y0, steps, k;
    float xIncrement, yIncrement, x = x0, y = y0;

    if (fabs (dx) > fabs (dy))
        steps = fabs (dx);   /* |m| < 1  */
    else
        steps = fabs (dy);   /* |m| >= 1 */
    xIncrement = float (dx) / float (steps);
    yIncrement = float (dy) / float (steps);

    setPixel (round (x), round (y));
    for (k = 0; k < steps; k++) {
        x += xIncrement;
        y += yIncrement;
        setPixel (round (x), round (y));
    }
}

Bresenham's Line Algorithm

At each step from xk to xk+1, the pixel y coordinate is chosen between yk and yk+1 by comparing the vertical distances d_lower (to the line above yk) and d_upper (to yk+1).

Bresenham's Line Algorithm


#include <stdlib.h>
#include <math.h>

/* Bresenham line-drawing procedure for |m| < 1.0 */
void lineBres (int x0, int y0, int xEnd, int yEnd)
{
    int dx = fabs (xEnd - x0), dy = fabs (yEnd - y0);
    int p = 2 * dy - dx;
    int twoDy = 2 * dy, twoDyMinusDx = 2 * (dy - dx);
    int x, y;

    /* Determine which endpoint to use as start position. */
    if (x0 > xEnd) {
        x = xEnd;
        y = yEnd;
        xEnd = x0;
    }
    else {
        x = x0;
        y = y0;
    }
    setPixel (x, y);

    while (x < xEnd) {
        x++;
        if (p < 0)
            p += twoDy;
        else {
            y++;
            p += twoDyMinusDx;
        }
        setPixel (x, y);
    }
}

Circle Drawing
For a circle with centre (xc, yc) and radius r:

Pythagorean Theorem:  x² + y² = r²  (centre at origin)
(x − xc)² + (y − yc)² = r²

Stepping x from xc − r to xc + r:
y = yc ± √(r² − (x − xc)²)

Circle Drawing

Stepping uniformly in x gives unevenly spaced points where the circle is steep; there the roles of x and y must be swapped.

Circle Drawing using polar coordinates


For a circle with centre (xc, yc) and radius r, a point (x, y) on the circle is

x = xc + r·cosθ
y = yc + r·sinθ

where θ changes with step size 1/r.


Symmetry: if (x, y) lies on a circle centred at (xc, yc), so do the reflected points (y, x), (−x, y), (y, −x), and so on. Compute points only for 0° ≤ θ ≤ 45° and use symmetry for θ > 45°.
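A minimal sketch of the polar-coordinate approach, assuming the setPixel(x, y) routine used elsewhere in these notes; it steps θ by 1/r over the first octant and plots the eight symmetric points.

#include <math.h>

void circlePolar (int xc, int yc, int r)
{
    const double PI = 3.14159265358979;
    double dTheta = 1.0 / (double) r;          /* about one point per pixel of arc */
    for (double theta = 0.0; theta <= PI / 4.0; theta += dTheta) {
        int x = (int) (r * cos (theta) + 0.5);
        int y = (int) (r * sin (theta) + 0.5);
        /* Plot the point in all eight octants using symmetry. */
        setPixel (xc + x, yc + y);  setPixel (xc - x, yc + y);
        setPixel (xc + x, yc - y);  setPixel (xc - x, yc - y);
        setPixel (xc + y, yc + x);  setPixel (xc - y, yc + x);
        setPixel (xc + y, yc - x);  setPixel (xc - y, yc - x);
    }
}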

Midpoint Circle Algorithm


f(x,y) = x2 + y2 - r2
f(x, y) < 0  if (x, y) is inside the circle
f(x, y) = 0  if (x, y) is on the circle
f(x, y) > 0  if (x, y) is outside the circle

At each step the midpoint between the candidate pixels (xk + 1, yk) and (xk + 1, yk − 1) is tested with f to decide which pixel to plot.

use symmetry if x>y

Midpoint Circle Algorithm


#include <GL/glut.h>

class scrPt {
public:
    GLint x, y;
};

void setPixel (GLint x, GLint y)
{
    glBegin (GL_POINTS);
        glVertex2i (x, y);
    glEnd ( );
}

void circleMidpoint (scrPt circCtr, GLint radius)
{
    scrPt circPt;
    GLint p = 1 - radius;

    circPt.x = 0;
    circPt.y = radius;

    void circlePlotPoints (scrPt, scrPt);

    /* Plot the initial point in each circle quadrant. */
    circlePlotPoints (circCtr, circPt);

    /* Calculate next points and plot in each octant. */
    while (circPt.x < circPt.y) {
        circPt.x++;
        if (p < 0)
            p += 2 * circPt.x + 1;
        else {
            circPt.y--;
            p += 2 * (circPt.x - circPt.y) + 1;
        }
        circlePlotPoints (circCtr, circPt);
    }
}

void circlePlotPoints (scrPt circCtr, scrPt circPt)
{
    setPixel (circCtr.x + circPt.x, circCtr.y + circPt.y);
    setPixel (circCtr.x - circPt.x, circCtr.y + circPt.y);
    setPixel (circCtr.x + circPt.x, circCtr.y - circPt.y);
    setPixel (circCtr.x - circPt.x, circCtr.y - circPt.y);
    setPixel (circCtr.x + circPt.y, circCtr.y + circPt.x);
    setPixel (circCtr.x - circPt.y, circCtr.y + circPt.x);
    setPixel (circCtr.x + circPt.y, circCtr.y - circPt.x);
    setPixel (circCtr.x - circPt.y, circCtr.y - circPt.x);
}

OpenGL
#include <GL/glut.h>      // (or others, depending on the system in use)

void init (void)
{
    glClearColor (1.0, 1.0, 1.0, 0.0);     // Set display-window color to white.
    glMatrixMode (GL_PROJECTION);           // Set projection parameters.
    gluOrtho2D (0.0, 200.0, 0.0, 150.0);
}

void lineSegment (void)
{
    glClear (GL_COLOR_BUFFER_BIT);          // Clear display window.
    glColor3f (0.0, 0.0, 1.0);              // Set line segment color to blue.
    glBegin (GL_LINES);                     // Specify line-segment geometry.
        glVertex2i (180, 15);
        glVertex2i (10, 145);
    glEnd ( );
    glFlush ( );                            // Process all OpenGL routines as quickly as possible.
}

void main (int argc, char** argv)
{
    glutInit (&argc, argv);                           // Initialize GLUT.
    glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);     // Set display mode.
    glutInitWindowPosition (50, 100);                 // Set top-left display-window position.
    glutInitWindowSize (400, 300);                    // Set display-window width and height.
    glutCreateWindow ("An Example OpenGL Program");   // Create display window.

    init ( );                                         // Execute initialization procedure.
    glutDisplayFunc (lineSegment);                    // Send graphics to display window.
    glutMainLoop ( );                                 // Display everything and wait.
}

OpenGL
Point Functions

glVertex*( );
* : the number of coordinates (2, 3, or 4) followed by a data-type suffix: i (integer), s (short), f (float), d (double)

Ex:

glBegin(GL_POINTS); glVertex2i(50, 100); glEnd();

int p1[ ]={50, 100}; glBegin(GL_POINTS); glVertex2iv(p1); glEnd();

OpenGL
Line Functions

GL_LINES GL_LINE_STRIP GL_LINE_LOOP

Ex: glBegin(GL_LINES); glVertex2iv(p1); glVertex2iv(p2); glEnd();

OpenGL
glBegin(GL_LINES); glVertex2iv(p1); glVertex2iv(p2); glVertex2iv(p3); glVertex2iv(p4); glVertex2iv(p5); glEnd(); GL_LINES
p3 p1 p5

GL_LINE_STRIP
p3 p1

p2

p4

p2

p4

GL_LINE_LOOP
p3 p5 p1

p2

p4

Antialiasing
Supersampling
Count the number of subpixels that overlap the line path. Set the intensity proportional to this count.

Antialiasing
Area Sampling
Line is treated as a rectangle. Calculate the overlap areas for pixels. Set intensity proportional to the overlap areas.

(For example, a pixel 80% covered by the line rectangle gets 80% of the line intensity; one 25% covered gets 25%.)

Antialiasing
Pixel Sampling Micropositioning Electron beam is shifted 1/2, 1/4, 3/4 of a pixel diameter.

Line Intensity differences


Change the line drawing algorithm:


For horizontal and vertical lines use the lowest intensity; for 45° lines use the highest intensity.

2D Transformations with Matrices

Matrices

    [ a1,1  a1,2  a1,3 ]
A = [ a2,1  a2,2  a2,3 ]
    [ a3,1  a3,2  a3,3 ]

A matrix is a rectangular array of numbers. A general matrix will be represented by an upper-case italicised letter. The element on the ith row and jth column is denoted by ai,j. Note that we start indexing at 1, whereas C indexes arrays from 0.

Matrices Addition

Given two matrices A and B, if we want to add B to A (that is, form A + B), then if A is (n×m), B must also be (n×m); otherwise A + B is not defined. The addition produces a result C = A + B with elements

C(i,j) = A(i,j) + B(i,j)

[ 1 2 ]   [ 5 6 ]   [ 1+5  2+6 ]   [ 6  8  ]
[ 3 4 ] + [ 7 8 ] = [ 3+7  4+8 ] = [ 10 12 ]

Matrices Multiplication

Given two matrices A and B, if we want to multiply B by A (that is, form AB), then if A is (n×m), B must be (m×p), i.e. the number of columns in A must equal the number of rows in B; otherwise AB is not defined. The multiplication produces a result C = AB with elements

C(i,j) = Σ (k = 1 to m) a(i,k)·b(k,j)

(Basically we multiply the first row of A with the first column of B and put this in the c1,1 element of C, and so on.)

Matrices Multiplication (Examples)

Example:

[ 2 6 7 ]   [ 6 8 ]   [ 44 76 ]
[ 4 5 8 ] × [ 3 3 ] = [ 55 95 ]
[ 9 2 3 ]   [ 2 6 ]   [ 66 96 ]

e.g. c1,1 = 2·6 + 6·3 + 7·2 = 44

[ 6 8 ]   [ 3 3 ]
[ 2 6 ] × [ 4 5 ]   is undefined: a 2×2 matrix cannot multiply a 3×2 matrix (2 ≠ 3).
          [ 2 6 ]

2×2 × 2×4 × 4×4 is allowed; the result is a 2×4 matrix.
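A minimal C sketch of the rule c(i,j) = Σ a(i,k)·b(k,j); the fixed 3×3 and 3×2 sizes simply match the worked example above and are an illustrative assumption.

/* Multiply A (3x3) by B (3x2) into C (3x2): c[i][j] = sum over k of a[i][k]*b[k][j]. */
void matMultiply (const float a[3][3], const float b[3][2], float c[3][2])
{
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 2; j++) {
            c[i][j] = 0.0f;
            for (int k = 0; k < 3; k++)
                c[i][j] += a[i][k] * b[k][j];
        }
}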

Matrices -- Basics

Unlike scalar multiplication, AB ≠ BA. Matrix multiplication distributes over addition: A(B + C) = AB + AC. The identity matrix for multiplication is denoted I. The transpose of a matrix A, denoted A^T (or A'), is obtained by swapping the rows and columns of A:

    [ a1,1  a1,2  a1,3 ]          [ a1,1  a2,1 ]
A = [ a2,1  a2,2  a2,3 ]    A^T = [ a1,2  a2,2 ]
                                  [ a1,3  a2,3 ]

2D Geometrical Transformations

Translate

Shear

Rotate

Scale

Translate Points

Recall: we can translate points in the (x, y) plane to new positions by adding translation amounts to their coordinates. Each point P(x, y) is moved by dx units parallel to the x axis and by dy units parallel to the y axis to the new point P'(x', y'):

x' = x + dx
y' = y + dy

In matrix form:

[ x' ]   [ x ]   [ dx ]
[ y' ] = [ y ] + [ dy ]

If we define the translation matrix T = [ dx  dy ]^T, then P' = P + T.

Scale Points

Points can be scaled (stretched) by sx along the x axis and by sy along the y axis into new points by the multiplications:

x' = sx·x
y' = sy·y

We specify how much bigger or smaller by means of a scale factor: to double the size of an object we use a scale factor of 2, to halve the size of an object we use a scale factor of 0.5.

In matrix form:

[ x' ]   [ sx  0  ] [ x ]
[ y' ] = [ 0   sy ] [ y ]

If we define S = [ sx 0; 0 sy ], then P' = SP.

Rotate Points (cont.)

Points can be rotated through an angle θ about the origin:

|OP'| = |OP| = l

x' = |OP'| cos(α + θ) = l cos(α + θ) = l cosα cosθ − l sinα sinθ = x cosθ − y sinθ
y' = |OP'| sin(α + θ) = l sin(α + θ) = l cosα sinθ + l sinα cosθ = x sinθ + y cosθ

In matrix form:

[ x' ]   [ cosθ  −sinθ ] [ x ]
[ y' ] = [ sinθ   cosθ ] [ y ]

i.e. P' = RP

Review
  

Translate:  P' = P + T
Scale:      P' = SP
Rotate:     P' = RP

Spot the odd one out: translation is an addition, while scaling and rotation are multiplications.

Multiplying versus adding matrices: ideally, all transformations would have the same form (easier to code).

Solution: Homogeneous Coordinates

Homogeneous Coordinates

For given 2D coordinates (x, y), we introduce a third dimension: [x, y, 1]. In general, a homogeneous coordinate for a 2D point has the form [x, y, W]. Two homogeneous coordinates [x, y, W] and [x', y', W'] are said to be the same (or equivalent) if

x' = kx,  y' = ky,  W' = kW   for some k ≠ 0

e.g. [2, 3, 6] and [4, 6, 12] are equivalent with k = 2.

Therefore any [x, y, W] can be normalised by dividing each element by W: [x/W, y/W, 1].

Homogeneous Transformations

Now, redefine the translation using homogeneous coordinates:

[ x' ]   [ 1 0 dx ] [ x ]            x' = x + dx
[ y' ] = [ 0 1 dy ] [ y ]    i.e.    y' = y + dy
[ 1  ]   [ 0 0 1  ] [ 1 ]

P' = T · P

Similarly, we have:

Scaling:
[ x' ]   [ sx 0  0 ] [ x ]
[ y' ] = [ 0  sy 0 ] [ y ]          P' = S · P
[ 1  ]   [ 0  0  1 ] [ 1 ]

Rotation:
[ x' ]   [ cosθ  −sinθ  0 ] [ x ]
[ y' ] = [ sinθ   cosθ  0 ] [ y ]   P' = R · P
[ 1  ]   [ 0      0     1 ] [ 1 ]

Composition of 2D Transformations

1. Additivity of successive translations


We want to translate a point P to P' by T(dx1, dy1) and then to P'' by another T(dx2, dy2):

P'' = T(dx2, dy2)[T(dx1, dy1) P]

On the other hand, we can define T21 = T(dx2, dy2) T(dx1, dy1) first, then apply T21 to P:

P'' = T21 P

where

T21 = T(dx2, dy2) T(dx1, dy1)
    = [ 1 0 dx2 ] [ 1 0 dx1 ]   [ 1 0 dx1+dx2 ]
      [ 0 1 dy2 ] [ 0 1 dy1 ] = [ 0 1 dy1+dy2 ]
      [ 0 0 1   ] [ 0 0 1   ]   [ 0 0 1       ]

Examples of Composite 2D Transformations

The point (2, 1) is translated by T(-1, 2) to (1, 3), then by T(1, -1) to (2, 2). The composite is:

T21 = T(1,-1) T(-1,2) = [ 1 0 1  ] [ 1 0 -1 ]   [ 1 0 0 ]
                        [ 0 1 -1 ] [ 0 1 2  ] = [ 0 1 1 ]
                        [ 0 0 1  ] [ 0 0 1  ]   [ 0 0 1 ]

Composition of 2D Transformations (cont.)

2. Multiplicativity of successive scalings


P'' = S(sx2, sy2)[S(sx1, sy1) P] = [S(sx2, sy2) S(sx1, sy1)] P = S21 P

where

S21 = S(sx2, sy2) S(sx1, sy1)
    = [ sx2 0   0 ] [ sx1 0   0 ]   [ sx2·sx1  0        0 ]
      [ 0   sy2 0 ] [ 0   sy1 0 ] = [ 0        sy2·sy1  0 ]
      [ 0   0   1 ] [ 0   0   1 ]   [ 0        0        1 ]

Composition of 2D Transformations (cont.)

3. Additivity of successive rotations


P'' = R(θ2)[R(θ1) P] = [R(θ2) R(θ1)] P = R21 P

where

R21 = R(θ2) R(θ1)
    = [ cosθ2 −sinθ2 0 ] [ cosθ1 −sinθ1 0 ]   [ cos(θ1+θ2) −sin(θ1+θ2) 0 ]
      [ sinθ2  cosθ2 0 ] [ sinθ1  cosθ1 0 ] = [ sin(θ1+θ2)  cos(θ1+θ2) 0 ]
      [ 0      0     1 ] [ 0      0     1 ]   [ 0            0          1 ]

Composition of 2D Transformations (cont.)

4. Different types of elementary transformations discussed above can be concatenated as well:

P' = R(θ)[T(dx, dy) P] = [R(θ) T(dx, dy)] P = MP

where M = R(θ) T(dx, dy)

Consider the following two questions:
1) Translate a line segment P1P2, with P1(1, 2) and P2(3, 3), by -1 units in the x direction and -2 units in the y direction.
2) Rotate a line segment P1P2 by θ degrees counter-clockwise about P1.

Other Than Point Transformations

Translate Lines:

translate both endpoints, then join them.

Scale or Rotate Lines: more complex. For example, to rotate an arbitrary line about a point P1, three steps are needed (see the sketch below):
1) Translate such that P1 is at the origin;
2) Rotate;
3) Translate such that the point at the origin returns to P1.
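A minimal sketch of the translate-rotate-translate composition in homogeneous coordinates; the 3×3 matrix helpers below are hypothetical names, not from any library.

#include <math.h>

/* Build M = T(px,py) * R(theta) * T(-px,-py), the rotation about the point (px, py). */
void rotateAboutPoint (double theta, double px, double py, double m[3][3])
{
    double c = cos (theta), s = sin (theta);
    /* Expanding the product gives the rotation part plus an adjusted translation column. */
    m[0][0] = c;  m[0][1] = -s;  m[0][2] = px - c * px + s * py;
    m[1][0] = s;  m[1][1] =  c;  m[1][2] = py - s * px - c * py;
    m[2][0] = 0;  m[2][1] =  0;  m[2][2] = 1;
}

/* Apply the homogeneous matrix to a point (x, y, 1). */
void transformPoint (const double m[3][3], double *x, double *y)
{
    double nx = m[0][0] * *x + m[0][1] * *y + m[0][2];
    double ny = m[1][0] * *x + m[1][1] * *y + m[1][2];
    *x = nx;
    *y = ny;
}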

Another Example.

Translate

Translate

Scale

Rotate

Order Matters!
As we said, the order for composition of 2D geometrical transformations matters, because, in general, matrix multiplication is not commutative. However, it is easy to show that, in the following four cases, commutativity holds: 1). Translation + Translation 2). Scaling + Scaling 3). Rotation + Rotation 4). Scaling (with sx = sy) + Rotation just to verify case 4:

M1 = S(sx, sy) R(θ)
   = [ sx 0  0 ] [ cosθ −sinθ 0 ]   [ sx·cosθ  −sx·sinθ  0 ]
     [ 0  sy 0 ] [ sinθ  cosθ 0 ] = [ sy·sinθ   sy·cosθ  0 ]
     [ 0  0  1 ] [ 0     0    1 ]   [ 0         0        1 ]

M2 = R(θ) S(sx, sy)
   = [ cosθ −sinθ 0 ] [ sx 0  0 ]   [ sx·cosθ  −sy·sinθ  0 ]
     [ sinθ  cosθ 0 ] [ 0  sy 0 ] = [ sx·sinθ   sy·cosθ  0 ]
     [ 0     0    1 ] [ 0  0  1 ]   [ 0         0        1 ]

If sx = sy, then M1 = M2.

Rigid-Body vs. Affine Transformations

A transformation matrix of the form

[ r11 r12 tx ]
[ r21 r22 ty ]
[ 0   0   1  ]

where the upper 2×2 sub-matrix is orthogonal, preserves angles and lengths. Such transforms are called rigid-body transformations, because the body or object being transformed is not distorted in any way. An arbitrary sequence of rotation and translation matrices creates a matrix of this form. The product of an arbitrary sequence of rotation, translation, and scale matrices produces an affine transformation, which has the property of preserving parallelism of lines, but not lengths and angles.

Rigid-Body vs. Affine Transformations (cont.)

Rigid- body Transformation

Affine Transformation

Example: a unit cube under a 45° rotation (rigid-body) versus a scale in x but not in y (affine).

Shear transformation is also affine.

Shear in the x direction

Shear in the y direction

      [ 1 a 0 ]           [ 1 0 0 ]
SHx = [ 0 1 0 ]     SHy = [ b 1 0 ]
      [ 0 0 1 ]           [ 0 0 1 ]

2D Output Primitives
        

Points Lines Circles Ellipses Other curves Filling areas Text Patterns Polymarkers

Filling area
Polygons are considered!
1) Scan-Line Filling (between edges)
2) Interactive Filling (using an interior starting point)

1) Scan-Line Filling (scan conversion)


Problem: Given the vertices or edges of a polygon, which are the pixels to be included in the area filling?

Scan-Line filling, contd


Main idea:  locate the intersections between the scan-lines and the edges of the polygon  sort the intersection points on each scan-line on increasing x-coordinates  generate frame-buffer positions along the current scan-line between pairwise intersection points

Main idea

Scan-Line filling, contd


Problems with intersection points that are vertices: Basic rule: count them as if each vertex is being two points (one to each of the two joining edges in the vertex) Exception: if the two edges joining in the vertex are on opposite sides of the scan-line, then count the vertex only once (require some additional processing)

Vertex problem

Scan-Line filling, contd


Time-consuming to locate the intersection points! If an edge is crossed by a scan-line, most probably also the next scan-line will cross it (the use of coherence properties)

Scan-Line filling, contd


Each edge is well described by an edge record:
- ymax
- x0 (initially the x related to ymin)
- Δx/Δy (inverse of the slope)
- (possibly also Δx and Δy)
Δx/Δy is used for incremental calculation of the intersection points.

Edge Records

Scan-Line filling, contd


The intersection point (xn, yn) between an edge and scan-line yn follows from the line equation of the edge:
yn = (Δy/Δx)·xn + b   (cp. y = m·x + b)
The intersection between the same edge and the next scan-line yn+1 is then given by:
yn+1 = (Δy/Δx)·xn+1 + b   and also   yn+1 = yn + 1 = (Δy/Δx)·xn + b + 1

Scan-Line filling, contd


This gives us:
xn+1 = xn + Δx/Δy,   n = 0, 1, 2, ...
i.e. the new value of x on the next scan-line is given by adding the inverse of the slope to the current value of x.

Scan-Line filling, contd


An active list of edge records intersecting with the current scan-line is sorted on increasing x-coordinates The polygon pixels that are written in the frame buffer are those which are calculated to be on the current scan-line between pairwise x-coordinates according to the active list

Scan-Line filling, contd


When changing from one scan-line to the next, the active edge list is updated:  a record with ymax < the next scan-line is removed  in the remaining records, x0 is incremented and rounded to the nearest integer  an edge with ymin = the next scan-line is included in the list
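A compact sketch of the scan-line idea under simplifying assumptions: no active-edge-list bookkeeping (intersections are recomputed for every scan line) and vertex special cases are sidestepped by sampling at pixel centres. drawSpan and Point2d are hypothetical helpers.

#include <stdlib.h>

typedef struct { double x, y; } Point2d;

static int cmpDouble (const void *a, const void *b)
{
    double d = *(const double *) a - *(const double *) b;
    return (d > 0) - (d < 0);
}

/* Fill a polygon with n vertices; drawSpan(y, x1, x2) sets the pixels of one run. */
void scanFill (const Point2d v[], int n, int ymin, int ymax,
               void (*drawSpan)(int y, int x1, int x2))
{
    double *xs = (double *) malloc (n * sizeof (double));
    for (int y = ymin; y <= ymax; y++) {
        int count = 0;
        double yc = y + 0.5;                     /* sample at pixel centre */
        for (int i = 0; i < n; i++) {
            Point2d p1 = v[i], p2 = v[(i + 1) % n];
            if ((p1.y <= yc && p2.y > yc) || (p2.y <= yc && p1.y > yc))
                /* x of the intersection: x1 + (yc - y1) * dx/dy */
                xs[count++] = p1.x + (yc - p1.y) * (p2.x - p1.x) / (p2.y - p1.y);
        }
        qsort (xs, count, sizeof (double), cmpDouble);
        for (int i = 0; i + 1 < count; i += 2)   /* fill between pairwise intersections */
            drawSpan (y, (int) (xs[i] + 0.5), (int) (xs[i + 1] + 0.5));
    }
    free (xs);
}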

Scan-Line Filling Example

2) Interactive Filling
Given the boundaries of a closed surface, an arbitrary interior point is chosen and the complete interior of the surface is filled with the color of the user's choice.

Interactive Filling, contd


Definition: An area or a boundary is said to be 4-connected if, from an arbitrary point, all other pixels within the area or on the boundary can be reached by moving only in horizontal or vertical steps. If diagonal steps are also allowed, the area or boundary is said to be 8-connected.

4/8-connected

Interactive Filling, contd


A recursive procedure for filling a 4-connected (or 8-connected) area can easily be defined. Assume that the area is to be given the same color as the boundary (this can easily be modified). The first interior position (pixel) is chosen by the user.

Interactive Filling, algorithm


void fill (int x, int y, int fillColor)
{
    int interiorColor;
    interiorColor = getPixel (x, y);
    if (interiorColor != fillColor) {
        setPixel (x, y, fillColor);
        fill (x + 1, y, fillColor);
        fill (x, y + 1, fillColor);
        fill (x - 1, y, fillColor);
        fill (x, y - 1, fillColor);
    }
}

Inside-Outside test
When is a point an interior point?
Odd-Even Rule: conceptually draw a line from the specified point to a distant point outside the coordinate space and count the number of polygon edges that are crossed:
if odd  => interior
if even => exterior
Note! The vertices require special handling.
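A minimal sketch of the odd-even rule as a point-in-polygon test; the horizontal ray toward +x and the half-open edge test (which sidesteps the vertex problem mentioned above) are implementation choices, not from the notes. Point2d is the same struct as in the scan-line sketch above.

/* Return 1 if (px, py) is interior by the odd-even rule, 0 otherwise. */
int insideOddEven (const Point2d v[], int n, double px, double py)
{
    int crossings = 0;
    for (int i = 0; i < n; i++) {
        Point2d p1 = v[i], p2 = v[(i + 1) % n];
        /* Half-open test: each vertex is counted with exactly one of its two edges. */
        if ((p1.y <= py && p2.y > py) || (p2.y <= py && p1.y > py)) {
            double xCross = p1.x + (py - p1.y) * (p2.x - p1.x) / (p2.y - p1.y);
            if (xCross > px)
                crossings++;
        }
    }
    return crossings % 2;                /* odd => interior, even => exterior */
}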

Text
Representation:
* bitmapped (raster)
  + fast
  - more storage
  - less good for styles/sizes
* outlined (lines and curves)
  + less storage
  + good for styles/sizes
  - slower

Other output primitives


* pattern (to fill an area) normally, an n x m rectangular color pixel array with a specified reference point * polymarker (marker symbol) a character representing a point * (polyline) a connected sequence of line segments

Attributes
Influence the way a primitive is displayed Two main ways of introducing attributes: 1) added to primitives parameter list
e.g. setPixel(x, y, color)
2)

a list of current attributes (to be updated when changed) e.g setColor(color); setPixel(x, y);

Attributes for lines


Lines (and curves) are normally infinitely thin. Attributes:
- type: dashed, dotted, solid, dot-dashed; defined by a pixel mask, e.g. 11100110011
- width: problems with the joins; line caps are used to adjust the shape
- pen/brush shape
- color (intensity)

Lines with width




Line caps

Joins

Attributes for area fill




fill style: hollow, solid, pattern, hatch fill
color
pattern
tiling

Tiling
Tiling = filling surfaces (polygons) with a rectangular pattern

Attributes for characters/strings


       

style font (typeface) color size (width/height) orientation path spacing alignment

Text attributes

Text attributes, contd

Text attributes, contd

Text attributes, contd

Color as attribute
Each color has a numerical value, or intensity, based on some color model. A color model typically consists of three primary colors, in the case of displays Red, Green and Blue (RGB) For each primary color an intensity can be given, either 0-255 (integer) or 0-1 (float) yielding the final color 256 different levels of each primary color means 3x8=24 bits of information to store

Color representations
Two different ways of storing a color value: 1) a direct color value storage/pixel 2) indirectly via a color look-up table index/pixel (typically 256 or 512 different colors in the table)

Color Look-up Table

Antialiasing
Aliasing the fact that exact points are approximated by fixed pixel positions Antialiasing = a technique that compensates for this (more than one intensity level/pixel is required)

Antialiasing, a method
A polygon will be studied (as an example). Area sampling (prefiltering): a pixel that is only partly included in the exact polygon, will be given an intensity that is proportional to the extent of the pixel area that is covered by the true polygon

Area sampling
P = polygon intensity
B = background intensity
f = the extent of the pixel area covered by the true polygon
pixel intensity = P*f + B*(1 - f)
Note! It is time-consuming to calculate f.
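A one-line computation of the blend above; the coverage fraction f would come from a (costly) geometric calculation and is simply passed in here.

/* Blend polygon and background intensities by the coverage fraction f (0..1). */
float pixelIntensity (float P, float B, float f)
{
    return P * f + B * (1.0f - f);
}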

Topics
 

Clipping Cohen-Sutherland Line Clipping Algorithm

Clipping


Why clipping?
Not everything defined in world coordinates is inside the world window.

Where does clipping take place?
In the pipeline: Model -> Clipping -> Viewport Transformation.
OpenGL does it for you, BUT, as a CS major, you should know how it is done.

Line Clipping


int clipSegment(p1, p2, window)
Input parameters: p1, p2, window
  p1, p2: 2D endpoints that define a line
  window: aligned rectangle
Returned value:
  1, if part of the line is inside the window
  0, otherwise
Output parameters: p1, p2
  p1's and/or p2's values may be changed so that both p1 and p2 lie inside the window

Line Clipping


Example

Lines AB, BC, CD, DE and EA are clipped against the window; for each line the algorithm returns 0 or 1 and, when it returns 1, the (possibly moved) endpoints P1–P4 shown in the figure.

Cohen-Sutherland Line Clipping Algorithm




Trivial accept and trivial reject

If both endpoints within window trivial accept If both endpoints outside of same boundary of
window trivial reject


Otherwise

Clip against each edge in turn


Throw away clipped off part of line each time


How can we do it efficiently (elegantly)?

Cohen-Sutherland Line Clipping Algorithm




Examples:

Lines L1–L6 against the window: which are trivially accepted and which trivially rejected?

Cohen-Sutherland Line Clipping Algorithm




Use region outcode

Cohen-Sutherland Line Clipping Algorithm




outcode[1] <- (x < Window.left)
outcode[2] <- (y > Window.top)
outcode[3] <- (x > Window.right)
outcode[4] <- (y < Window.bottom)
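A minimal sketch of computing the 4-bit outcode; the Window struct and bit layout are assumptions consistent with the list above.

typedef struct { double left, right, top, bottom; } Window;

enum { LEFT = 1, TOP = 2, RIGHT = 4, BOTTOM = 8 };

/* Build the region outcode for point (x, y); 0 means inside (all flags false). */
unsigned outcode (double x, double y, Window w)
{
    unsigned code = 0;
    if (x < w.left)   code |= LEFT;
    if (y > w.top)    code |= TOP;
    if (x > w.right)  code |= RIGHT;
    if (y < w.bottom) code |= BOTTOM;
    return code;
}

/* Trivial accept: both codes are 0.  Trivial reject: (code1 & code2) != 0. */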

Cohen-Sutherland Line Clipping Algorithm




Both outcodes are FFFF -> trivial accept

Logical AND of the two outcodes ≠ FFFF -> trivial reject

Logical AND of the two outcodes = FFFF -> can't tell: clip against each edge in turn, throwing away the clipped-off part of the line each time.

Cohen-Sutherland Line Clipping Algorithm




Examples:

Lines L1–L6 against the window: what are their outcodes? Which are trivially accepted or rejected?

Cohen-Sutherland Line Clipping Algorithm


int clipSegment(Point2& p1, Point2& p2, RealRect W)
do
    if (trivial accept) return 1;
    else if (trivial reject) return 0;
    else
        if (p1 is inside) swap(p1, p2)
        if (p1 is to the left)       chop against the left edge
        else if (p1 is to the right) chop against the right edge
        else if (p1 is below)        chop against the bottom edge
        else if (p1 is above)        chop against the top edge
while (1);

Cohen-Sutherland Line Clipping Algorithm




A segment that requires 4 clips

Cohen-Sutherland Line Clipping Algorithm




How do we chop against each boundary? Given P1 (outside) and P2, what is the intersection point A = (A.x, A.y)?

Cohen-Sutherland Line Clipping Algorithm




Let dx = p1.x - p2.x, dy = p1.y - p2.y
A.x = w.r   (the right edge of the window)
d = p1.y - A.y,  e = p1.x - w.r
By similar triangles, d/dy = e/dx, so

p1.y - A.y = (dy/dx)(p1.x - w.r)
A.y = p1.y - (dy/dx)(p1.x - w.r) = p1.y + (dy/dx)(w.r - p1.x)

As A becomes the new P1:
p1.y += (dy/dx)(w.r - p1.x)
p1.x = w.r

Q: Will we have a divide-by-zero problem?
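A minimal sketch of the chop against the right boundary w.r derived above; Point2 is a simple struct assumed here, and the caller guarantees p1 lies to the right of w.r (so dx is nonzero and no divide-by-zero occurs).

typedef struct { double x, y; } Point2;

/* Move p1 (outside, right of w_r) onto the right boundary along the segment p1-p2. */
void chopRight (Point2 *p1, const Point2 *p2, double w_r)
{
    double dx = p1->x - p2->x;
    double dy = p1->y - p2->y;
    p1->y += (dy / dx) * (w_r - p1->x);  /* slide p1 along the line to x = w_r */
    p1->x  = w_r;
}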

UNIT-II

THREEDIMENSIONAL CONCEPTS

3D

VIEWING

3D Viewing-contents

Viewing pipeline Viewing coordinates Projections View volumes and general projection
transformations clipping

3D Viewing
 

 

World coordinate system(where the objects are modeled and defined) Viewing coordinate system(viewing objects with respect to another user defined coordinate system) Scene coordinate system(a viewing coordinate system chosen to be at the centre of a scene) Object coordinate system(a coordinate system specific to an object.)

3D viewing


Simple camera analogy is adopted

3D viewing-pipeline

3D viewing


Defining the viewing coordinate system and specifying the view plane

3D viewing
steps to establish a Viewing coordinate system or view reference coordinate system and the view plane

First pick up a world coordinate position called the view reference point. This is the origin of the VC system Pick up the +ve direction for the Zv axis and the orientation of the view plane by specifying the view plane normal vector N. Choose a world coordinate position and this point establishes the direction for N relative to either the world or VC origin. The view plane normal vector is the directed line segment.

3D viewing

steps to establish a Viewing coordinate system or view reference coordinate system and the view plane

Some packages allow us to choose a look at point relative to the view reference point. Or set up a Left handed viewing system and take the N and the +ve Zv axis from the viewing origin to the look- at point.

3D viewing

steps to establish a Viewing coordinate system or view reference coordinate system and the view plane

We now choose the view up vector V. It can be specified as a twist angle θ about the Zv axis. Using N and V, the vector U can be computed. Graphics packages generally allow users to choose the position of the view plane along the Zv axis by specifying the view plane distance from the viewing origin. The view plane is always parallel to the XvYv plane.

3D viewing

To obtain a series of views of a scene we can keep the view reference point fixed and change the direction of N or we can fix N direction and move the view reference point around the scene.

Transformation from world to viewing coordinate system

3D viewing
Mwc,vc = Rz · Ry · Rx · T

The composite transformation: (a) invert the viewing z axis if required; (b) translate the viewing origin to the world origin; (c) rotate about the world x axis to bring the viewing z axis into the xz plane of the world system; (d) rotate about the world y axis to align the two z axes; (e) rotate about the world z axis to align the two viewing systems.

What Are Projections?


Our 3-D scenes are all specified in 3-D world coordinates To display these we need to generate a 2-D image - project objects onto a picture plane

Picture Plane

Objects in World Space

Converting From 3-D To 2-D




Projection is just one part of the process of converting from 3-D world coordinates to a 2-D image

3-D world-coordinate output primitives
  -> Clip against view volume
  -> Project onto projection plane
  -> Transform to 2-D device coordinates
  -> 2-D device coordinates

Types Of Projections


There are two broad classes of projection:


Parallel: Typically used for architectural and engineering drawings Perspective: Realistic looking and used in computer graphics

Parallel Projection

Perspective Projection

Taxonomy Of Projections

Types Of Projections


There are two broad classes of projection:

Parallel:
- preserves relative proportions of objects
- accurate views of various sides of an object can be obtained
- does not give a realistic representation of the appearance of a 3D object

Perspective:
- produces realistic views but does not preserve relative proportions
- projections of distant objects are smaller than the projections of objects of the same size that are closer to the projection plane

Parallel Projections


Some examples of parallel projections


Orthographic oblique

Orthographic Projection(axonometric)

Parallel Projections


Some examples of parallel projections

The projection plane is aligned so that it intersects each coordinate axes in which the object is defined (principal axes) at the same distance from the origin. All the principal axes are foreshortened equally.

Isometric projection for a cube

Parallel Projections
Transformation equations for an orthographic parallel projections is simple Any point (x,y,z) in viewing coordinates is transformed to projection coordinates as Xp=X Yp=Y

Parallel Projections

Oblique projections

The transformation equations for an oblique projection are:

[ xp ]   [ 1  0  L1·cosφ  0 ] [ x ]
[ yp ] = [ 0  1  L1·sinφ  0 ] [ y ]
[ zp ]   [ 0  0  0        0 ] [ z ]
[ wp ]   [ 0  0  0        1 ] [ 1 ]

Parallel Projections
An orthographic projection is obtained when L1 = 0. In effect, the projection matrix shears planes of constant z and projects them onto the view plane.

Two common oblique parallel projections:


Cavalier and Cabinet

Parallel Projections
Two common oblique parallel projections:

Cavalier projection: tan α = 1, α = 45°
All lines perpendicular to the projection plane are projected with no change in length.

Cabinet projection: tan α = 2, α ≈ 63.4°
Lines perpendicular to the viewing surface are projected at one-half their length, which makes cabinet projections look more realistic than cavalier projections.

Perspective Projections
The visual effect is similar to the human visual system: there is perspective foreshortening (the size of an object varies inversely with its distance from the center of projection), and angles remain intact only for faces parallel to the projection plane.

Perspective Projections

where u varies from 0 to 1

Perspective Projections

Perspective Projections
If the PRP is selected at the viewing coordinate origin, then zprp = 0 and the projection coordinates become:
xp = x (zvp / z),  yp = y (zvp / z)
If the view plane is the uv plane itself, then zvp = 0 and the projection coordinates become:
xp = x (zprp / (zprp − z)) = x (1 / (1 − z/zprp))
yp = y (zprp / (zprp − z)) = y (1 / (1 − z/zprp))
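A minimal sketch of the second case above (view plane at zvp = 0, projection reference point at zprp on the z axis); the names are illustrative.

/* Perspective-project a viewing-coordinate point onto the view plane z = 0,
   with the projection reference point at (0, 0, zprp). */
void perspectiveProject (double x, double y, double z, double zprp,
                         double *xp, double *yp)
{
    double t = zprp / (zprp - z);   /* equals 1 / (1 - z/zprp) */
    *xp = x * t;
    *yp = y * t;
}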

Perspective Projections
 

There are a number of different kinds of perspective views The most common are one-point and two point perspectives

Coordinate description One-point perspective projection

Two-point perspective projection

Perspective Projections
Parallel lines that are parallel to the view plane are projected as parallel lines. The point at which a set of projected parallel lines appear to converge is called a vanishing point. If a set of lines are parallel to one of the three principle axes, the vanishing point is called an principal vanishing point. There are at most 3 such points, corresponding to the number of axes cut by the projection plane.

View volume

View volume

Parallel projection The size of the view volume depends on the size of the window but the shape depends on the type of projection to be used. Both near and far planes must be on the same side of the reference point.

Perspective projection

View volume

Often the view plane is positioned at the view reference point or on the front clipping plane while generating parallel projection. Perspective effects depend on the positioning of the projection reference point relative to the view plane

View volume - PHIGS


View Plane Front Clipping Plane

VPN

VRP

Back Clipping Plane

Direction of Propagation

B F
View Plane Back Clipping Plane

Front Clipping Plane

View Plane

Back Clipping Plane Direction of Propagation

Front Clipping Plane

VPN

VRP

VPN

VRP

View volume
In an animation sequence, we can place the projection reference point at the viewing coordinate origin and put the view plane in front of the scene. We set the field of view by adjusting the size of the window relative to the distance of the view plane from the PRP. We move through the scene by moving the viewing reference frame and the PRP will move with the view reference point.

General parallel projection transformation  Parallel


Far Plane Near Plane View Volume
N

Direction of Projection

Window (a) Original Orientation

Far Plane Near Plane

View Volume
N

Direction of Projection Window

(b) After Shearing

General parallel projection transformation  Parallel


Let Vp = (a, b, c) be the projection vector in viewing coordinates. The shear transformation maps Vp to (0, 0, c): V'p = Mparallel · Vp, where

            [ 1 0 a1 0 ]
Mparallel = [ 0 1 b1 0 ]     with a1 = −a/c,  b1 = −b/c
            [ 0 0 1  0 ]
            [ 0 0 0  1 ]

For an orthographic parallel projection, a1 = b1 = 0 and Mparallel becomes the identity matrix.

General perspective projection transformation Regularization of Clipping


(View) Volume
Far View Volume Near
N N

(Cont)
View Volume

Perspective
Far

Near

Window

Window

Center of Projection

Center of Projection (b) After Transformation

(a) Original Orientation

Shearing

General perspective projection  Perspective transformation


Steps 1. Shear the view volume so that the centerline of the frustum is perpendicular to the view plane 2. Scale the view volume with a scaling factor that depends on 1/z. A shear operation is to align a general perspective view volume with the projection window. The transformation involves a combination of z-axis shear and a translation. Mperspective=Mscale.Mshear

Clipping
View volume clipping boundaries are planes whose orientations depend on the type of projection, the projection window and the position of the projection reference point The process of finding the intersection of a line with one of the view volume boundaries is simplified if we convert the view volume before clipping to a rectangular parallelepiped. i.e we first perform the projection transformation which converts coordinate values in the view volume to orthographic parallel coordinates. Oblique projection view volumes are converted to a rectangular parallelepiped by the shearing operation and perspective view volumes are converted with a combination of shear and scale transformations.

Clipping-normalized view volumes


The normalized view volume is a region defined by the planes X=0, x=1, y=0, y=1, z=0, z=1

Clipping-normalized view volumes


There are several advantages to clipping against the unit cube 1. The normalized view volume provides a standard shape for representing any sized view volume. 2. Clipping procedures are simplified and standardized with unit clipping planes or the viewport planes. 3. Depth cueing and visible-surface determination are simplified, since Z-axis always points towards the viewer.


Clipping-normalized view volumes


Mapping positions within a rectangular view volume to a three-dimensional rectangular viewport is accomplished with a combination of scaling and translation:

[ Dx 0  0  Kx ]
[ 0  Dy 0  Ky ]
[ 0  0  Dz Kz ]
[ 0  0  0  1  ]

where
Dx = (xvmax − xvmin) / (xwmax − xwmin)  and  Kx = xvmin − xwmin·Dx
Dy = (yvmax − yvmin) / (ywmax − ywmin)  and  Ky = yvmin − ywmin·Dy
Dz = (zvmax − zvmin) / (zwmax − zwmin)  and  Kz = zvmin − zwmin·Dz
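A minimal sketch of the window-to-viewport mapping with the D and K factors above; the six window and six viewport extents are passed explicitly.

/* Map (xw, yw, zw) from the normalized view volume to the 3D viewport. */
void mapToViewport (double xw, double yw, double zw,
                    double xwmin, double xwmax, double ywmin, double ywmax,
                    double zwmin, double zwmax,
                    double xvmin, double xvmax, double yvmin, double yvmax,
                    double zvmin, double zvmax,
                    double *xv, double *yv, double *zv)
{
    double Dx = (xvmax - xvmin) / (xwmax - xwmin);
    double Dy = (yvmax - yvmin) / (ywmax - ywmin);
    double Dz = (zvmax - zvmin) / (zwmax - zwmin);
    *xv = Dx * xw + (xvmin - xwmin * Dx);   /* Kx = xvmin - xwmin*Dx */
    *yv = Dy * yw + (yvmin - ywmin * Dy);
    *zv = Dz * zw + (zvmin - zwmin * Dz);
}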

Viewport clipping
For a line endpoint at position (x, y, z) we assign the bit positions in the region code, from right to left, as:
Bit 1 = 1 if x < xvmin (left)
Bit 2 = 1 if x > xvmax (right)
Bit 3 = 1 if y < yvmin (below)
Bit 4 = 1 if y > yvmax (above)
Bit 5 = 1 if z < zvmin (front)
Bit 6 = 1 if z > zvmax (back)

Viewport clipping
For a line segment with endpoints P1(x1,y1,z1) and P2(x2,y2,z2) the parametric equations can be X=x1+(x2-x1)u Y=y1+(y2-y1)u Z=z1+(z2-z1)u

Hardware implementations
Transformation Operations
WORLD-COORDINATE Object descriptions

Clipping Operations

Conversion to Device Coordinates

3D Transformations

2D coordinates (x, y) extend to 3D coordinates (x, y, z); we use a right-handed coordinate system.

3D Transformations (cont.)
1. Translation in 3D is a simple extension from that in 2D:

                [ 1 0 0 dx ]
T(dx, dy, dz) = [ 0 1 0 dy ]
                [ 0 0 1 dz ]
                [ 0 0 0 1  ]

2. Scaling is similarly extended:

                [ sx 0  0  0 ]
S(sx, sy, sz) = [ 0  sy 0  0 ]
                [ 0  0  sz 0 ]
                [ 0  0  0  1 ]

3D Transformations (cont.)
3. The 2D rotation introduced previously is just a 3D rotation about the z axis:

        [ cosθ −sinθ 0 0 ]
Rz(θ) = [ sinθ  cosθ 0 0 ]
        [ 0     0    1 0 ]
        [ 0     0    0 1 ]

Similarly we have:

        [ 1 0     0     0 ]            [ cosθ  0 sinθ 0 ]
Rx(θ) = [ 0 cosθ −sinθ  0 ]    Ry(θ) = [ 0     1 0    0 ]
        [ 0 sinθ  cosθ  0 ]            [ −sinθ 0 cosθ 0 ]
        [ 0 0     0     1 ]            [ 0     0 0    1 ]

Composition of 3D Rotations
In 3D transformations, the order of a sequence of rotations matters!

Rz(α) Ry(β) = [ cosα −sinα 0 ] [ cosβ  0 sinβ ]   [ cosα·cosβ  −sinα  cosα·sinβ ]
              [ sinα  cosα 0 ] [ 0     1 0    ] = [ sinα·cosβ   cosα  sinα·sinβ ]
              [ 0     0    1 ] [ −sinβ 0 cosβ ]   [ −sinβ       0     cosβ      ]

Ry(β) Rz(α) = [ cosβ  0 sinβ ] [ cosα −sinα 0 ]   [ cosα·cosβ  −sinα·cosβ  sinβ ]
              [ 0     1 0    ] [ sinα  cosα 0 ] = [ sinα        cosα       0    ]
              [ −sinβ 0 cosβ ] [ 0     0    1 ]   [ −cosα·sinβ  sinα·sinβ  cosβ ]

Ry(β) Rz(α) ≠ Rz(α) Ry(β)

More Rotations
We have shown how to rotate about one of the principle axes, i.e. the axes constituting the coordinate system. There are more we can do, for example, to perform a rotation about an arbitrary axis:

We want to rotate an object about an axis in space passing through (x1, y1, z1) and (x2, y2, z2).

Rotating About An Arbitrary Axis


1) Translate the object by (-x1, -y1, -z1): T(-x1, -y1, -z1)
2) Rotate the axis about x so that it lies in the xz plane: Rx(α)
3) Rotate the axis about y so that it lies along z: Ry(β)
4) Rotate the object about z by θ: Rz(θ)

Rotating About An Arbitrary Axis (cont.)


After all the effort, don't forget to undo the rotations and the translation! Therefore, the composite matrix that performs the required task of rotating an object about an arbitrary axis is

M = T(x1, y1, z1) Rx(-α) Ry(-β) Rz(θ) Ry(β) Rx(α) T(-x1, -y1, -z1)

Finding β is trivial, but what about α? The angle between the z axis and the projection of P1P2 on the yz plane is α.

Composite 3D Transformations

Example of Composite 3D Transformations


Try to transform the line segments P1P2 and P1P3 from their start position in (a) to their ending position in (b).
y P3 P1 P2 x z z P2 P3 P1 y

(a)

(b)

The first solution is to compose the primitive transformations T, Rx, Ry, and Rz. This approach is easier to illustrate and does offer help on building an understanding. The 2nd, more abstract approach is to use the properties of special orthogonal matrices.

Composition of 3D Transformations
Breaking a difficult problem into simpler sub-problems: 1.Translate P1 to the origin. 2. Rotate about the y axis such that P1P2 lies in the (y, z) plane. 3. Rotate about the x axis such that P1P2 lies on the z axis. 4. Rotate about the z axis such that P1P3 lies in the (y, z) plane.

Composition of 3D Transformations
1. Translate P1 to the origin:

                   [ 1 0 0 -x1 ]
T(-x1, -y1, -z1) = [ 0 1 0 -y1 ]
                   [ 0 0 1 -z1 ]
                   [ 0 0 0  1  ]

P1' = T(-x1, -y1, -z1) · P1 = [0  0  0  1]^T
P2' = T(-x1, -y1, -z1) · P2 = [x2-x1  y2-y1  z2-z1  1]^T
P3' = T(-x1, -y1, -z1) · P3 = [x3-x1  y3-y1  z3-z1  1]^T

2. Rotate about the y axis by -(90 - θ):

                [ sinθ  0  -cosθ  0 ]
Ry(-(90 - θ)) = [ 0     1   0     0 ]
                [ cosθ  0   sinθ  0 ]
                [ 0     0   0     1 ]

Composition of 3D Transformations
3. Rotate about the x axis by γ:

        [ 1  0     0      0 ]
Rx(γ) = [ 0  cosγ  -sinγ  0 ]
        [ 0  sinγ   cosγ  0 ]
        [ 0  0      0     1 ]

4. Rotate about the z axis by α: Rz(α)

Finally, we have the composite matrix:

Rz(α) Rx(γ) Ry(-(90 - θ)) T(-x1, -y1, -z1) = R · T

Vector Rotation
Rotate the unit vector along the x axis, [1, 0]^T, about the origin by θ. The resulting vector is

u = [ cosθ  -sinθ ] [ 1 ]   [ cosθ ]
    [ sinθ   cosθ ] [ 0 ] = [ sinθ ]

Vector Rotation (cont.)


Similarly, the unit vector along the y axis, [0, 1]^T, after rotating about the origin by θ becomes

v = [ cosθ  -sinθ ] [ 0 ]   [ -sinθ ]
    [ sinθ   cosθ ] [ 1 ] = [  cosθ ]

The above result states that if we rotate a vector originally pointing in the direction of the x (or y) axis toward a new direction u (or v), the rotation matrix R can be written simply as [u | v], without explicit knowledge of θ, the actual rotation angle.

Vector Rotation (cont.)


The reverse operation rotates a vector that is not originally pointing in the x (or y) direction into the direction of the positive x (or y) axis. The rotation matrix in this case is R(-θ), i.e. R⁻¹(θ):

R⁻¹(θ) = [ cos(-θ)  -sin(-θ) ]   [  cosθ  sinθ ]   [ u^T ]
         [ sin(-θ)   cos(-θ) ] = [ -sinθ  cosθ ] = [ v^T ] = R^T(θ)

where T denotes the transpose.

Example
What is the rotation matrix if one wants a vector pointing along the positive x axis to be rotated to the direction of u = (2, 3)?

u / |u| = [2  3]^T / √(2² + 3²) = [2/√13  3/√13]^T

R = [ u | v ] = [ 2/√13  -3/√13 ]
                [ 3/√13   2/√13 ]

If, on the other hand, one wants the vector u to be rotated to the direction of the positive x axis, the rotation matrix should be

R = [ u^T ]   [  2/√13  3/√13 ]
    [ v^T ] = [ -3/√13  2/√13 ]

Rotation Matrices
Rotation matrices are orthonormal. Each row is a unit vector:

R = [ cosθ  -sinθ ]      cos²θ + (-sinθ)² = 1,   sin²θ + cos²θ = 1
    [ sinθ   cosθ ]

and the rows are perpendicular to each other, i.e. their dot product is zero:

cosθ·sinθ + (-sinθ)·cosθ = 0

Each row vector is rotated by R(θ) to lie on the positive x and y axes, respectively; the two column vectors are those into which vectors along the positive x and y axes are rotated. For orthonormal matrices,

R⁻¹(θ) = R^T(θ)

Cross Product
The cross product (or vector product) of two vectors v1 and v2 is another vector:

          [  y1·z2 - y2·z1   ]
v1 × v2 = [ -(x1·z2 - x2·z1) ]
          [  x1·y2 - x2·y1   ]

The cross product of two vectors is orthogonal to both, and the right-hand rule dictates its direction.
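A direct transcription of the formula above; Vec3 is an assumed struct.

typedef struct { double x, y, z; } Vec3;

/* v1 x v2: orthogonal to both operands, direction by the right-hand rule. */
Vec3 cross (Vec3 v1, Vec3 v2)
{
    Vec3 r;
    r.x = v1.y * v2.z - v2.y * v1.z;
    r.y = -(v1.x * v2.z - v2.x * v1.z);
    r.z = v1.x * v2.y - v2.x * v1.y;
    return r;
}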

Extension to 3D Cases
The above examples can be extended to 3D cases.

In 2D, we need to know u, which will be rotated to the direction of the positive x axis. In 3D, however, we need to know more than one vector. Suppose, for example, two vectors u1 and u2 are given. If, after rotation, u1 is aligned to the positive z axis, this only gives us the third column of the rotation matrix. What about the other two columns? Take v = u1 × u2.

3D Rotation
In many cases in 3D, only one vector will be aligned to one of the coordinate axes, and the others are often not explicitly given. Consider the example: vector P1P2 will be rotated to the positive z direction, hence the normalised P1P2 gives one column of the rotation matrix. But what about the other two columns? After all, P1P3 is not perpendicular to P1P2. We can find another by taking the cross product of P1P2 and P1P3: since P1P2 × P1P3 is perpendicular to both P1P2 and P1P3, it will be aligned with the direction of the positive x axis.

3D Rotation (cont.)
The third direction is decided by the cross product of the other two directions, P1P2 × (P1P2 × P1P3). Therefore, the rotation matrix is

R = [ (P1P2 × P1P3)/|P1P2 × P1P3|    P1P2 × (P1P2 × P1P3)/|P1P2 × (P1P2 × P1P3)|    P1P2/|P1P2| ]

Yaw, Pitch, and Roll


Imagine three lines running through an airplane and intersecting at right angles at the airplane's centre of gravity.

Roll: rotation around the front-to-back axis.

Pitch: rotation around the side-to-side axis.

Yaw: rotation around the vertical axis.

An Example of the Airplane


Consider the following example. An airplane is oriented such that its nose is pointing in the positive z direction, its right wing is pointing in the positive x direction, and its cockpit is pointing in the positive y direction. We want to transform the airplane so that it heads in the direction given by the vector DOF (direction of flight), is centred at P, and is not banked.

Solution to the Airplane Example


First we rotate the positive zp direction into the direction of DOF, which gives us the third column of the rotation matrix: DOF / |DOF|. The xp axis must be transformed into a horizontal vector perpendicular to DOF, in the direction of y × DOF. The yp direction is then given by xp × zp = DOF × (y × DOF).

R = [ (y × DOF)/|y × DOF|    DOF × (y × DOF)/|DOF × (y × DOF)|    DOF/|DOF| ]

Inverses of (2D and) 3D Transformations


1. Translation:  T⁻¹(dx, dy, dz) = T(-dx, -dy, -dz)

2. Scaling:      S⁻¹(sx, sy, sz) = S(1/sx, 1/sy, 1/sz)

3. Rotation:     R⁻¹(θ) = R(-θ) = R^T(θ)

4. Shear:        SH⁻¹(shx, shy) = SH(-shx, -shy)

UNIT-III

GRAPHICS PROGRAMMING

Color Models

Color models,contd
Different meanings of color:  painting  wavelength of visible light  human eye perception

Physical properties of light


Visible light is part of the electromagnetic radiation spectrum (380–750 nm).
1 nm (nanometer) = 10⁻⁹ m (= 10⁻⁷ cm)
1 Å (angstrom) = 0.1 nm
Radiation can be expressed in wavelength (λ) or frequency (f), with c = λf, where c = 3·10¹⁰ cm/sec.

Physical properties of light


White light consists of a spectrum of all visible colors

Physical properties of light


All kinds of light can be described by the energy of each wavelength The distribution showing the relation between energy and wavelength (or frequency) is called energy spectrum

Physical properties of light

This distribution may indicate: 1) a dominant wavelength (or frequency) which is the color of the light (hue), cp. ED 2) brightness (luminance), intensity of the light (value), cp. the area A 3) purity (saturation), cp. ED - EW

Physical properties of light


Energy spectrum for a light source with a dominant frequency near the red color

Material properties
The color of an object depends on the so called spectral curves for transparency and reflection of the material The spectral curves describe how light of different wavelengths are refracted and reflected (cp. the material coefficients introduced in the illumination models)

Properties of reflected light


Incident white light upon an object is for some wavelengths absorbed, for others reflected E.g. if all light is absorbed => black If all wavelengths but one are absorbed => the one color is observed as the color of the object by the reflection

Color definitions
Complementary colors - two colors combine to produce white light Primary colors - (two or) three colors used for describing other colors Two main principles for mixing colors:  additive mixing  subtractive mixing

Additive mixing


 

pure colors are put close to each other => a mix on the retina of the human eye (cp. RGB) overlapping gives yellow, cyan, magenta and white the typical technique on color displays

Subtractive mixing


color pigments are mixed directly in some liquid, e.g. ink each color in the mixture absorbs its specific part of the incident light the color of the mixture is determined by subtraction of colored light, e.g. yellow absorbs blue => only red and green, i.e. yellow, will reach the eye (yellow because of addition)

Subtractive mixing,contd


 

primary colors: cyan, magenta and yellow, i.e. CMY the typical technique in printers/plotters connection between additive and subtractive primary colors (cp. the color models RGB and CMY)

Additive/subtractive mixing

Human color seeing


The retina of the human eye consists of cones (7-8M),tappar, and rods (100-120M), stavar, which are connected with nerve fibres to the brain

Human color seeing,contd


Theory: the cones consist of various light absorbing material The light sensitivity of the cones and rods varies with the wavelength, and between persons The sum of  the energy spectrum of the light  the reflection spectrum of the object  the response spectrum of the eye decides the color perception for a person

Overview of color models


The human eye can perceive about 382000(!) different colors Necessary with some kind of classification system; all using three coordinates as a basis: 1) CIE standard 2) RGB color model 3) CMY color model (also, CMYK) 4) HSV color model 5) HLS color model

CIE standard
Commission Internationale de LEclairage (1931)  not a computer model  each color = a weighted sum of three imaginary primary colors

RGB model


all colors are generated from the three primaries; various colors are obtained by changing the amount of each primary; additive mixing of (r, g, b), 0 ≤ r, g, b ≤ 1

RGB model,contd
 

the RGB cube 1 bit/primary => 8 colors, 8 bits/primary => 16M colors

CMY model


 

cyan, magenta and yellow are complementary colors of red,green and blue, respectively subtractive mixing the typical printer technique

CMY model,contd


almost the same cube as with RGB; only black<-> white the various colors are obtained by reducing light, e.g. if red is absorbed => green and blue are added, i.e cyan

RGB vs CMY
If the intensities are represented as 0 ≤ r, g, b ≤ 1 and 0 ≤ c, m, y ≤ 1 (coordinates 0–255 can also be used), then the relation between RGB and CMY can be described as:

[ c ]   [ 1 ]   [ r ]
[ m ] = [ 1 ] − [ g ]
[ y ]   [ 1 ]   [ b ]
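A minimal sketch of the conversion above, with all intensities in the range [0, 1].

/* CMY = 1 - RGB (and back), with all components in the range 0..1. */
void rgbToCmy (float r, float g, float b, float *c, float *m, float *y)
{
    *c = 1.0f - r;
    *m = 1.0f - g;
    *y = 1.0f - b;
}

void cmyToRgb (float c, float m, float y, float *r, float *g, float *b)
{
    *r = 1.0f - c;
    *g = 1.0f - m;
    *b = 1.0f - y;
}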

CMYK model
For printing and graphics art industry, CMY is not enough; a fourth primary, K which stands for black, is added. Conversions between RGB and CMYK are possible, although they require some extra processing.

HSV model
 

HSV stands for Hue-Saturation-Value described by a hexcone derived from the RGB cube

HSV model,contd


Hue (0-360): the color, cp. the dominant wavelength (about 128 distinguishable levels)
Saturation (0-1): the amount of white (about 130 levels)
Value (0-1): the amount of black (about 23 levels)

HSV model,contd
The numbers given after each primary are estimates of how many levels a human being is capable to distinguish between, which (in theory) gives the total number of color nuances: 128*130*23 = 382720 In Computer Graphics, usually enough with: 128*8*15 = 16384

HLS model
Another model similar to HSV L stands for Lightness

Color models
Some more facts about colors: The distance between two colors in the color cube is not a measure of how far apart the colors are perceptionally! Humans are more sensitive to shifts in blue (and green?) than, for instance, in yellow

 COMPUTER

ANIMATIONS

Computer Animations


Any time sequence of visual changes in a scene. Size, color, transparency, shape, surface texture, rotation, translation, scaling, lighting effects, morphing, changing camera parameters(position, orientation, and focal length), particle animation. Design of animation sequences: Storyboard layout Object definitions Key-frame specifications generation of in-between frames

    

Computer Animations
   

   

Frame by frame animation Each frame is separately generated. Object defintion Objects are defined interms of basic shapes, such as polygons or splines. In addition the associated movements for each object are specified along with the shape. Storyboard It is an outline of the action Keyframe Detailed drawing of the scene at a particular instance

Computer Animations
       

Inbetweens Intermediate frames (3 to 5 inbetweens for each two key frames) Motions can be generated using 2D or 3D transformation Object parameters are stored in database Rendering algorithms are used finally Raster animations: Uses raster operations. Ex: we can animate objects along 2D motion paths using the color table transformations. Here we predefine the object at successive positions along the motion path, and set the successive blocks of pixel values to color table entries

Computer Animations
 

Computer animation languages: A typical task in animation specification is Scene description includes position of objects and light sources, defining the photometric parameters and setting the camera parameters. Action specification this involves layout of motion paths for the objects and camera. We need viewing and perspective transformations, geometric transformations, visible surface detection, surface rendering, kinematics etc., Keyframe systems designed simply to generate the inbetweens from the user specified key frames.

Computer Animations
 

     

Computer animation languages: A typical task in animation specification is Parameterized systems allow object motion characteristics to be specified as part of the object definitions. The adjustable parameters control such object charateristics as degrees of freedom, motion limitations and allowable shape changes. Scripting systems allow object specifications and animation sequences to be defined with a user-input script. From the script a library of various objects and motions can be constructed.

Computer Animations
Interpolation techniques Linear

Computer Animations
Interpolation techniques Non-linear

Computer Animations
 

   

Key frame systems - Morphing: the transformation of object shapes from one form to another is called morphing. Given two key frames for an object transformation, we first adjust the object specification in one of the frames so that the number of polygon edges (or vertices) is the same for the two frames.
Let Lk and Lk+1 denote the number of line segments in two consecutive key frames K and K+1, and define
Lmax = max(Lk, Lk+1)
Lmin = min(Lk, Lk+1)
Ne = Lmax mod Lmin
Ns = int(Lmax / Lmin)

Computer Animations
   

Steps 1. Dividing Ne edges of keyframemin into Ns+1 sections 2. Dividing the remaining lines of keyframemin into Nssections
1 3 2 2
Key frame K+1

Key frame K

Computer Animations
          

If we equalize the vertex count instead, a similar analysis follows. Let Vk and Vk+1 denote the number of vertices in two consecutive key frames K and K+1, and define
Vmax = max(Vk, Vk+1), Vmin = min(Vk, Vk+1)
Nls = (Vmax − 1) mod (Vmin − 1)
Np = int((Vmax − 1) / (Vmin − 1))
Steps:
1. Add Np points to Nls line sections of keyframe_min
2. Add Np − 1 points to the remaining edges of keyframe_min

Computer Animations
  

  

Simulating accelerations
Curve-fitting techniques are often used to specify the animation paths between key frames. To simulate accelerations we can adjust the time spacing for the in-betweens.
For constant speed we use equal-interval time spacing for the in-betweens. Suppose we want n in-betweens for key frames at times t1 and t2. The time interval between the key frames is divided into n + 1 subintervals, yielding an in-between spacing of
Δt = (t2 − t1) / (n + 1)
The time for the j-th in-between is then
tBj = t1 + j·Δt,   j = 1, 2, ..., n

Computer Animations

       

Simulating accelerations
To model an increase or decrease in speed we use trigonometric functions. To model increasing speed, we want the time spacing between frames to increase so that greater changes in position occur as the object moves faster. We can obtain an increasing interval size with the function 1 − cosθ, 0 < θ < π/2.
For n in-betweens, the time for the j-th in-between is
tBj = t1 + Δt·(1 − cos(jπ / 2(n+1))),   j = 1, 2, ..., n
For j = 1:  tB1 = t1 + Δt·(1 − cos(π / 2(n+1)))
For j = 2:  tB2 = t1 + Δt·(1 − cos(2π / 2(n+1)))
where Δt is the time difference between the two key frames.

Computer Animations

 

Simulating decelerations
To model decreasing speed, we want the time spacing between frames to decrease. We can obtain this with the function sinθ, 0 < θ < π/2.
For n in-betweens, the time for the j-th in-between is
tBj = t1 + Δt·sin(jπ / 2(n+1)),   j = 1, 2, ..., n

Computer Animations
       

Simulating both accelerations and decelerations
A combination of increasing and decreasing speed can be modeled with the function (1 − cosθ)/2, 0 < θ < π.
The time for the j-th in-between is
tBj = t1 + Δt·(1 − cos(jπ / (n+1))) / 2,   j = 1, 2, ..., n
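A minimal sketch computing the in-between times for constant speed, acceleration (1 − cos), and deceleration (sin); the mode flag is an illustrative convention, and in the trigonometric cases Δt is the full key-frame interval t2 − t1, as in the slides above.

#include <math.h>

#define PI 3.14159265358979

/* Time of the j-th of n in-betweens between key frames at t1 and t2.
   mode 0: constant speed, 1: accelerating (1 - cos), 2: decelerating (sin). */
double inbetweenTime (double t1, double t2, int n, int j, int mode)
{
    double u = j * PI / (2.0 * (n + 1));
    switch (mode) {
        case 1:  return t1 + (t2 - t1) * (1.0 - cos (u));    /* speeding up  */
        case 2:  return t1 + (t2 - t1) * sin (u);            /* slowing down */
        default: return t1 + j * (t2 - t1) / (n + 1);        /* constant     */
    }
}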

Computer Animations
  

     

Motion specifications - Direct motion specification: here we explicitly give the rotation angles and translation vectors, and the geometric transformation matrices are then applied to transform coordinate positions. A bouncing ball, for instance, can be approximated by a damped sine curve:
y(x) = A·|sin(ωx + θ0)|·e^(−kx)
where A is the initial amplitude, ω is the angular frequency, θ0 is the phase angle, and k is the damping constant.
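A minimal sketch evaluating the damped sine path above at a given x position; the parameter values are placeholders chosen by the animator.

#include <math.h>

/* Height of the bouncing ball at horizontal position x:
   y(x) = A * |sin(w*x + theta0)| * exp(-k*x) */
double bounceY (double x, double A, double w, double theta0, double k)
{
    return A * fabs (sin (w * x + theta0)) * exp (-k * x);
}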

Computer Animations
  

Motion specifications Goal directed systems We can specify the motions that are to take place in general terms that abstractly describe the actions, because they determine specific motion paramters given the goals of the animation.

Computer Animations
  

Motion specifications Kinematics Kinematic specification of of a motion can also be given by simply describing the motion path which is often done using splines. In inverse kinematics we specify the intital and final positions of objects at specified times and the motion parameters are computed by the system.

Computer Animations
  

 

Motion specifications dynamics specification of the forces that produce the velocities and accelerations. Descriptions of object behavior under the influence of forces are generally referred to as a Physically based modeling (.rigid body systems and non rigid systems such as cloth or plastic) Ex: magnetic, gravitational, frictional etc We can also use inverse dynamics to obtain the forces, given the initial and final position of objects and the type of motion.

Computer Animations
Physics based animations Ideally suited for: Large volumes of objects wind effects, liquids, Cloth animation/draping Underlying mechanisms are usually: Particle systems Mass-spring systems

Computer Animations
Physics based animations

Computer Animations
Physics based animations

Computer Animations
Some more animation techniques.

Anticipation and Staging

Computer Animations
Some more animation techniques.

Secondary Motion

Computer Animations
Some more animation techniques.

Motion Capture

Computer Graphics using OpenGL


Initial Steps in Drawing Figures

Using Open-GL


Files: .h, .lib, .dll

The entire folder gl is placed in the Include directory of Visual C++.
The individual .lib files are placed in the lib directory of Visual C++.
The individual .dll files are placed in C:\Windows\System32.

Using Open-GL (2)




Includes:

<windows.h>
<gl/gl.h>
<gl/glu.h>
<gl/glut.h>
<gl/glui.h> (if used)


Include in order given. If you use capital letters for any file or directory, use them in your include statement also.

Using Open-GL (3)




Changing project settings: Visual C++ 6.0

Project menu, Settings entry.
In Object/library modules, move to the end of the line and add glui32.lib glut32.lib glu32.lib opengl32.lib (separated by spaces from the last entry and from each other).
In Project Options, scroll down to the end of the box and add the same set of .lib files.
Close the Project menu and save the workspace.

Using Open-GL (3)




Changing Project Settings: Visual C++ .NET 2003

Project, Properties, Linker, Command Line.
In the white space at the bottom, add glui32.lib glut32.lib glu32.lib opengl32.lib.
Close the Project menu and save your solution.

Getting Started Making Pictures




Graphics display: Entire screen (a); windows system (b); [both have usual screen coordinates, with y-axis down]; windows system [inverted coordinates] (c)

Basic System Drawing Commands




setPixel(x, y, color)
  Pixel at location (x, y) gets the color specified by color.
  Other names: putPixel(), SetPixel(), or drawPoint().

line(x1, y1, x2, y2)
  Draws a line between (x1, y1) and (x2, y2).
  Other names: drawLine() or Line().

Alternative Basic Drawing




current position (cp), specifies where the system is drawing now. moveTo(x,y) moves the pen invisibly to the location (x, y) and then updates the current position to this position. lineTo(x,y) draws a straight line from the current position to (x, y) and then updates the cp to (x, y).

Example: A Square


   

moveTo(4, 4); //move to starting corner lineTo(-2, 4); lineTo(-2, -2); lineTo(4, -2); lineTo(4, 4); //close the square

Device Independent Graphics and OpenGL




Allows same graphics program to be run on many different machine types with nearly identical output.

.dll files must be with program




OpenGL is an API: it controls whatever hardware you are using, and you use its functions instead of controlling the hardware directly. OpenGL is open source (free).

Event-driven Programs


 

Respond to events, such as mouse click or move, key press, or window reshape or resize. System manages event queue. Programmer provides call-back functions to handle each event. Call-back functions must be registered with OpenGL to let it know which function handles which event. Registering function does *not* call it!

Skeleton Event-driven Program


// include OpenGL libraries
void main()
{
    glutDisplayFunc(myDisplay);     // register the redraw function
    glutReshapeFunc(myReshape);     // register the reshape function
    glutMouseFunc(myMouse);         // register the mouse action function
    glutMotionFunc(myMotionFunc);   // register the mouse motion function
    glutKeyboardFunc(myKeyboard);   // register the keyboard action function
    // ... perhaps initialize other things ...
    glutMainLoop();                 // enter the unending main loop
}
// all of the callback functions are defined elsewhere in the program

Callback Functions


glutDisplayFunc(myDisplay);
  (Re)draws the screen when the window is opened or another window is moved off it.
glutReshapeFunc(myReshape);
  Reports the new window width and height for a reshaped window. (Moving a window does not produce a reshape event.)
glutIdleFunc(myIdle);
  When nothing else is going on, simply redraws the display using
  void myIdle() { glutPostRedisplay(); }

Callback Functions (2)




glutMouseFunc(myMouse);
  Handles mouse button presses. Knows the mouse location and the nature of the button event (up or down, and which button).
glutMotionFunc(myMotionFunc);
  Handles the case when the mouse is moved with one or more mouse buttons pressed.

Callback Functions (3)




glutPassiveMotionFunc(myPassiveMotionFunc);
  Handles the case where the mouse moves within the window with no buttons pressed.
glutKeyboardFunc(myKeyboardFunc);
  Handles key presses and releases. Knows which key was pressed and the mouse location.
glutMainLoop()
  Runs forever waiting for an event. When one occurs, it is handled by the appropriate callback function.

Libraries to Include
  

GL, for which the commands begin with GL; GLUT, the GL Utility Toolkit, opens windows, develops menus, and manages events. GLU, the GL Utility Library, which provides high level routines to handle complex mathematical and drawing operations. GLUI, the User Interface Library, which is completely integrated with the GLUT library.

The GLUT functions must be available for GLUI to operate properly. GLUI provides sophisticated controls and menus to OpenGL applications.

A GL Program to Open a Window


// appropriate #includes go here - see Appendix 1
void main(int argc, char** argv)
{
    glutInit(&argc, argv);                        // initialize the toolkit
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);  // set the display mode
    glutInitWindowSize(640, 480);                 // set window size
    glutInitWindowPosition(100, 150);             // set window upper left corner position on screen
    glutCreateWindow("my first attempt");         // open the screen window (title: my first attempt)
    // continued on next slide

Part 2 of Window Program


    // register the callback functions
    glutDisplayFunc(myDisplay);
    glutReshapeFunc(myReshape);
    glutMouseFunc(myMouse);
    glutKeyboardFunc(myKeyboard);
    myInit();          // additional initializations as necessary
    glutMainLoop();    // go into a perpetual loop
}


Terminate program by closing window(s) it is using.

What the Code Does




glutInit (&argc, argv) initializes Open-GL Toolkit glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB) allocates a single display buffer and uses colors to draw glutInitWindowSize (640, 480) makes the window 640 pixels wide by 480 pixels high

What the Code Does (2)




glutInitWindowPosition (100, 150) puts upper left window corner at position 100 pixels from left edge and 150 pixels down from top edge glutCreateWindow (my first attempt) opens and displays the window with the title my first attempt Remaining functions register callbacks

What the Code Does (3)




The call-back functions you write are registered, and then the program enters an endless loop, waiting for events to occur. When an event occurs, GL calls the relevant handler function.

Effect of Program

Drawing Dots in OpenGL




We start with a coordinate system based on the window just created: 0 to 639 in x and 0 to 479 in y. OpenGL drawing is based on vertices (corners). To draw an object in OpenGL, you pass it a list of vertices.

The list starts with glBegin(arg); and ends with glEnd(); Arg determines what is drawn. glEnd() sends drawing data down the
OpenGL pipeline.

Example


glBegin (GL_POINTS);

glVertex2i (100, 50); glVertex2i (100, 130); glVertex2i (150, 130);


glEnd(); GL_POINTS is constant built-into OpenGL (also GL_LINES, GL_POLYGON, ) Complete code to draw the 3 dots is in Fig. 2.11.

  

Display for Dots

What Code Does: GL Function Construction

Example of Construction
 

glVertex2i () takes integer values glVertex2d () takes floating point values OpenGL has its own data types to make graphics device-independent

Use these types instead of standard ones

Open-GL Data Types


suffix   data type                C/C++ type                      OpenGL type name
b        8-bit integer            signed char                     GLbyte
s        16-bit integer           short                           GLshort
i        32-bit integer           int or long                     GLint, GLsizei
f        32-bit float             float                           GLfloat, GLclampf
d        64-bit float             double                          GLdouble, GLclampd
ub       8-bit unsigned number    unsigned char                   GLubyte, GLboolean
us       16-bit unsigned number   unsigned short                  GLushort
ui       32-bit unsigned number   unsigned int or unsigned long   GLuint, GLenum, GLbitfield

Setting Drawing Colors in GL


glColor3f(red, green, blue);   // set drawing color
glColor3f(1.0, 0.0, 0.0);   // red
glColor3f(0.0, 1.0, 0.0);   // green
glColor3f(0.0, 0.0, 1.0);   // blue
glColor3f(0.0, 0.0, 0.0);   // black
glColor3f(1.0, 1.0, 1.0);   // bright white
glColor3f(1.0, 1.0, 0.0);   // bright yellow
glColor3f(1.0, 0.0, 1.0);   // magenta

Setting Background Color in GL




glClearColor(red, green, blue, alpha);
  Sets the background color. Alpha is the degree of transparency; use 0.0 for now.
glClear(GL_COLOR_BUFFER_BIT);
  Clears the window to the background color.

Setting Up a Coordinate System


void myInit(void)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, 640.0, 0.0, 480.0);
}
// sets up a world-coordinate system for the window running from (0, 0) to (640, 480)

Drawing Lines


glBegin (GL_LINES);          // draws one line
    glVertex2i (40, 100);    // between 2 vertices
    glVertex2i (202, 96);
glEnd ();
glFlush ();
If more than two vertices are specified between glBegin(GL_LINES) and glEnd(), they are taken in pairs, and a separate line is drawn between each pair.

Line Attributes
  

Color, thickness, stippling: glColor3f() sets color; glLineWidth(4.0) sets thickness (the default thickness is 1.0). (Figure: a) thin lines, b) thick lines, c) stippled lines.)
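A small sketch (not from the text): setting line attributes before drawing. glLineStipple() is a standard OpenGL call used here to illustrate stippling.

void drawAttributeDemo (void)
{
    glColor3f (1.0, 0.0, 0.0);       // red lines
    glLineWidth (4.0);               // 4-pixel-wide lines
    glEnable (GL_LINE_STIPPLE);      // turn stippling on
    glLineStipple (1, 0x00FF);       // pattern: 8 bits on, 8 bits off
    glBegin (GL_LINES);
        glVertex2i (20, 20);
        glVertex2i (220, 20);
    glEnd ();
    glDisable (GL_LINE_STIPPLE);     // back to solid lines
    glFlush ();
}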

Setting Line Parameters


 

Polylines and Polygons: lists of vertices. Polygons are closed (right); polylines need not be closed (left).

Polyline/Polygon Drawing
 

glBegin (GL_LINE_STRIP); // GL_LINE_LOOP to close polyline (make it a polygon)

// glVertex2i () calls go here


  

glEnd (); glFlush (); A GL_LINE_LOOP cannot be filled with color

Examples


Drawing line graphs: connect successive (x, f(x)) values with line segments. The values must be scaled and shifted to fit the screen window.
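A sketch of the idea (not the book's code; the plotted function and the scale/shift values are illustrative), assuming the 640 x 480 coordinate system set up earlier:

#include <cmath>

void plotFunction (void)
{
    glClear (GL_COLOR_BUFFER_BIT);
    glBegin (GL_LINE_STRIP);
    for (int i = 0; i <= 300; i++) {
        double x = i * 0.01;               // domain 0..3
        double y = sin (3.14159 * x);      // the function f(x)
        double sx = 40.0 + 180.0 * x;      // scale and shift into the window
        double sy = 240.0 + 200.0 * y;
        glVertex2d (sx, sy);
    }
    glEnd ();
    glFlush ();
}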

Examples (2)


Drawing polyline from vertices in a file

File format:
  number of polylines
  number of vertices in the first polyline
  coordinates of the vertices, x y, one pair per line
  (repeat the last two items for each remaining polyline)
 

File for dinosaur available from Web site Code to draw polylines/polygons in Fig. 2.24.
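A hedged sketch of a reader for this file format (the book's version is Fig. 2.24; this one simply draws each polyline as a GL_LINE_STRIP):

#include <fstream>

void drawPolylineFile (const char* fileName)
{
    std::ifstream inStream (fileName);
    if (!inStream) return;                  // bail out if the file will not open
    int numPolylines;
    inStream >> numPolylines;
    for (int p = 0; p < numPolylines; p++) {
        int numVerts;
        inStream >> numVerts;
        glBegin (GL_LINE_STRIP);
        for (int v = 0; v < numVerts; v++) {
            float x, y;
            inStream >> x >> y;
            glVertex2f (x, y);
        }
        glEnd ();
    }
    glFlush ();
}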

Examples (3)

Examples (4)


Parameterizing Drawings: allows making them different sizes and aspect ratios Code for a parameterized house is in Fig. 2.27.

Examples (5)

Examples (6)
 

Polyline Drawing Code to set up an array of vertices is in Fig. 2.29. Code to draw the polyline is in Fig. 2.30.

Relative Line Drawing




 

 

Requires keeping track of the current position on the screen (CP).
moveTo(x, y): set the CP to (x, y).
lineTo(x, y): draw a line from the CP to (x, y), and then update the CP to (x, y).
Code is in Fig. 2.31; a sketch is given below.
Caution! CP is a global variable, and therefore vulnerable to tampering from instructions at other points in your program.
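A minimal sketch of such helpers (not the book's code from Fig. 2.31, but the same idea):

GLfloat cpX = 0.0, cpY = 0.0;        // the current position (global)

void moveTo (GLfloat x, GLfloat y)
{
    cpX = x;  cpY = y;               // just update the CP
}

void lineTo (GLfloat x, GLfloat y)
{
    glBegin (GL_LINES);              // draw from the CP to (x, y)
        glVertex2f (cpX, cpY);
        glVertex2f (x, y);
    glEnd ();
    glFlush ();
    cpX = x;  cpY = y;               // update the CP
}

With these helpers, the square example given earlier runs unchanged.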

Drawing Aligned Rectangles




glRecti (GLint x1, GLint y1, GLint x2, GLint y2); // opposite corners; filled with current color; later rectangles are drawn on top of previous ones

Aspect Ratio of Aligned Rectangles




Aspect ratio = width/height

Filling Polygons with Color




Polygons must be convex: any line from one boundary to another lies inside the polygon; below, only D, E, F are convex

Filling Polygons with Color (2)




glBegin (GL_POLYGON);

//glVertex2f (); calls go here


 

glEnd (); Polygon is filled with the current drawing color

Other Graphics Primitives




GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN GL_QUADS, GL_QUAD_STRIP

Simple User Interaction with Mouse and Keyboard




Register functions:

glutMouseFunc (myMouse); glutKeyboardFunc (myKeyboard);


Write the function(s) NOTE that any drawing you do when you use these functions must be done IN the mouse or keyboard function (or in a function called from within mouse or keyboard callback functions).

 

Example Mouse Function




 

void myMouse(int button, int state, int x, int y); Button is one of GLUT_LEFT_BUTTON, GLUT_MIDDLE_BUTTON, or GLUT_RIGHT_BUTTON. State is GLUT_UP or GLUT_DOWN. X and y are mouse position at the time of the event.

Example Mouse Function (2)




The x value is the number of pixels from the left of the window. The y value is the number of pixels down from the top of the window. In order to see the effects of some activity of the mouse or keyboard, the mouse or keyboard handler must call either myDisplay() or glutPostRedisplay(). Code for an example myMouse() is in Fig. 2.40.
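A hedged example (the book's version is Fig. 2.40; screenHeight is assumed to be a global holding the window height): draw a dot at each left-button click and exit on a right-button click.

#include <cstdlib>

void myMouse (int button, int state, int x, int y)
{
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN) {
        GLint yFlipped = screenHeight - y;    // convert to GL coordinates
        glBegin (GL_POINTS);
            glVertex2i (x, yFlipped);
        glEnd ();
        glFlush ();
    }
    else if (button == GLUT_RIGHT_BUTTON && state == GLUT_DOWN)
        exit (0);                             // quit the program
}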

Polyline Control with Mouse




Example use:

Code for Mouse-controlled Polyline

Using Mouse Motion Functions


  

glutMotionFunc(myMovedMouse); // moved with button held down glutPassiveMotionFunc(myMovedMouse ); // moved with buttons up myMovedMouse(int x, int y); x and y are the position of the mouse when the event occurred. Code for drawing rubber rectangles using these functions is in Fig. 2.41.

Example Keyboard Function


void myKeyboard(unsigned char theKey, int mouseX, int mouseY)
{
    GLint x = mouseX;
    GLint y = screenHeight - mouseY;   // flip y value
    switch (theKey)
    {
        case 'p':
            drawDot(x, y);             // draw dot at mouse position
            break;
        case 'E':
            exit(-1);                  // terminate the program
        default:
            break;                     // do nothing
    }
}

Example Keyboard Function (2)




Parameters to the function will always be (unsigned char key, int mouseX, int mouseY). The y coordinate needs to be flipped by subtracting it from screenHeight. Body is a switch with cases to handle active keys (key value is ASCII code). Remember to end each case with a break!

Using Menus


Both GLUT and GLUI make menus available. GLUT menus are simple, and GLUI menus are more powerful. We will build a single menu that will allow the user to change the color of a triangle, which is undulating back and forth as the application proceeds.

GLUT Menu Callback Function


  

int glutCreateMenu(myMenu);   // returns the menu ID
void myMenu(int num);         // handles choice num
void glutAddMenuEntry(char* name, int value);   // value is used in the myMenu switch to handle the choice
void glutAttachMenu(int button);   // button is one of GLUT_RIGHT_BUTTON, GLUT_MIDDLE_BUTTON, or GLUT_LEFT_BUTTON
(A small sketch follows below.)

Usually GLUT_RIGHT_BUTTON
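A small sketch (not the book's Fig. 2.44) of a menu that changes a drawing color; the color variables and menu values are illustrative.

GLfloat red = 1.0, green = 0.0, blue = 0.0;    // current drawing color

void myMenu (int value)
{
    if (value == 1) { red = 1.0; green = 0.0; blue = 0.0; }   // red
    if (value == 2) { red = 0.0; green = 1.0; blue = 0.0; }   // green
    if (value == 3) { red = 0.0; green = 0.0; blue = 1.0; }   // blue
    glutPostRedisplay ();                                     // redraw with the new color
}

void makeMenu (void)
{
    glutCreateMenu (myMenu);        // register the menu callback
    glutAddMenuEntry ("Red",   1);
    glutAddMenuEntry ("Green", 2);
    glutAddMenuEntry ("Blue",  3);
    glutAttachMenu (GLUT_RIGHT_BUTTON);
}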

GLUT subMenus


Create a subMenu first, using menu commands, then add it to main menu.

A submenu pops up when a main menu item is selected.

glutAddSubMenu (char* name, int menuID); // menuID is the value returned by glutCreateMenu when the submenu was created Complete code for a GLUT Menu application is in Fig. 2.44. (No submenus are used.)

GLUI Interfaces and Menus

GLUI Interfaces


An example program illustrating how to use GLUI interface options is available on book web site. Most of the work has been done for you; you may cut and paste from the example programs in the GLUI distribution.

UNIT-IV

RENDERING

Polygon shading model




Flat shading - compute lighting once and assign the color to the whole (mesh) polygon

Flat shading


 

Only one vertex normal and material property is used to compute the color for the whole polygon. Benefit: fast to compute. Used when:

Polygon is small enough Light source is far away (why?) Eye is very far away (why?)

Mach Band Effect


 

Flat shading suffers from the Mach band effect: human eyes accentuate the intensity discontinuity at polygon boundaries.

perceived intensity

Side view of a polygonal surface

Smooth shading
 

To fix the Mach band effect, remove the edge discontinuity: compute lighting for more points on each face.

Flat shading

Smooth shading

Smooth shading


Two popular methods:

Gouraud shading (used by OpenGL) Phong shading (better specular highlight, not
in OpenGL)

Gouraud Shading


 

The smooth-shading algorithm used in OpenGL: glShadeModel(GL_SMOOTH).
Lighting is calculated for each of the polygon vertices.
Colors are interpolated for the interior pixels.
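A minimal sketch (not from the text): Gouraud shading of one triangle in OpenGL. Each vertex is given its own normal; OpenGL lights the vertices and interpolates the resulting colors across the face. The normals and positions are illustrative.

void drawSmoothTriangle (void)
{
    glShadeModel (GL_SMOOTH);      // GL_FLAT would give flat shading
    glEnable (GL_LIGHTING);
    glEnable (GL_LIGHT0);
    glEnable (GL_NORMALIZE);       // let OpenGL normalize the normals

    glBegin (GL_TRIANGLES);
        glNormal3f (0.0, 0.0, 1.0);   glVertex3f (0.0, 0.0, 0.0);
        glNormal3f (0.3, 0.0, 0.9);   glVertex3f (1.0, 0.0, 0.0);
        glNormal3f (0.0, 0.3, 0.9);   glVertex3f (0.0, 1.0, 0.0);
    glEnd ();
    glFlush ();
}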

Gouraud Shading
  

Per-vertex lighting calculation: a normal is needed for each vertex. A per-vertex normal can be computed by averaging the adjacent face normals, e.g. for four adjacent faces with normals n1, n2, n3, n4:
    n = (n1 + n2 + n3 + n4) / 4.0

Gouraud Shading


Compute the vertex illumination (color) before the projection transformation.
Shade interior pixels by color interpolation (normals are not needed). With vertex colors C1, C2, C3:
    for all scanlines:
        Ca = lerp(C1, C2)
        Cb = lerp(C1, C3)
        interior pixel color = lerp(Ca, Cb)
(* lerp: linear interpolation)

Gouraud Shading


Linear interpolation between two values v1 and v2: if a is the distance of the point x from v2 and b is its distance from v1, then
    x = a / (a + b) * v1 + b / (a + b) * v2
Interpolate the triangle color: use the y distance to interpolate the colors of the two end points of each scanline, and the x distance to interpolate the interior pixel colors.
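A tiny sketch (not from the text) of the lerp operation used above, written with a single parameter t in [0, 1]:

struct Color { float r, g, b; };

Color lerpColor (const Color& c1, const Color& c2, float t)
{
    Color c;                              // t = 0 gives c1, t = 1 gives c2
    c.r = (1.0f - t) * c1.r + t * c2.r;
    c.g = (1.0f - t) * c1.g + t * c2.g;
    c.b = (1.0f - t) * c1.b + t * c2.b;
    return c;
}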

Gouraud Shading Problem




Lighting in the polygon interior can be inaccurate

Phong Shading


 

Instead of interpolation, we calculate lighting for each pixel inside the polygon (per pixel lighting) Need normals for all the pixels not provided by user Phong shading algorithm interpolates the normals and compute lighting during rasterization (need to map the normal back to world or eye space though)

Phong Shading


Normal interpolation (per scanline):
    na = lerp(n1, n2)
    nb = lerp(n1, n3)
    pixel normal = lerp(na, nb)
Slow; not supported by OpenGL and most graphics hardware.

UNIT-V
FRACTALS

Fractals
 

  

Fractals are geometric objects. Many real-world objects like ferns are shaped like fractals. Fractals are formed by iterations. Fractals are self-similar. In computer graphics, we use fractal functions to create complex objects.

Koch Fractals (Snowflakes)


1/3 1/3

1/3

1/3

Generator

Iteration 0

Iteration 1

Iteration 2

Iteration 3

Fractal Tree

Generator

Iteration 1

Iteration 2

Iteration 3

Iteration 4

Iteration 5

Fractal Fern

Generator

Iteration 0

Iteration 1

Iteration 2

Iteration 3

Add Some Randomness




The fractals we've produced so far seem very regular and artificial. To create some realism and variability, simply change the angles slightly, sometimes based on a random number generator. For example, you can curve some of the ferns to one side, or vary the lengths of the branches and the branching factor.

Terrain (Random Mid-point Displacement)


 

Given the heights of two end-points, generate a height at the mid-point. Suppose that the two end-points are a and b, and that the height is in the y direction, so that the height at a is y(a) and the height at b is y(b). Then the height at the mid-point will be
    ymid = (y(a) + y(b))/2 + r,   where r is the random offset.
The random offset r is generated as
    r = s·rg·|b - a|,   where
s is a user-selected roughness factor, and rg is a Gaussian random variable with mean 0 and variance 1. (A mid-point displacement sketch follows the Gaussian routine below.)

How to generate a random number with Gaussian (or normal) probability distribution
// given random numbers x1 and x2 with equal distribution from -1 to 1
// generate numbers y1 and y2 with normal distribution centered at 0.0
// and with standard deviation 1.0.
void Gaussian(float &y1, float &y2)
{
    float x1, x2, w;
    do {
        x1 = 2.0 * 0.001 * (float)(rand() % 1000) - 1.0;
        x2 = 2.0 * 0.001 * (float)(rand() % 1000) - 1.0;
        w = x1 * x1 + x2 * x2;
    } while (w >= 1.0);
    w = sqrt((-2.0 * log(w)) / w);
    y1 = x1 * w;
    y2 = x2 * w;
}
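A sketch (not from the text; names are illustrative) of recursive random mid-point displacement along a line segment, using the Gaussian() routine above. heights[] is assumed to have its two end entries already set.

#include <cstdlib>
#include <cmath>

void midpointDisplace (float heights[], int lo, int hi,
                       float xLo, float xHi, float s)
{
    if (hi - lo < 2) return;                 // no interior point left
    int   mid = (lo + hi) / 2;
    float y1, y2;
    Gaussian (y1, y2);                       // y1 is N(0, 1)
    float r = s * y1 * fabs (xHi - xLo);     // random offset
    heights[mid] = 0.5f * (heights[lo] + heights[hi]) + r;
    float xMid = 0.5f * (xLo + xHi);
    midpointDisplace (heights, lo, mid, xLo, xMid, s);
    midpointDisplace (heights, mid, hi, xMid, xHi, s);
}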

Procedural Terrain Example

Building a more realistic terrain


 

 

Notice that in the real world, valleys and mountains have different shapes. If we have the same terrain-generation algorithm for both mountains and valleys, it will result in unrealistic, alien-looking landscapes. Therefore, use different parameters for valleys and mountains. Also, can manually create ridges, cliffs, and other geographical features, and then use fractals to create detail roughness.

Fractals
  

Infinite detail at every point.
Self-similarity between parts and the overall features of the object.
Zoom into a Euclidean shape: you see more detail, but the shape eventually smooths.
Zoom in on a fractal: you see more detail, and it does not smooth.
Used to model terrain, clouds, water, trees, plants, feathers, fur, patterns.
General equation: P1 = F(P0), P2 = F(P1), P3 = F(P2), ...; so P3 = F(F(F(P0))).

Self similar fractals


Parts are scaled down versions of the entire object

use same scaling on subparts use different scaling factors for subparts


Statistically self-similar

Apply random variation to subparts


Trees, shrubs, other vegetation

Fractal types


Statistically self-affine

random variations

Sx<>Sy<>Sz


terrain, water, clouds Invariant fractal sets

Nonlinear transformations

Self-squaring fractals
  Julia-Fatou set: squaring function in complex space
  Mandelbrot set: squaring function in complex space
Self-inverse fractals
  Inversion procedures

Self-squaring fractals iterate x => x^2 + c, where x = a + bi is a complex number with modulus sqrt(a^2 + b^2).
If the modulus < 1, squaring makes it go toward 0.
If the modulus > 1, squaring makes it fall towards infinity.
If the modulus = 1, some points fall to zero, some fall to infinity, and some do neither.
The boundary between the numbers which fall to zero and those which fall to infinity is the Julia-Fatou set.

Julia-Fatou and Mandelbrot


Julia-Fatou

Foley/vanDam Computer Graphics-Principles and Practices, 2nd edition

Foley/vanDam Computer Graphics-Principles and Practices, 2nd edition

Julia Fatou and Mandelbrot cond


 

The shape of the Julia-Fatou set depends on c. The Mandelbrot set is the set of non-diverging points (values of c).
Correct method: compute the Julia sets for all possible c, and color the point c black when the set is connected and white when it is not connected.
Approximate method: for each value of c, start with the complex number z = 0 + 0i and apply x => x^2 + c a finite number of times (say 1000). If after the iterations the value is outside a disk defined by modulus > 100, color the point c white; otherwise color it black.
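A sketch (not from the text) of the approximate test above for a single value of c = cr + ci·i; it returns true when c would be colored black.

#include <complex>

bool inMandelbrotSet (double cr, double ci, int maxIter = 1000)
{
    std::complex<double> c (cr, ci);
    std::complex<double> x (0.0, 0.0);     // start with 0 + 0i
    for (int k = 0; k < maxIter; k++) {
        x = x * x + c;                     // x => x^2 + c
        if (std::abs (x) > 100.0)          // escaped the disk
            return false;                  // color the point c white
    }
    return true;                           // color the point c black
}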

Constructing a deterministic self-similar fractal




Initiator

Given geometric shape




Generator

Pattern which replaces subparts of initiator




Koch Curve
First iteration

Initiator

generator

Fractal dimension


d = fractal dimension: the amount of variation in the structure, a measure of the roughness or fragmentation of the object. Small d: less jagged; large d: more jagged.
For self-similar objects: n·s^d = 1 (some books write this as n·s^(-d) = 1), where s is the scaling factor and n is the number of subparts in the subdivision, so
    d = ln(n) / ln(1/s)
[d = ln(n)/ln(s) if s is instead taken as the number of segments rather than how much the main segment was reduced; i.e. for a line divided into 3 segments, instead of saying each piece is 1/3, say there are 3 segments. Notice that 1/(1/3) = 3.]
If there are different scaling factors s_k:
    sum over k = 1..n of s_k^d = 1
Figuring out scaling factors: I prefer n·s^(-d) = 1, i.e. d = ln(n)/ln(s).




Dimension is a ratio of the (new size)/(old size)

Koch's snowflake
Using d = ln(n)/ln(s), where s counts how many identical pieces fit along the original:
  Divide a line into n identical segments: n = s (dimension 1).
  Divide the lines of a square into n identical segments each: get n = s^2 small squares (dimension 2).
  Divide a cube the same way: get n = s^3 small cubes (dimension 3).
For the Koch generator, after division we have 4 segments:
  n = 4 (new segments), s = 3 (the old segment was divided into thirds)
  Fractal dimension: D = ln 4 / ln 3 = 1.262
For your reference, the book method uses the scaling factor directly:
  n = 4 new segments, each reduced by 1/3, so s = 1/3 and d = ln 4 / ln(1/(1/3)) = ln 4 / ln 3.

Sierpinski gasket fractal dimension
Divide each side by 2: this makes 4 triangles, and we keep 3, so n = 3 (we get 3 new triangles from 1 old triangle).
s = 2 (2 new segments from one old segment)
D = ln(3) / ln(2) = 1.585

Fractal dimension

Cube Fractal Dimension




Apply the fractal algorithm:
  Divide each side by 3.
  Now push out the middle face of each cube.
  Now push out the center of the cube.
  We now have 20 cubes where we used to have 1. (Image from the Angel book.)
What is the fractal dimension?
  n = 20, and we have divided each side by 3, so s = 3.
  Fractal dimension: d = ln(20) / ln(3) = 2.727

Language Based Models of generating images


 

Typical alphabet: {A, B, [, ]}
Rules:
  A => AA
  B => A[B]AA[B]
Starting basis: B
Generate words:
  B
  A[B]AA[B]
  AA[A[B]AA[B]]AAAA[A[B]AA[B]]
  ...
Each word represents a sequence of segments in a graph structure; brackets denote branches. (Figure: the branching diagrams of A's and B's corresponding to these words.) Interesting, but I want a tree.

Language Based Models of generating images cond


 

Modify the alphabet: {A, B, [, ], (, )}
Rules:
  A => AA
  B => A[B]AA(B)
  [ ] = left branch, ( ) = right branch
Starting basis: B
Generate words:
  B
  A[B]AA(B)
  AA[A[B]AA(B)]AAAA(A[B]AA(B))
  ...
Each word again represents a sequence of segments in a graph structure, with brackets and parentheses marking the branches. (Figure: the corresponding trees of A's and B's.)

Language Based models have no inherent geometry



A grammar-based model requires both a grammar and a geometric interpretation; generating an object from the word is a separate process. Examples: branches on the tree drawn at upward angles; segments of the tree drawn with successively smaller lengths (the more it branches, the smaller the last branch is); flowers or leaves drawn at terminal nodes.
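A sketch (not from the text) of one rewriting pass for the bracketed grammar above (A => AA, B => A[B]AA(B)); brackets and parentheses are simply copied through.

#include <string>

std::string rewrite (const std::string& word)
{
    std::string next;
    for (char c : word) {
        if (c == 'A')      next += "AA";
        else if (c == 'B') next += "A[B]AA(B)";
        else               next += c;          // '[', ']', '(', ')'
    }
    return next;
}

// Starting from the basis "B":
//   rewrite("B")          gives "A[B]AA(B)"
//   rewrite(rewrite("B")) gives "AA[A[B]AA(B)]AAAA(A[B]AA(B))"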

Grammar and Geometry




Change branch size according to depth of graph

Foley/vanDam Computer Graphics-Principles and Practices, 2nd edition

Particle Systems


A particle system is defined by a collection of particles that evolve over time.
Particles have fluid-like properties: flowing, billowing, spattering, expanding, imploding, exploding.
The basic particle can be any shape: sphere, box, ellipsoid, etc.
Apply probabilistic rules to the particles: generate new particles; change attributes according to age (what color and shape is a particle when detected? does its transparency change over time?); particles die (disappear from the system).
Movement: deterministic or stochastic laws of motion; kinematic, or driven by forces such as gravity.
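A minimal sketch (not from the text) of one particle with age-based death and simple gravity-driven motion, updated once per frame:

struct Particle {
    float x, y, z;          // position
    float vx, vy, vz;       // velocity
    float age, lifetime;    // seconds alive, seconds allowed
    bool  alive;
};

void updateParticle (Particle& p, float dt)
{
    if (!p.alive) return;
    p.vz -= 9.8f * dt;          // gravity (z assumed to be "up")
    p.x  += p.vx * dt;
    p.y  += p.vy * dt;
    p.z  += p.vz * dt;
    p.age += dt;
    if (p.age > p.lifetime)     // the particle dies and disappears
        p.alive = false;
}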

Particle Systems modeling




Used to model fire, fog, smoke, fireworks, trees, grass, waterfalls, water spray.
Grass: model clumps by setting up trajectory paths for the particles.
Waterfall: particles fall from a fixed elevation and are deflected by obstacles as they splash to the ground. E.g. a drop falls, hits a rock, and finishes in the pool; or a drop goes to the bottom of the pool and floats back up.

Physically based modeling

 

Non-rigid objects: rope, cloth, soft rubber ball, jello.
Describe the behavior in terms of external and internal forces.
Approximate the object with a network of point nodes connected by flexible connections, for example springs with spring constant k. For a homogeneous object, all the k's are equal.
Hooke's Law: Fs = -k·x, where x is the displacement and Fs is the restoring force on the spring. (A small sketch follows the figure below.)
Could also model with putty (which doesn't spring back), or with an elastic material (minimize the strain energy).

(Figure: point nodes connected by springs, each with spring constant k.)
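A minimal sketch (not from the text) of a single mass point obeying Hooke's law, integrated with simple Euler time steps:

struct MassPoint { float x, v, m; };     // displacement, velocity, mass

void stepSpring (MassPoint& p, float k, float dt)
{
    float Fs = -k * p.x;                 // restoring force, Hooke's law
    float a  = Fs / p.m;                 // acceleration
    p.v += a * dt;                       // Euler integration
    p.x += p.v * dt;
}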

Turtle Graphics
The turtle can:
  F = move forward a unit
  L = turn left
  R = turn right

 

Stipulate the turtle's step length and the angle of its turns. Equilateral triangle:

E.g. angle = 120: FRFRFR

What if we change the angle to 60 degrees?


Basis: F.  Rule: F => FLFRRFLF

Koch Curve (snowflake)

Example taken from Angel book
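A tiny turtle interpreter sketch (not from the text); it walks a string of F, L, R commands and draws with the moveTo/lineTo helpers sketched earlier (any line-drawing routine would do). The parameters are illustrative.

#include <cmath>
#include <string>

void runTurtle (const std::string& cmds, float startX, float startY,
                float step, float angleDeg)
{
    const float DEG2RAD = 3.14159265f / 180.0f;
    float x = startX, y = startY, heading = 0.0f;    // heading in degrees
    moveTo (x, y);
    for (char c : cmds) {
        if (c == 'F') {                               // forward one unit
            x += step * cos (heading * DEG2RAD);
            y += step * sin (heading * DEG2RAD);
            lineTo (x, y);
        }
        else if (c == 'L') heading += angleDeg;       // turn left
        else if (c == 'R') heading -= angleDeg;       // turn right
    }
}

// Example: runTurtle("FRFRFR", 100, 100, 50, 120) draws an equilateral triangle.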

Using turtle graphics for trees




  

Use push and pop ( [ and ] ) for the side branches.
F => F[RF]F[LF]F, angle = 27 degrees.
Second iteration (spaces added only for readability):
F[RF]F[LF]F [RF[RF]F[LF]F] F[RF]F[LF]F [LF[RF]F[LF]F] F[RF]F[LF]F
