COS3712
May/June 2011
COMPUTER SCIENCE
COMPUTER GRAPHICS
Duration: 2 hours
Total: 70 marks
Examiners:
First: Mr L Aron
Second: Mr C Dongmo
External: Dr P Marais (University of Cape Town)
MEMORANDUM

QUESTION 1 [10]

a) Differentiate between the object-oriented and image-oriented pipeline implementation strategies and discuss the advantages of each approach. What strategy does OpenGL use? [6]
In the object-oriented strategy, vertices are defined by a program and flow through a sequence of modules that transforms them, colours them, determines whether they are visible, and so on. The output is the pixels in a frame buffer.
Advantage: hardware to build an object-based system is fast and relatively inexpensive.
Image-oriented approaches loop over pixels, or rows of pixels called scanlines. For each pixel we work backwards, determining which geometric primitives can contribute to its colour.
Advantages: we need only limited display memory at any time; we can hope to generate pixels at the rate and in the order required to refresh the display; global effects are handled well.
OpenGL uses the object-oriented strategy.
b) Give brief definitions of the following terms in the context of computer graphics:

i) Antialiasing [2]
In the conversion between the analog values of object or world coordinates and colours and the discrete values of screen coordinates and colours, rasterised (rendered) line segments and edges of polygons often become jagged, or pixels that contrast too strongly with their neighbourhood are displayed. This can be prevented by using antialiasing. Antialiasing blends and smooths points, lines, or polygons to get rid of sharp contrasts or other unwanted patterns.
ii) Interpolation [2]
Interpolation is a way of determining the value of some parameter at any point between two endpoints at which the parameter values are known (e.g. the colour at any point between two points, or the normal at any point between two points).
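To make the idea concrete, here is a minimal linear-interpolation sketch in C++ (not part of the original memorandum; the type and function names are hypothetical):

// Linear interpolation of a colour between two endpoints c0 and c1;
// t = 0 gives c0, t = 1 gives c1, and t = 0.5 the colour halfway between.
struct Colour { float r, g, b; };

Colour lerpColour(const Colour& c0, const Colour& c1, float t)
{
    return { c0.r + t * (c1.r - c0.r),
             c0.g + t * (c1.g - c0.g),
             c0.b + t * (c1.b - c0.b) };
}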

QUESTION 2 [6]

Consider the diagram below and answer the question that follows:

Give a matrix which will transform the square ABCD to the square A'B'C'D'. Show all workings.
Hint: Below are the transformation matrices for clockwise and anticlockwise rotation about the z-axis.

Clockwise rotation matrix:

    [  cos θ   sin θ   0   0 ]
    [ -sin θ   cos θ   0   0 ]
    [    0       0     1   0 ]
    [    0       0     0   1 ]

Anticlockwise rotation matrix:

    [  cos θ  -sin θ   0   0 ]
    [  sin θ   cos θ   0   0 ]
    [    0       0     1   0 ]
    [    0       0     0   1 ]



Answer:
Other permutations are possible and hence each answer must be looked at separately. The final matrix is the same.
The small square is translated by (4, 1, 0) and rotated clockwise by 45°.
The small square has length 2 on each side. The large square has length 3, hence the scaling factor is 3/2 on the x and y axes. The three matrices are multiplied as follows:

      T(4, 1, 0)            R(clockwise, 45°)           S(3/2, 3/2, 1)

    [ 1  0  0  4 ]     [  √2/2   √2/2   0   0 ]     [ 3/2   0    0   0 ]
    [ 0  1  0  1 ]  x  [ -√2/2   √2/2   0   0 ]  x  [  0   3/2   0   0 ]
    [ 0  0  1  0 ]     [    0      0    1   0 ]     [  0    0    1   0 ]
    [ 0  0  0  1 ]     [    0      0    0   1 ]     [  0    0    0   1 ]

      [  3√2/4   3√2/4   0   4 ]
   =  [ -3√2/4   3√2/4   0   1 ]
      [     0       0    1   0 ]
      [     0       0    0   1 ]
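For reference only (not required by the question): the same composite transformation could be built with fixed-function OpenGL calls, assuming a hypothetical drawSmallSquare routine that draws the original square ABCD:

// Sketch: OpenGL post-multiplies the current matrix, so the calls appear in
// the same left-to-right order as the matrix product T * R * S above.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(4.0f, 1.0f, 0.0f);
glRotatef(-45.0f, 0.0f, 0.0f, 1.0f); // negative angle = clockwise about the z-axis
glScalef(1.5f, 1.5f, 1.0f);
drawSmallSquare();                   // hypothetical: draws the original square ABCD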

QUESTION 3 [8]

a) Define the term view volume with respect to computer graphics, with reference to both perspective and orthogonal views. [4]

The view volume is analogous to the volume that a real camera would see through its lens (except that it is also limited in distance at the front and back). It is the region of 3D space that is visible from the camera or viewer between two distances.
When using orthogonal (or parallel) projection, the view volume is a rectangular box.
When using perspective projection, the view volume is a frustum and has a truncated-pyramid shape.
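In fixed-function OpenGL (the API used in Question 7), the two kinds of view volume are set up on the projection matrix; the numeric values below are placeholders only:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();

// Orthogonal view volume: a rectangular box given by left, right, bottom, top, near, far.
glOrtho(-10.0, 10.0, -10.0, 10.0, 1.0, 100.0);

// Perspective view volume: a frustum given by a vertical field of view,
// an aspect ratio, and the near and far distances (use one or the other).
// gluPerspective(60.0, 1.0, 1.0, 100.0);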
b) Orthogonal, oblique and axonometric view scenes are all parallel view scenes. Explain the differences between orthogonal, axonometric, and oblique view scenes. [4]

Orthogonal views: projectors are perpendicular to the projection plane, and the projection plane is parallel to one of the principal faces of the object. A single orthogonal view is restricted to one principal face of an object.
Axonometric views: projectors are perpendicular to the projection plane, but the projection plane can have any orientation with respect to the object.
Oblique projection: projectors are parallel but can make an arbitrary angle with the projection plane, and the projection plane can have any orientation with respect to the object.

QUESTION 4 [16]

a) Explain what diffuse reflection is in the real world. [1]

Rough surfaces scatter (reflect) light in all directions.


b) State and explain Lambert's Law using a diagram. [3]

Lambert's Law states that the amount of diffuse light reflected is directly proportional to cos θ, where θ is the angle between the normal at the point of interest and the direction of the light source:

    Rd ∝ cos θ
c) Using Lambert's Law, derive the equation for calculating approximations to diffuse reflection on a computer. [3]

If we consider the direction of the light source (l) and the normal at the point of interest (n) to be unit-length vectors, then cos θ = l · n.
If we add a reflection coefficient kd, representing the fraction of incoming diffuse light that is reflected, we have the diffuse reflection term:

    Id = kd (l · n) Ld,   where Ld is the diffuse intensity of the light source.
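A small C++ sketch of this term (illustrative only; the vector type and names are not from the memorandum, and the max() clamp for surfaces facing away from the light is standard practice rather than part of the derivation above):

#include <algorithm>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Diffuse term Id = kd * (l . n) * Ld, with l and n assumed to be unit length.
float diffuseTerm(const Vec3& l, const Vec3& n, float kd, float Ld)
{
    return kd * std::max(dot(l, n), 0.0f) * Ld;
}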
d) Describe the distinguishing features of ambient, point, spot and distant light sources. [6]
An ambient light source produces light of constant intensity throughout the scene. All objects are illuminated from all sides.
A point light source emits light equally in all directions, but the intensity of the light diminishes with (i.e. proportional to the inverse square of) the distance between the light and the objects it illuminates. Surfaces facing away from the light source are not illuminated.
A spot light source is similar to a point light source except that its illumination is restricted to a cone in a particular direction.
A distant light source is like a point light source except that the rays of light are all parallel.
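For reference, in fixed-function OpenGL these source types are distinguished mainly by the w component of GL_POSITION and by the spot parameters; the values below are placeholders (a sketch, not part of the memorandum):

// Point light: positional (w = 1); intensity can be attenuated with distance.
GLfloat pointPos[] = { 50.0f, 50.0f, 50.0f, 1.0f };
glLightfv(GL_LIGHT0, GL_POSITION, pointPos);
glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.01f);

// Distant (directional) light: w = 0, so only the direction matters.
GLfloat lightDir[] = { 0.0f, 1.0f, 1.0f, 0.0f };
glLightfv(GL_LIGHT1, GL_POSITION, lightDir);

// Spotlight: a positional light whose illumination is restricted to a cone.
GLfloat spotDir[] = { 0.0f, -1.0f, 0.0f };
glLightfv(GL_LIGHT2, GL_POSITION, pointPos);
glLightfv(GL_LIGHT2, GL_SPOT_DIRECTION, spotDir);
glLightf(GL_LIGHT2, GL_SPOT_CUTOFF, 30.0f);

// Ambient light: a constant global term, independent of any particular source.
GLfloat ambient[] = { 0.2f, 0.2f, 0.2f, 1.0f };
glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambient);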

e) Why do Phong-shaded images appear smoother than smooth- (Gouraud-) and flat-shaded images? [3]

In flat shading a polygon is filled with a single colour or shade across its surface. A single normal is calculated for the whole surface, and this determines the colour. In smooth shading a colour is calculated per vertex using the vertex normals, and this colour is then interpolated across the polygon. In Phong shading, the normals at the vertices are interpolated across the surface of the polygon, and the lighting model is then applied at every point within the polygon. Because the normals give the local surface orientation, interpolating the normals across the surface of a polygon makes the surface appear curved rather than flat, hence the smoother appearance of Phong-shaded images.

QUESTION 5 [10]

a) In the case of the z-buffer algorithm for hidden surface removal, is shading performed before or after hidden surfaces are eliminated? Explain. [2]
Shading is performed before hidden surface removal. In the z-buffer algorithm polygons are first rasterised and shaded; then, for each fragment of the polygon, the depth value is determined and compared to the z-buffer, and fragments that fail the comparison are discarded.
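A rough per-fragment sketch of that order of operations (illustrative only; the types and names are hypothetical):

struct Fragment { int x, y; float z; unsigned colour; };

// The fragment arrives already shaded; only then is its depth compared with
// the value stored in the z-buffer (smaller z meaning "closer" here).
void zBufferTest(const Fragment& f, float* zbuffer, unsigned* framebuffer, int width)
{
    int idx = f.y * width + f.x;
    if (f.z < zbuffer[idx])
    {
        zbuffer[idx] = f.z;           // record the new nearest depth
        framebuffer[idx] = f.colour;  // keep the already-computed shade
    }
    // otherwise the shaded fragment is simply discarded
}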

b) The simplest scan-conversion algorithm for line segments has become known as the DDA algorithm. Give the algorithm in pseudocode form and explain briefly how it is derived. [6]

The most simple and straightforward approach for drawing a line is to use the line equation y = mx + n, where m is the line's slope, defined as m = (y1 − y0) / (x1 − x0). As you can see, the equation has two constants (m and n) and two variables (x and y). By looping over and incrementing the x variable, the y variable is calculated.
DDA eliminates the recalculation of the full y value by assuming that the x value is always incremented by 1, since we are drawing on the computer screen. So if we take the start point (x0, y0) and the end point (x1, y1), then Δx = 1 for each step, and since m = Δy / Δx we get Δy = m; that is, y(k+1) = y(k) + m.
Algorithm:

float y = y0;
for (int i = x0; i <= x1; i++)
{
    write_pixel(i, round(y), line_color);
    y += m;
}

Although x is an integer, y is not, because m is a floating-point number; hence y needs to be rounded off.

c) Bresenham derived a line-rasterization algorithm that has become the standard approach used in hardware and software rasterizers, as opposed to the simpler DDA algorithm. Why is this so? [2]

The DDA algorithm is efficient and can be coded easily, but it requires a floating-point addition for each pixel generated. Bresenham's algorithm avoids all floating-point calculations.
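For comparison, a sketch of the integer-only algorithm for the first octant (0 ≤ slope ≤ 1), using the same write_pixel routine assumed in the DDA answer above:

// Bresenham line for the first octant; the decision variable d is updated
// with integer additions only (write_pixel is assumed, as in the DDA sketch).
void bresenhamLine(int x0, int y0, int x1, int y1, int line_color)
{
    int dx = x1 - x0;
    int dy = y1 - y0;
    int d = 2 * dy - dx;   // initial decision value
    int y = y0;
    for (int x = x0; x <= x1; x++)
    {
        write_pixel(x, y, line_color);
        if (d > 0) { y += 1; d -= 2 * dx; }
        d += 2 * dy;
    }
}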

QUESTION 6 [8]

a) Discuss the difference between the RGB colour model and the indexed colour model with respect to the depth of the frame (colour) buffer. [4]

In both models, the number of colours that can be displayed depends on the depth of the frame (colour) buffer. The RGB model is used when a lot of memory is available, e.g. 12 or 24 bits per pixel. These bits are divided into three groups, representing the intensity of red, green and blue at the pixel, respectively. The RGB model becomes unsuitable when the depth is small, because the shades become too distinct/discrete.
The indexed colour model is used where memory in the colour buffer is limited. The bits per pixel are used to index into a colour-lookup table in which any shades of any colours can be specified (depending only on the colours that the monitor can show).
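A tiny sketch of the idea behind a colour-lookup table (illustrative only; names and sizes are hypothetical):

struct RGB { unsigned char r, g, b; };

// With an 8-bit colour buffer each pixel stores an index into a 256-entry
// palette; the palette entries themselves can be full-precision RGB values.
RGB palette[256];
unsigned char indexedBuffer[480][640];

RGB lookupPixel(int row, int col)
{
    return palette[indexedBuffer[row][col]];
}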
b) Define the following terms and briefly explain their use in computer graphics.

i) Bump mapping [2]
ii) Mipmapping [2]

i) Bump mapping
Angel, Section 8.12.2: Whereas texture maps add detail by mapping patterns onto surfaces, bump maps distort the normal vectors during the shading process to make the surface appear to have small variations in shape, such as bumps or depressions.
ii) Mipmapping
Mipmapping stores, alongside the full-resolution texture, a sequence of prefiltered copies at progressively lower resolutions; during rendering the version closest in size to the projected surface is used, which reduces aliasing and the cost of filtering textures on distant objects.
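In fixed-function OpenGL, mipmaps can be requested as sketched below (illustrative only; WIDTH, HEIGHT and image are assumed to be as in the setTexture function of Question 7(c)):

// Build the chain of prefiltered images and select a mipmap-aware minification filter.
gluBuild2DMipmaps(GL_TEXTURE_2D, 3, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, image);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);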

QUESTION 7 [12]

Consider the following program that draws two walls of a room that meet at a corner. In the
middle of the room is a rotating square. Answer the questions that follow. (Do not spend too
much time studying the code).
1.  #include <gl/glut.h>
2.  #include <fstream>
3.  #include <cmath>
4.  float eye_pos[3] = {100, 50, -100};
5.  GLfloat angle = 0;
6.  bool rotating = true;
7.  bool clockwise = true;
8.  void drawPolygon()
9.  {
10.     glColor3f(1.0, 1.0, 0.0);
11.     glBegin(GL_POLYGON); // Draw square
12.     glVertex3d(20, 20, 0);
13.     glVertex3d(20, -20, 0);
14.     glVertex3d(-20, -20, 0);
15.     glVertex3d(-20, 20, 0);
16.     glEnd();
17. }
18. void drawRoom()
19. {
20.     glColor3f(1.0, 1.0, 1.0); // left wall
21.     glBegin(GL_POLYGON);
22.     glVertex3d(0, 0, 0);
23.     glVertex3d(0, 100, 0);
24.     glVertex3d(0, 100, -100);
25.     glVertex3d(0, 0, -100);
26.     glEnd();
27.     glBegin(GL_POLYGON); // right wall
28.     glVertex3d(0, 0, 0);
29.     glVertex3d(100, 0, 0);
30.     glVertex3d(100, 100, 0);
31.     glVertex3d(0, 100, 0);
32.     glEnd();
33. }
34. void display()
35. {   glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
36.     glMatrixMode(GL_MODELVIEW);
37.     glLoadIdentity();
38.     gluLookAt(eye_pos[0], eye_pos[1], eye_pos[2], 0.0, 50.0, 0.0, 0.0, 1.0, 0.0);
39.     drawRoom();
40.     glTranslatef(40, 40, -40);
41.     glRotatef(angle, 0, 1, 0);
42.     drawPolygon();
43.     glFlush();
44.     glutSwapBuffers(); }
45. void idle()
46. {
47.     if (rotating)
48.     {   if (!clockwise) // rotate counter-clockwise
49.         {   angle -= 0.5;
50.             if (angle < -360.0)
51.                 angle += 360.0; }
52.         else // rotate clockwise
53.         {
54.             angle += 0.5;
55.             if (angle > 360.0)
56.                 angle -= 360.0; }
57.     }
58.     glutPostRedisplay(); }
59. void myInit()
60. {
61.     glPolygonMode(GL_FRONT, GL_FILL);
62.     glMatrixMode(GL_PROJECTION);
63.     gluPerspective(90, -1, 10, 210);
64. }
65. int main(int argc, char** argv)
66. {
67.     glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
68.     glutInitWindowSize(700, 700);
69.     glutCreateWindow("Room with floating shapes");
70.     myInit();
71.     glutDisplayFunc(display);
72.     glutIdleFunc(idle);
73.     glutMainLoop();
74.     return 0;
75. }

a) Say the following function is inserted in the program, and is called between lines 64 and 65:
void setupMenus()
{
    int main_menu = glutCreateMenu(mainMenuCallback);
    glutAddMenuEntry("Rotate Clockwise", 0);
    glutAddMenuEntry("Rotate anti-clockwise", 1);
    glutAddMenuEntry("Pause Rotation", 2);
    glutAddMenuEntry("Quit", 3);
    glutAttachMenu(GLUT_RIGHT_BUTTON);
}

Write the callback function for this menu to do the following:

- Rotate square clockwise
- Rotate square anti-clockwise
- Pause rotation of square
- Exit program

Hint: The rotation of the square is controlled by the boolean variables rotating and clockwise (see lines 6 and 7, as well as the function idle).


void mainMenuCallback(int id)
{
    switch (id)
    {
    case 0:
        rotating = true;
        clockwise = true;
        break;
    case 1:
        rotating = true;
        clockwise = false;
        break;
    case 2:
        rotating = false;
        break;
    case 3:
        exit(0);
    }
}
b) Write a keyboard callback function that will allow the user to zoom in towards the corner of the room and back out again. Use the < key to zoom in and the > key to zoom out. You do not need to set maximum or minimum values for the camera position. The y value of the camera position will remain constant, and the zoom increment can be set to 3.0. You can assume that glutKeyboardFunc(keyboard) is added to the main function.
Hint: Take note of the initial position of the camera in line 4 relative to the visible corner of the room.

void keyboard(unsigned char key, int x, int y)
{
    switch (key)
    {
    case '<': // zoom in: move the eye towards the corner
        eye_pos[0] -= 3.0; // increment value can be different
        eye_pos[2] += 3.0;
        break;
    case '>': // zoom out: move the eye away from the corner
        eye_pos[0] += 3.0;
        eye_pos[2] -= 3.0;
        break;
    }
    return;
}

c) Consider the following function for loading an image from a file into an array, and for specifying it as a texture:
void setTexture()
{
    const int WIDTH = 512;
    const int HEIGHT = 1024;
    GLubyte image[WIDTH][HEIGHT][3];
    FILE * fd;
    fd = fopen("texture.bmp", "rb");
    fseek(fd, 55, SEEK_SET); // Move file pointer past header info
    fread(&image, 1, WIDTH*HEIGHT*3, fd);
    fclose(fd);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, 3, WIDTH, HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
}

The file texture.bmp contains a 512x1024 image. If this function is inserted in the above program (and called in the main function just before myInit is called), give the statements that need to be inserted in the drawRoom function (and state where) to apply the texture to the surface of the right wall of the room.

In the function drawRoom, insert the following calls of glTexCoord2f before the respective calls of glVertex3d:

glTexCoord2f(0, 0); // after line 27
glTexCoord2f(0, 1); // after line 28
glTexCoord2f(1, 1); // after line 29
glTexCoord2f(1, 0); // after line 30
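Putting this together, the right-wall block of drawRoom (lines 27 to 32) would then read as sketched below. Texturing would also have to be enabled somewhere, e.g. with glEnable(GL_TEXTURE_2D); the memorandum does not state where, so the enable/disable calls here are an assumption:

glEnable(GL_TEXTURE_2D);               // assumption: enable texturing for this wall
glBegin(GL_POLYGON);                   // right wall
glTexCoord2f(0, 0); glVertex3d(0, 0, 0);
glTexCoord2f(0, 1); glVertex3d(100, 0, 0);
glTexCoord2f(1, 1); glVertex3d(100, 100, 0);
glTexCoord2f(1, 0); glVertex3d(0, 100, 0);
glEnd();
glDisable(GL_TEXTURE_2D);              // assumption: keep the rest of the scene untextured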

