
Than Lwin Aung Introduction to 3D Graphic

Introduction to 3D Graphics
Nowadays, 3D graphics is everywhere, from computer simulations and computer games to 3D
imaging. Rendering 3D objects onto a 2D screen, such as a monitor or projector, requires several
processes: geometric representation and processing, shading, and rasterization. Here I am not
following a specific graphics pipeline, such as Direct3D or OpenGL; I am simply describing general
graphics processing from a developer's perspective. (For your information, I attach both graphics
pipelines at the end.)

Geometric Representation and Processing


This stage is primarily concerned with drawing geometric surfaces and 3D objects. Every 3D object is
made up of surfaces. For example, the following cube is made up of 6 surfaces.

The question is how we define a surface in a 3D coordinate system. Geometrically, a triangle is the
simplest form of surface, and by combining triangles we can build every other form of surface. For
example:

Simply enough, a triangle is made up of 3 points. The following figure shows how each point is defined
in 3D coordinates.


We call the points of the triangle vertices. In fact, a vertex stores more information than just the
position of the point: it also stores color, the normal vector at the point, and texture coordinates.
Therefore a vertex stores position, color, normal, and texture. Position can be represented as a 3D
vector <x,y,z>; color as RGBA (red, green, blue, and alpha); the normal as another 3D vector <x,y,z>;
and the texture coordinates as a 2D vector <u,v>.
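As a sketch, the four attributes could be grouped into a small data structure (a hypothetical Python layout; real APIs such as Direct3D and OpenGL define their own vertex formats):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Vertex:
    # Position of the point in 3D model space.
    position: Tuple[float, float, float]
    # RGBA color, each channel in [0, 1].
    color: Tuple[float, float, float, float]
    # Unit vector perpendicular to the surface at this point.
    normal: Tuple[float, float, float]
    # 2D texture coordinates (u, v), typically in [0, 1].
    uv: Tuple[float, float]

# One corner of a red triangle, facing the +z direction.
v = Vertex(position=(0.0, 1.0, 0.0),
           color=(1.0, 0.0, 0.0, 1.0),
           normal=(0.0, 0.0, 1.0),
           uv=(0.5, 1.0))
```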

The question here is why we only need to define the color information at the vertices of the
triangle. What about the points inside the triangle? Well, we interpolate the color and texture
information for the points between vertices. For example, in the following triangle, we only define the
colors at the 3 vertices, and the color is interpolated between them.
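The usual way to do this interpolation is with barycentric coordinates: an interior point's color is a weighted blend of the three vertex colors, with weights given by sub-triangle areas. A minimal 2D sketch (function names are my own):

```python
def interpolate_color(p, a, b, c, col_a, col_b, col_c):
    """Interpolate vertex colors at point p inside triangle (a, b, c)
    using barycentric coordinates (2D case for simplicity)."""
    def edge(p0, p1, p2):
        # Twice the signed area of triangle (p0, p1, p2).
        return ((p1[0] - p0[0]) * (p2[1] - p0[1])
                - (p1[1] - p0[1]) * (p2[0] - p0[0]))

    area = edge(a, b, c)
    # Each weight is the area of the sub-triangle opposite a vertex.
    wa = edge(p, b, c) / area
    wb = edge(p, c, a) / area
    wc = edge(p, a, b) / area
    return tuple(wa * ca + wb * cb + wc * cc
                 for ca, cb, cc in zip(col_a, col_b, col_c))

# The centroid blends the three vertex colors equally.
red, green, blue = (1, 0, 0), (0, 1, 0), (0, 0, 1)
c = interpolate_color((1.0, 1.0), (0, 0), (3, 0), (0, 3), red, green, blue)
# -> approximately (1/3, 1/3, 1/3)
```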

So the color information of the vertices is straightforward to understand. Next is the normal
information. The normal vector represents the vector perpendicular to the triangle surface. Since 3
vertices form a triangle surface, we can calculate the normal vector by taking the cross product of two
edge vectors of the triangle. For example:

Normal = (Vertex2 − Vertex1) × (Vertex3 − Vertex1)

Why do we need a normal vector for the surface? Well, the answer is: for a lot of things, although the
primary purpose is to calculate light reflection, refraction, and shading. We also need the normal
vector to determine the orientation of the surface.
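A small sketch of the edge-vector cross product just described (helper names are my own):

```python
import math

def surface_normal(v1, v2, v3):
    """Unit normal of triangle (v1, v2, v3) via the cross product
    of two edge vectors, Normal = (v2 - v1) x (v3 - v1)."""
    e1 = tuple(b - a for a, b in zip(v1, v2))  # edge v1 -> v2
    e2 = tuple(b - a for a, b in zip(v1, v3))  # edge v1 -> v3
    n = (e1[1] * e2[2] - e1[2] * e2[1],
         e1[2] * e2[0] - e1[0] * e2[2],
         e1[0] * e2[1] - e1[1] * e2[0])
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# A triangle lying in the xy-plane has its normal along the z-axis.
n = surface_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))  # -> (0.0, 0.0, 1.0)
```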

The last piece of information each vertex stores is the texture information. What is a texture, by the
way? A texture is in fact nothing more than an image, and we use texture mapping to paste that image
onto the 3D surface. Texture mapping saves us a lot of 3D modeling work. How do we do it? It is
actually simple. Before getting to texture mapping, let me ask you a question: are you familiar with
the world map and the globe? The world map is 2D and the globe is 3D, right? How come? We in fact
project the 2D surface onto the 3D surface, and vice versa. This is called texture mapping.
Mathematically:

<x,y,z> = R(<u,v>), where R is a mapping function that transforms 2D coordinates (u,v) into 3D
coordinates (x,y,z). (For most surfaces, such as a sphere, R is not linear.) There are various texture
transformation functions depending on the surface.
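As one concrete example of such a function R, here is a sketch of the world-map-to-globe mapping, which wraps (u,v) coordinates onto a sphere (my own parameterization; conventions vary):

```python
import math

def sphere_point(u, v, radius=1.0):
    """Map texture coordinates (u, v) in [0, 1] x [0, 1] onto a sphere
    of the given radius -- the 'world map onto globe' mapping. Note
    that this R is not linear: it involves sines and cosines."""
    theta = u * 2.0 * math.pi   # longitude around the sphere
    phi = v * math.pi           # latitude, 0 = north pole
    x = radius * math.sin(phi) * math.cos(theta)
    y = radius * math.cos(phi)
    z = radius * math.sin(phi) * math.sin(theta)
    return (x, y, z)

# (u, v) = (0, 0.5) lands on the equator at the +x axis.
p = sphere_point(0.0, 0.5)  # -> approximately (1.0, 0.0, 0.0)
```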


Once we understand the vertices and their position, color, normal, and texture, we can draw 3D
objects by creating vertices and connecting them together. In fact, there are a lot of 3D modeling tools
available to assist you in drawing different 3D objects.

Now suppose we have a 3D object, my cube, which looks like this:

What are we going to do with it? Well, we have to put it into the 3D world. The 3D world is the world
that provides the absolute frame of reference (a space, really, but "space" feels too abstract) for the
different 3D models. From now on, I will use the term model to describe a 3D object. The world
provides a common frame of reference for the models, but each model also has its own frame of
reference: it can rotate, spin, and shrink within it. In the following figure, my cube is placed in the
world space along with another cube, but my cube is rotated in its own space.

We now have 2 spaces: model space and world space. In fact, the world space not only provides a
place to put the 3D models but also holds other things, such as the camera (eye) and the light source.
Why do we need the camera and the light source? Well, without them, how would we see our
beautiful 3D objects?
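To make the two spaces concrete, here is a sketch of placing a model into the world: a point is first rotated in the model's own frame, then translated to the model's position in world space (a hypothetical example; real pipelines express both steps as 4x4 matrices):

```python
import math

def rotate_y(point, angle):
    """Rotate a point about the model's own y-axis (model space)."""
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = point
    return (c * x + s * z, y, -s * x + c * z)

def model_to_world(point, angle, world_position):
    """Rotate in model space first, then translate into world space."""
    rx, ry, rz = rotate_y(point, angle)
    tx, ty, tz = world_position
    return (rx + tx, ry + ty, rz + tz)

# A cube corner, spun 90 degrees in its own space, then placed at (5, 0, 0).
p = model_to_world((1.0, 1.0, 0.0), math.pi / 2, (5.0, 0.0, 0.0))
# -> approximately (5.0, 1.0, -1.0)
```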


So far, we are still in our 3D world. But since the final result has to appear on a 2D display, we have to
transform the 3D models into 2D space. This is where the camera and the projection come in. The
camera is nothing more than an eye: the point from which we see our 3D world.

We can freely move the camera in the world space and rotate it about its own axes. Now we have 3
spaces: model space, world space, and camera space (or view space). By changing the position of the
camera, we see different parts of our 3D world. But what we see through the camera is 2D, like a
photo. So the camera plays the important role of transforming the 3D world into 2D space. How does
it do that? The answer is: through the projection.
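A standard way to move from world space into camera space is the look-at transform: express each point in the camera's own basis of right, up, and forward vectors. A sketch (helper names are my own):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def world_to_view(point, eye, target, up=(0.0, 1.0, 0.0)):
    """Express a world-space point in the frame of a camera at `eye`
    looking at `target` -- a standard look-at transform."""
    forward = normalize(tuple(t - e for t, e in zip(target, eye)))
    right = normalize(cross(forward, up))
    true_up = cross(right, forward)
    rel = tuple(p - e for p, e in zip(point, eye))
    # Project onto the camera's basis; the camera looks down -z.
    return (dot(rel, right), dot(rel, true_up), -dot(rel, forward))

# A point at the world origin, viewed from (0, 0, 5) looking at the
# origin, sits 5 units down the view-space -z axis.
p = world_to_view((0, 0, 0), eye=(0, 0, 5), target=(0, 0, 0))
# -> approximately (0.0, 0.0, -5.0)
```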

In the figure, the red rectangle is called the far plane, the green one the near plane, and the black one
the projection plane. The projection plane is like a film on which the 3D models are displayed
mathematically. The far plane defines the background of the display and the near plane the
foreground. Let me explain a little. There is a difference between our 3D world and the real world. In
the real world, the far plane lies at infinity, which means we can see objects from arbitrarily far away
as long as the light rays from them reach our eyes. Our 3D world, however, is not infinite, so the far
plane defines the horizon of the display. Likewise, the near plane defines the closest distance at which
we can see. (In the real world, if an object is too close to us, we cannot focus on it. Does that make
sense?) All the 3D models between the near plane and the far plane are projected onto the projection
plane. The mapping from the near plane to the far plane is distorted (it is non-linear), thereby creating
a perspective view of the 3D world. By changing the sizes and distances of the far and near planes, we
can form different perspective projections. This is our projection space. Now we have 4 spaces: model
space, world space, view space, and projection space.
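The perspective projection just described can be sketched as follows, taking a view-space point to normalized device coordinates. The division by depth is the non-linear distortion that produces the perspective view (OpenGL-style conventions assumed here; Direct3D maps depth to [0, 1] instead):

```python
import math

def project_perspective(point_view, fov_y_deg=60.0, aspect=16 / 9,
                        near=0.1, far=100.0):
    """Perspective-project a view-space point (camera looking down -z)
    to normalized device coordinates: x, y in [-1, 1] for visible
    points, z in [-1, 1] between the near and far planes."""
    x, y, z = point_view
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    # The perspective divide: more distant points (larger -z) shrink
    # toward the center of the image.
    ndc_x = (f / aspect) * x / -z
    ndc_y = f * y / -z
    ndc_z = ((far + near) + (2.0 * far * near) / z) / (far - near)
    return (ndc_x, ndc_y, ndc_z)

# A centered point exactly on the near plane maps to NDC (0, 0, -1).
p = project_perspective((0.0, 0.0, -0.1), near=0.1, far=100.0)
```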


All these spaces define the geometric representation of the 3D models and surfaces, which will be
further processed in Shading and Rasterization.

Shading
Shading is primarily concerned with assigning color to our 3D objects. What a shader does is colorize
and shade our 3D models so that they look more aesthetic. A typical shader performs light reflection,
refraction, shadowing, surface smoothing, texture mapping, and so on. Shaders can be divided into 3
main groups: per-triangle, per-vertex, and per-pixel shading. In per-triangle shading, the shader
assigns a color to the whole triangle surface; in per-vertex shading, it assigns a color to each vertex;
and in per-pixel shading, it assigns a color to each pixel. In fact, we can write different shader
programs to perform different shading effects.
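As an example of per-vertex shading, here is a sketch of simple diffuse (Lambertian) lighting, in which a vertex's color is scaled by how directly its normal faces the light (function name and parameters are my own):

```python
def shade_vertex(base_color, normal, light_dir, light_intensity=1.0):
    """Per-vertex diffuse (Lambertian) shading: a surface is brightest
    when it faces the light head-on. `normal` and `light_dir` are
    assumed to be unit vectors; `light_dir` points from the surface
    toward the light."""
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    r, g, b, a = base_color
    k = light_intensity * ndotl
    return (r * k, g * k, b * k, a)  # alpha is left untouched

# A surface facing the light keeps its full color...
full = shade_vertex((1.0, 0.0, 0.0, 1.0), (0, 0, 1), (0, 0, 1))
# ...while a surface facing away from the light goes black.
dark = shade_vertex((1.0, 0.0, 0.0, 1.0), (0, 0, 1), (0, 0, -1))
```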

Rasterization
Rasterization is primarily concerned with rendering the 2D graphic objects on the screen. It is in fact
the final phase of the graphics pipeline. The shader produces colored geometric objects to be
rendered on the 2D screen; rasterization computes the color information at each pixel and puts it into
the frame buffer, from which the graphics output device displays the final result.
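A minimal sketch of this final step: test every pixel center against the triangle's three edges and write the covered pixels into a frame buffer (a naive loop for clarity; real rasterizers are heavily optimized):

```python
def rasterize_triangle(a, b, c, color, width, height):
    """Fill the pixels covered by 2D triangle (a, b, c) into a frame
    buffer, using an edge-function (half-space) test per pixel."""
    def edge(p0, p1, x, y):
        # Signed area test: which side of edge p0->p1 is (x, y) on?
        return ((p1[0] - p0[0]) * (y - p0[1])
                - (p1[1] - p0[1]) * (x - p0[0]))

    # Frame buffer: one RGB color per pixel, initialized to black.
    frame = [[(0, 0, 0) for _ in range(width)] for _ in range(height)]
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(b, c, px, py)
            w1 = edge(c, a, px, py)
            w2 = edge(a, b, px, py)
            # Inside if on the same side of all three edges.
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                frame[y][x] = color
    return frame

# Fill a red triangle covering the upper-left half of an 8x8 buffer.
frame = rasterize_triangle((0, 0), (8, 0), (0, 8), (255, 0, 0), 8, 8)
```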



Direct3D Graphics Pipeline Architecture
