Reading
Required:
Hill Chapter 7
OpenGL Programming Guide (the red book) Chapter 3
3D Geometry Pipeline
Image-coordinate system
(2D device coordinate system, screen coordinates, raster coordinates)
OpenGL viewing
Modelview transformation
Modeling transformation:
model (local) coordinates → world coordinates
Viewing transformation:
world coordinates → eye coordinates
OpenGL viewing
Viewing transformation
gluLookAt(eye.x, eye.y, eye.z, center.x, center.y,
center.z, up.x, up.y, up.z)
Viewing direction:
center − eye
Up is the upward direction
Viewing direction and up vector together define the eye
coordinate system
X axis points to the right of viewer
Y axis points upward
Z axis points to the back of viewer
Generates a matrix, which is postmultiplied onto the top-of-the-stack matrix on the modelview stack
Thus, gluLookAt must be called before any modeling
transformations
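As a sketch of the matrix gluLookAt effectively builds (a plain-Python reconstruction of the math, not the GLU source; the function and helper names here are mine):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def look_at(eye, center, up):
    """Viewing matrix: rotate world axes onto the eye axes,
    then translate the eye to the origin (row-major 4x4)."""
    f = normalize([center[i] - eye[i] for i in range(3)])  # viewing direction: center - eye
    s = normalize(cross(f, up))                            # eye-space x axis (right of viewer)
    u = cross(s, f)                                        # eye-space y axis (upward)
    return [
        [ s[0],  s[1],  s[2], -sum(s[i]*eye[i] for i in range(3))],
        [ u[0],  u[1],  u[2], -sum(u[i]*eye[i] for i in range(3))],
        [-f[0], -f[1], -f[2],  sum(f[i]*eye[i] for i in range(3))],  # z points to the back
        [ 0.0,   0.0,   0.0,   1.0],
    ]

def apply(m, p):
    """Transform a 3D point by the 4x4 matrix (w = 1)."""
    p = list(p) + [1.0]
    return [sum(m[r][c] * p[c] for c in range(4)) for r in range(3)]
```

For an eye at (0, 0, 5) looking at the origin with up = +y, the eye maps to the origin and the look-at point lands on the negative z axis, as the eye coordinate system above requires.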
OpenGL viewing
Default OpenGL viewing (if no gluLookAt is specified)
Eye is at origin of world space
Looking down the negative z axis of world space
Up vector is positive y axis
The viewing transformation matrix is identity matrix
(i.e. eye coordinate system = world coordinate
system)
Projections
Projections transform points in n-space to m-space, where
m < n.
In 3D, we map points from 3-space to the projection plane
(PP) (a.k.a., image plane) along projectors (a.k.a., viewing
rays) emanating from the center of projection (COP):
Parallel projections
For parallel projections, we specify a direction of
projection (DOP) instead of a COP.
There are two types of parallel projections:
Orthographic projection --- DOP perpendicular to PP
Oblique projection --- DOP not perpendicular to PP
Parallel projections
Orthographic projections along the z-axis in 3D (onto the plane z = k):

    [x]   [1 0 0 0] [x]
    [y] = [0 1 0 0] [y]
    [k]   [0 0 0 k] [z]
    [1]   [0 0 0 1] [1]

Ignoring the z component:

    [x]   [1 0 0 0] [x]
    [y] = [0 1 0 0] [y]
    [1]   [0 0 0 1] [z]
                    [1]
We can use shear to line things up when doing an oblique projection
We often keep the initial z value around for later use. Why?
Perspective projection
How to represent the perspective projection as a matrix
equation?
With the center of projection at the origin and the projection plane at z = d:

    [x*]   [1 0  0  0] [x]   [x  ]
    [y*] = [0 1  0  0] [y] = [y  ]
    [w*]   [0 0 1/d 0] [z]   [z/d]
                       [1]

Dividing by w*:

    [x*/w*]   [d·x/z]
    [y*/w*] = [d·y/z]
    [w*/w*]   [1    ]
Projective normalization
After applying the perspective transformation and
dividing by w, we are free to do a simple parallel
projection to get the 2D image.
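The two steps can be sketched in Python (function name mine; projection plane at z = d as in the matrix above):

```python
def perspective_project(p, d=1.0):
    """Apply the perspective matrix (plane at z = d), then
    divide by w* = z/d. After division, z equals d for every
    point, so dropping z is the final parallel projection."""
    x, y, z = p
    xs, ys, ws = x, y, z / d   # matrix rows: x, y pass through; w* = z/d
    return (xs / ws, ys / ws, d)
```

Note how points farther from the eye project closer to the axis: (2, 4, 8) lands at half the image coordinates of (2, 4, 4).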
Vanishing points
What happens to two parallel lines that are not parallel
to the projection plane?
Think of train tracks receding into the horizon
The equation for a line is l(t) = q + t·v
After projection, x = d·(q_x + t·v_x)/(q_z + t·v_z); letting t go to
infinity, this approaches d·v_x/v_z (and similarly for y):
We get a point! Its position depends only on the direction v, not on q
Each set of parallel lines intersects at a vanishing point
on the Projection Plane.
Q: how many vanishing points are there?
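A quick numerical check of the limit above (hypothetical helper names; projection plane at z = d = 1, two parallel lines with different base points q but the same direction v):

```python
def project(p, d=1.0):
    # perspective projection onto the plane z = d
    x, y, z = p
    return (d * x / z, d * y / z)

def point_on_line(q, v, t):
    # parametric line l(t) = q + t*v
    return tuple(q[i] + t * v[i] for i in range(3))

v = (1.0, 0.0, 1.0)                          # shared direction, not parallel to the PP
q1, q2 = (0.0, -1.0, 1.0), (5.0, 2.0, 1.0)   # two distinct parallel lines
p1 = project(point_on_line(q1, v, 1e6))
p2 = project(point_on_line(q2, v, 1e6))
# both approach the same limit (d*v_x/v_z, d*v_y/v_z) = (1, 0)
```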
Summary
What you should take away from this lecture:
The meaning of all the boldfaced terms
An appreciation for the various coordinate systems used
in computer graphics
How the perspective transformation works
How we use homogeneous coordinates to represent
perspective projections
The classification of different types of projections
The concept of vanishing points
The properties of perspective transformations
OpenGL projection
glOrtho(), gluPerspective() or glFrustum()
Each produces a matrix that is multiplied onto the top of
the projection matrix stack
All geometry objects are already transformed to the eye
coordinate system before projection transformation is
applied
The parameters of these functions are with respect to
the eye coordinate system
The parameters define 6 clipping planes
To simplify clipping, the viewing space is transformed
into a canonical view volume (all coordinates in [-1,
+1])
The frustum transformation first maps eye coordinates to clip coordinates:

    x* = n·x,   y* = n·y,   z* = a·z + b,   w* = −z

Then by perspective division:

    x*/w* = n·x/(−z)
    y*/w* = n·y/(−z)
    z*/w* = (a·z + b)/(−z)

Requiring z*/w* = −1 at the near plane (z = −n) and z*/w* = +1 at the far
plane (z = −f) gives

    a = (n + f)/(n − f),   b = 2·f·n/(n − f)

Adding the scale that maps x and y into [−1, +1] yields (for a symmetric
frustum, l = −r, b = −t):

              [ 2n/(r−l)     0           0              0       ]
    M_persp = [    0      2n/(t−b)       0              0       ]
              [    0         0      (n+f)/(n−f)   2·f·n/(n−f)   ]
              [    0         0          −1              0       ]

A point [x y z 1]ᵀ lies inside the canonical view volume when

    −1 ≤ x*/w* ≤ 1,   −1 ≤ y*/w* ≤ 1,   −1 ≤ z*/w* ≤ 1

i.e.  −w* ≤ x* ≤ w*,   −w* ≤ y* ≤ w*,   −w* ≤ z* ≤ w*
But these tests apply to a single point, not to a whole triangle --- a triangle may straddle the view volume, so clipping is performed in clip space as a pre-processing step, before perspective division.
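A sketch of the symmetric-frustum matrix and a check that the near and far planes land at z = ∓1 after perspective division (function names mine; with l = −r and b = −t, the entry 2n/(r−l) simplifies to n/r):

```python
def frustum(r, t, n, f):
    """Symmetric frustum matrix (l = -r, b = -t), row-major 4x4."""
    a = (n + f) / (n - f)
    b = 2.0 * f * n / (n - f)
    return [[n / r, 0.0,   0.0,  0.0],
            [0.0,   n / t, 0.0,  0.0],
            [0.0,   0.0,   a,    b  ],
            [0.0,   0.0,  -1.0,  0.0]]

def to_ndc(m, p):
    """Multiply by the matrix, then perspective-divide by w*."""
    x, y, z, w = (sum(m[row][c] * (list(p) + [1.0])[c] for c in range(4))
                  for row in range(4))
    return (x / w, y / w, z / w)
```

With n = 1 and f = 10, an eye-space point at z = −1 maps to z = −1 in the canonical volume, and a point at z = −10 maps to z = +1.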
Z-buffer
Besides RGB values of a pixel, maintain some notion of
its depth
An additional channel in memory, like alpha
Called z-buffer or depth buffer
Probably the simplest and most widely used (in
hardware, e.g. GeForce cards)
When the time comes to draw a pixel, compare its depth
with the depth of what's already in the framebuffer.
Replace only if it's closer
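The depth-test rule can be sketched in a few lines of Python (a toy 4×4 framebuffer with names of my choosing, not any real API):

```python
W, H = 4, 4
color = [[(0, 0, 0)] * W for _ in range(H)]            # RGB framebuffer
depth = [[float("inf")] * W for _ in range(H)]         # the z-buffer

def draw_pixel(x, y, z, rgb):
    """Write the fragment only if it is closer (smaller z)
    than what the framebuffer already holds."""
    if z < depth[y][x]:
        depth[y][x] = z
        color[y][x] = rgb

draw_pixel(1, 1, 5.0, (255, 0, 0))   # red surface at depth 5: written
draw_pixel(1, 1, 9.0, (0, 0, 255))   # blue surface behind it: rejected
draw_pixel(1, 1, 2.0, (0, 255, 0))   # green surface in front: replaces red
```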
Rasterization
The process of filling in the pixels inside of a polygon is
called rasterization.
During rasterization, the z value and shade s can be
computed incrementally (i.e., quickly, taking advantage of
coherence)
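The incremental idea can be sketched for one scanline span (a hypothetical helper of mine, not a full rasterizer): because z varies linearly across the span, each pixel's depth is the previous one plus a constant step, turning a per-pixel interpolation into a single addition.

```python
def fill_span(x0, x1, z0, z1, set_pixel):
    """Fill one scanline span from x0 to x1, interpolating depth.
    z is updated incrementally: one addition per pixel."""
    if x1 == x0:
        set_pixel(x0, z0)
        return
    dz = (z1 - z0) / (x1 - x0)   # constant per-pixel depth step
    z = z0
    for x in range(x0, x1 + 1):
        set_pixel(x, z)
        z += dz

out = []
fill_span(0, 4, 10.0, 18.0, lambda x, z: out.append((x, z)))
```

The same trick applies to the shade value, and the spans themselves are found incrementally by stepping the triangle edges from one scanline to the next.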