3-D Objects
Objects are virtual entities in a continuous environment. They include points, lines, and shapes in 2-D, and volume in 3-D. 3-D modeling involves representation, position, manipulation, lighting, and rendering.
Co-ordinate systems
The object is located in the World Coordinate System (WCS), which is normally a right-handed Cartesian system. The camera defines its own Camera Coordinate System (CCS).
[Figure: the camera's left-handed view-point system versus the right-handed WCS.]
Operations on the OCS
Each object has an anchor point (origin), and all its vertices are defined in the Object Coordinate System (OCS). Matrix operations can be performed on the OCS.
[Figure: an object with vertices (x1,y1,z1), (x2,y2,z2) rotated (roll) about its anchor point.]
Deformation
Deformation cannot be done by a linear matrix operation. General principle:
x' = FX(x, y, z)
y' = FY(x, y, z)
z' = FZ(x, y, z)
Tapering
x' = xF1(z)
y' = yF2(z)
z' = z
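The tapering equations above can be sketched in a few lines. This is a minimal illustration, assuming a simple linear taper profile F1(z) = F2(z) = 1 - k*z; the profile functions and the taper rate k are illustrative choices, not fixed by the slides.

```python
# Tapering deformation: scale x and y by a z-dependent factor, leave z
# unchanged. The linear profile F1(z) = F2(z) = 1 - k*z is an assumption.

def taper(x, y, z, k=0.1):
    """Apply x' = x*F1(z), y' = y*F2(z), z' = z."""
    f = 1.0 - k * z          # F1(z) = F2(z) = 1 - k*z
    return x * f, y * f, z

# A vertex at z = 0 is unchanged; higher vertices shrink toward the axis.
print(taper(2.0, 4.0, 0.0))  # (2.0, 4.0, 0.0)
print(taper(2.0, 4.0, 5.0))  # (1.0, 2.0, 5.0)
```

Any other pair of profile functions (e.g. quadratic) can be substituted for `f` without changing the structure.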
Twisting
Bending (bend rate k, bend centre y0, bend angle θ):
θ = k(y − y0)
x' = x
y' = sin(θ)(z − 1/k) + y0
z' = cos(θ)(z − 1/k) + 1/k
Scaling can also be done in the WCS or the OCS (preferred). A hierarchical, object-oriented approach is preferred.
Simple approach
Assume the camera is located at the origin of the WCS, facing the z-direction. Assume also that a projection screen is located at a distance d behind the camera on the x-y plane.
[Figure: camera at the WCS origin with the projection screen at distance d. Note that physically, the projected image is in a left-handed coordinate system.]
For a point (x1, y1, z1), its coordinates on the projection screen are:
x1' = x1 d / z1
y1' = y1 d / z1
Perspective information
The distance d of the screen serves as a scaling factor. The new coordinates are divided by the z value of the old system; thus the further away the object, the smaller it appears on the projection screen. The new coordinate system needs another variable z' = z for each point to determine the front and back relationship. If two points at different distances result in the same (x, y), only the one with the smaller z is displayed.
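The projection and the front/back test above can be sketched as follows. This is a minimal illustration with made-up point values; the function name is not from the slides.

```python
# Perspective projection: the screen distance d acts as a scale factor,
# and z is kept alongside the screen coordinates to resolve occlusion.

def project(point, d=1.0):
    """Project a 3-D point (x, y, z) onto the screen at distance d."""
    x, y, z = point
    return (x * d / z, y * d / z, z)   # keep z for the front/back test

near = project((2.0, 2.0, 2.0))
far = project((4.0, 4.0, 4.0))

# Both points land on the same screen position (1.0, 1.0); only the one
# with the smaller z (the nearer point) is displayed.
visible = near if near[2] < far[2] else far
print(visible)
```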
Field of view
The distance d also defines the viewing angle θ of the viewable region:
tan(θ) = l/d
where l is the half-extent of the screen.
Camera position
The camera can be considered as an object and placed at any location in the WCS by matrix transformation. Question: for a point (x, y, z) in the WCS, what is its new position in the CCS after camera motion?
This can be viewed as the WCS moving with respect to the CCS. The rotations and translation can be performed in sequence, combined into a single 4×4 homogeneous transform:

| Axx Axy Axz Tx |
| Ayx Ayy Ayz Ty |
| Azx Azy Azz Tz |
|  0   0   0   1 |
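A transform of this shape can be applied to a WCS point as sketched below. Pure Python is used for self-containment; the rotation block, translation, and point values are illustrative.

```python
# Apply the homogeneous transform [A | T; 0 0 0 1] to a point with
# implicit homogeneous coordinate w = 1.

def transform(A, T, p):
    """A is a 3x3 rotation block, T a translation, p a WCS point."""
    x, y, z = p
    return tuple(A[i][0] * x + A[i][1] * y + A[i][2] * z + T[i]
                 for i in range(3))

# Example: camera moved so the WCS shifts by (0, 0, -5), no rotation.
A = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
T = [0, 0, -5]
print(transform(A, T, (1.0, 2.0, 8.0)))  # (1.0, 2.0, 3.0)
```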
Example: [Figure: zoom — a reference image of an object shown on the original screen and zoomed in on a close-up screen.]
Depth of focus
In real situations, objects far from the focal plane should be blurred. Simulate this by applying a blurring function whose strength depends on the z value.
[Figure: wire-frame object with numbered vertices.]
Difficult to remove lines masked by other surfaces. Not very common for modeling.
Planar polygons
Formed by a chain of straight lines
Index  Vertex     Polygon (vertex indices)
0      (0,0,0)    (1, 2, 5, 4, 1)
1      (0,0,2)    (2, 3, 6, 5, 2)
2      (4,0,2)    (4, 5, 8, 9, 4)
3      (3,0,0)    (5, 6, 8, 5)
5      (0,1,2)    (0, 3, 6, 7, 0)
[Figure: the polygonal object with numbered vertices.]
Difficult to represent curved surfaces. Points are not free to move along different axes. Example: reducing the z-coordinate of vertex (2) destroys the planar property of polygon (0).
Planar triangles
Try to resolve the point-motion problem in planar polygons by breaking the polygons down into triangles.
Index  Vertex     Triangle (vertex indices)
0      (0,0,0)    (1, 2, 4)
1      (0,0,2)    (2, 4, 5)
2      (4,0,2)    (4, 5, 9)
3      (3,0,0)    (5, 8, 9)
5      (0,1,2)    (2, 3, 5)
Easier to edit the object. Still a problem: how to represent disjoint boundaries such as holes? Break down into more triangles; a subtraction space may be needed.
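The indexed-triangle idea can be sketched as below: vertices are stored once, triangles as index triples, so moving a vertex updates every triangle that shares it and can never break planarity (every triangle is trivially planar). The data values here are illustrative, not the table above.

```python
# Indexed triangle mesh: shared vertex list plus index triples.

vertices = [
    (0.0, 0.0, 0.0),   # 0
    (0.0, 0.0, 2.0),   # 1
    (4.0, 0.0, 2.0),   # 2
    (3.0, 0.0, 0.0),   # 3
]
triangles = [(0, 1, 2), (0, 2, 3)]   # two triangles sharing edge 0-2

# Editing one vertex automatically updates both triangles that use it,
# and each triangle remains planar by construction.
vertices[2] = (4.0, 0.0, 1.0)
print([vertices[i] for i in triangles[0]])
```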
Sectional polygons
Slice a 3-D object into cross-sections, and store each cross-section in polygon form. The object's surface mesh is reconstructed by connecting each contour with those above and below. Requires a central axis to align the planes.
[Figure: an object and its sectional representation about a central axis. Two planes are necessary to represent a simple object like this.]
The number of planes can be varied depending on the complexity of the object. Polar coordinates with the central axis as origin provide an easy description of the planar polygons.
Problems: holes and back-curving edges are difficult to represent.
Extrusion
One of the short-cuts to overcome the tedious cross-section work in building up complex models. A 2-D cross-section is extended along a direction to create a cylinder-like 3-D object.
[Figure: a 2-D cross-section extruded along the z-direction to produce the 3-D result.]
Object of revolution
Begin with a cross-section of an object and then rotate the cross-section around a central axis. Can represent a complicated object with very few data points. The cross-section can be off-axis to form torus-like objects.
Mathematical construction
A set of inequalities is used to determine if a point is inside or outside the solid boundary
points outside sphere: x^2 + y^2 + z^2 > r^2
points inside sphere: x^2 + y^2 + z^2 < r^2
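The inequality test can be written directly. A minimal sketch, assuming a sphere centred at the origin; the function name is illustrative.

```python
# Classify a point against a sphere of radius r centred at the origin
# using the inequality x^2 + y^2 + z^2 < r^2.

def inside_sphere(x, y, z, r):
    return x * x + y * y + z * z < r * r

print(inside_sphere(1.0, 1.0, 1.0, 2.0))   # True  (3 < 4)
print(inside_sphere(2.0, 2.0, 2.0, 2.0))   # False (12 > 4)
```

Intersections and unions of such inequalities give solids bounded by several surfaces.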
Spatial subdivision
Concept similar to a 3-D bitmap: objects are divided into small cubes. Larger cubes are identified to save storage space.
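A minimal sketch of the idea, assuming a unit sphere at the origin as the object: a cube is stored whole when it lies entirely inside or outside the object, and split into eight octants otherwise. The termination test and depth limit are illustrative choices.

```python
# Recursive spatial subdivision: split a cube into 8 octants, keeping
# large cubes whole when they are uniformly inside/outside a unit sphere.

def subdivide(x, y, z, s, depth):
    # squared distance from the origin to the nearest/farthest point
    # of the cube [x, x+s] x [y, y+s] x [z, z+s]
    near = sum(max(lo, min(0.0, lo + s)) ** 2 for lo in (x, y, z))
    far = sum(max(abs(lo), abs(lo + s)) ** 2 for lo in (x, y, z))
    if far < 1.0 or near > 1.0 or depth == 0:
        return 1                       # stored as a single cube
    h = s / 2
    return sum(subdivide(x + dx, y + dy, z + dz, h, depth - 1)
               for dx in (0, h) for dy in (0, h) for dz in (0, h))

# Far fewer cubes than the 8**3 = 512 of a full 3-D bitmap at the
# same resolution.
print(subdivide(-1.0, -1.0, -1.0, 2.0, 3))
```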
Definition of rendering
The viewing algorithm of how objects are drawn and visualized by a viewer. Usually objects are rendered in order. Different algorithms handle different representations.
Ray tracing
Traces light backward from the eye through the scene to the light source. Takes into account perfect specular interaction. Ray-traced examples:
http://www.cse.buffalo.edu/pub/WWW/povray/
Radiosity
Uses an analogy with thermal heat diffusion to model the energy emitted from each surface patch. Handles diffuse interaction.
http://freespace.virgin.net/hugo.elias/radiosity/radiosity.htm
www.siggraph.org/education/materials/HyperGraph/radiosity/overview_1.htm
For a point source of intensity I at (px, py, pz) and a surface point (x, y, z) with unit normal n:
s = [px − x, py − y, pz − z]
Recall the vector dot product: s·n = |s||n| cos(θ)
cos(θ) = s·n / (|s||n|)
illumination at (x, y, z) = I (s·n) / |s|^3
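The formula above combines the cosine term with an inverse-square fall-off, since I (s·n)/|s|^3 = I cos(θ)/|s|^2 for a unit normal. A minimal sketch, with the function name and values as illustrative assumptions:

```python
# Point-source illumination: I * (s . n) / |s|**3, assuming n is a
# unit normal.
import math

def illumination(I, light, point, n):
    s = [l - p for l, p in zip(light, point)]     # surface -> source
    dot = sum(si * ni for si, ni in zip(s, n))
    return I * dot / math.dist(light, point) ** 3

# Light directly above a horizontal surface at distance 2:
# cos(theta) = 1 and fall-off 1/4, so 100 * 1/4 = 25.
print(illumination(100.0, (0, 2, 0), (0, 0, 0), (0, 1, 0)))
```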
Spot light
Similar to a point source, except that intensity falls off rapidly at locations outside the cone of illumination.
[Figure: spot light at (px, py, pz) with cone axis s; vector u points from the light toward the surface point (x, y, z).]
- Position: (px, py, pz)
- Direction: s
- Cone angle: α
A point reached along u is within the illumination cone if θ < α/2, where
cos(θ) = (s·u) / (|s||u|)
Shadows
Need a model to handle the interaction between light sources and objects. Simple method: treat opaque objects as negative light sources that subtract out illumination. The magnitude of the negative light source defines the transparency of the object surface to a third object.
Note that the source color and the surface color are combined in the subtractive color space.
Diffused reflection
Simulates reflection by rough surfaces, where incident light is reflected in all directions. Similar to ambient light reflection, but perspective information due to distance and orientation w.r.t. the light sources is included. Easier to represent using the HSV model, so that the positional data can be carried by the intensity component without affecting the color.
Example: surfaces are rendered in the same color with different brightness.
Specular reflection
Reflection generated by polished surfaces creates a highlight of the illuminating light source and the surrounding environment. Surface brightness is the sum of the ambient reflection and the specular highlight. Perfectly reflected light follows the law of reflection, with the incident angle equal to the reflected angle. Depending on the material, some small amount of diffusion due to micro-surface roughness may occur, resulting in a slightly blurred highlight.
[Figure: specular highlight geometry — source at (px, py, pz), surface point (x, y, z).]
R = Isource Ks cos^g(θ)
Isource is the source intensity, Ks is the specular coefficient, and g is the gloss (surface shininess) factor.
Example: g = 10 gives a rough plastic effect, while g = 150 gives a metallic-like surface with a small highlight. θ has to be found in terms of vectors in the Cartesian system, calculated from relative coordinates.
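The effect of the gloss factor can be sketched numerically. A minimal illustration, with Ks and the viewing angle chosen arbitrarily:

```python
# Specular term R = Isource * Ks * cos(theta)**g, where theta is the
# angle between the perfect reflection direction and the view direction.
import math

def specular(I_source, Ks, g, cos_theta):
    return I_source * Ks * max(0.0, cos_theta) ** g

# The same 10-degree viewing angle gives a broad highlight for rough
# plastic (g = 10) and a much tighter one for a metal-like surface
# (g = 150).
c = math.cos(math.radians(10))
print(specular(1.0, 0.8, 10, c))
print(specular(1.0, 0.8, 150, c))
```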
Render order
In 2-D, the bottom-most plane is colored first, and then color is applied upward towards the top. In 3-D, there are different orders in which to color the scene. Object order:
for each primitive P do
  for each pixel q within P do
    update frame buffer based on color and visibility of P at q
Image order:
for each pixel q do
  for each primitive P covering q do
    compute P's contribution to q, outputting q when finished
Back-face removal
Assume solid objects, with no free-standing 2-D planes. Only one side of a surface will be displayed, indicated by the direction of its normal vector. Remove all planes that face away from the camera. This can be tested by the dot product of the normal vector and the view vector: the plane is facing away if the angle between the vectors is greater than 90 degrees.
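The dot-product test can be sketched as below. The sign convention (view vector pointing from the camera toward the face, normals pointing out of the solid) is an assumption for illustration.

```python
# Back-face test: cull a face when the angle between its outward normal
# and the view vector exceeds 90 degrees. With the view vector pointing
# from the camera toward the face, that means a positive dot product.

def is_back_face(normal, view):
    dot = sum(n * v for n, v in zip(normal, view))
    return dot > 0.0   # normal points away from the camera: cull

view = (0.0, 0.0, 1.0)                        # looking down +z
print(is_back_face((0.0, 0.0, -1.0), view))   # False: faces the camera
print(is_back_face((0.0, 0.0, 1.0), view))    # True: faces away, cull
```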
Painter's algorithm
Sort the planes in depth order, as in 2-D, given a camera position. Paint the furthest one first and then the closer ones. Problem: how to handle intersecting planes? Possible solution: break each such plane into two smaller ones.
Z-buffer algorithm
For each pixel in the display plane, assign a memory location with a large initial value (maximum depth). When an object is rendered to that pixel's position, compare its depth with the stored depth. The pixel is only rendered if its depth is smaller than the stored value, and the stored value is then updated.
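The algorithm above can be sketched on a single pixel. The buffer layout (dictionaries keyed by pixel) is an illustrative simplification:

```python
# Z-buffer on one pixel: the depth buffer starts at maximum depth, and
# a fragment is drawn only if it is nearer than the stored depth.

INF = float("inf")
depth_buffer = {(0, 0): INF}           # one pixel, initially "empty"
frame_buffer = {}

def plot(pixel, z, color):
    if z < depth_buffer[pixel]:        # nearer than what is stored?
        depth_buffer[pixel] = z        # update the stored depth
        frame_buffer[pixel] = color    # render the pixel

plot((0, 0), 5.0, "red")               # first object: drawn
plot((0, 0), 8.0, "blue")              # farther object: rejected
plot((0, 0), 2.0, "green")             # nearer object: overwrites
print(frame_buffer[(0, 0)])
```

Note that objects can be submitted in any order; depth resolution needs no sorting, unlike the painter's algorithm.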
Scaling problem
A distance from the camera at which the texture map and the object plane align must be defined. When the plane is very close to the camera, the entire plane may be mapped to only 1-2 pixels in the texture. Similarly, when the plane is very far, the entire texture will be scaled to a few pixels.
[Figure: texture space T(u, v) with u, v ∈ [0, 1], mapped onto a cylinder of radius r and height h; surface point P(xi, yi, zi).]
Vertical coordinate: v = yi / h
Horizontal coordinate: u = θ / 2π = cos⁻¹(zi / r) / 2π
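The cylindrical mapping can be sketched directly from these formulas. The function name is illustrative, and the angle convention follows the reconstructed formula above:

```python
# Cylindrical texture coordinates for a cylinder of radius r and
# height h: v = yi / h, u = acos(zi / r) / (2*pi).
import math

def cylinder_uv(p, r, h):
    xi, yi, zi = p
    u = math.acos(zi / r) / (2.0 * math.pi)
    v = yi / h
    return u, v

# Point on the side of a cylinder (r = 1, h = 2), halfway up:
u, v = cylinder_uv((1.0, 1.0, 0.0), 1.0, 2.0)
print(u, v)   # u = 0.25 (a quarter turn), v = 0.5
```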
Note: reflection of nearby objects can be treated as texture mapping from the camera screen.
Further info
- Alan Watt, 3D Computer Graphics, 3rd Ed., Addison-Wesley, 2000
- Olin Lathrop, The Way Computer Graphics Works, John Wiley & Sons, 1997: http://www.embedinc.com/book/index.htm
- Ray-tracing examples: http://www.cse.buffalo.edu/pub/WWW/povray/
- Radiosity references:
  http://www.siggraph.org/education/materials/HyperGraph/radiosity/radiosity.htm
  http://freespace.virgin.net/hugo.elias/radiosity/radiosity.htm
- Polygonal representation and rendering: http://cis.csuohio.edu/~arndt/graphics.ppt
- Bezier curves:
  www.moshplant.com/director/bezier
  www.math.ucla.edu/~baker/java/hoefer/Bezier.htm
- Parametric patches: http://www.cs.wpi.edu/~matt/courses/cs563/talks/surface/bez_surf.html