
Instructor: Oscar Au

3-D Objects

• Objects are virtual entities in a continuous environment

• Include points, lines, shapes in 2-D and volume in 3-D

• 3-D modeling involves representation, position, manipulation, lighting, and rendering

Co-ordinate systems

• The object is located in the World Coordinate System (WCS), which is normally a right-handed Cartesian system

• Camera defines its Camera Coordinate System (CCS)

[Figure: the camera's view-point system is left-handed, while the WCS is a right-handed system.]

Operation on OCS

• Each object has an anchor point (origin) and all its vertices are defined in the object coordinate system (OCS)

• Matrix operations can be performed in the OCS
Rotation by "roll" about the z-axis:

$$\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} \cos(roll) & -\sin(roll) & 0 \\ \sin(roll) & \cos(roll) & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix}$$

[Figure: points (x1,y1,z1) and (x2,y2,z2) rotated by roll about the z-axis to (x1',y1',z1') and (x2',y2',z2').]

Rotation by "yaw" about the y-axis:

$$\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} \cos(yaw) & 0 & \sin(yaw) \\ 0 & 1 & 0 \\ -\sin(yaw) & 0 & \cos(yaw) \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix}$$

[Figure: points (x1,y1,z1) and (x2,y2,z2) rotated by yaw about the y-axis to their primed positions.]
Rotation by "pitch" about the x-axis:

$$\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(pitch) & -\sin(pitch) \\ 0 & \sin(pitch) & \cos(pitch) \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix}$$

[Figure: points (x1,y1,z1) and (x2,y2,z2) rotated by pitch about the x-axis to their primed positions.]
Scaling by (S_X, S_Y, S_Z):

$$\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} S_X & 0 & 0 \\ 0 & S_Y & 0 \\ 0 & 0 & S_Z \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix}$$

• Note: scaling should not be performed on a rigid body
Translation by (t_X, t_Y, t_Z):

$$\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} x \\ y \\ z \end{bmatrix} + \begin{bmatrix} t_X \\ t_Y \\ t_Z \end{bmatrix}$$

• Note: translation cannot be done by 3×3 matrix multiplication

General matrix operation

Using homogeneous coordinates:

$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} A_{XX} & A_{XY} & A_{XZ} & T_X \\ A_{YX} & A_{YY} & A_{YZ} & T_Y \\ A_{ZX} & A_{ZY} & A_{ZZ} & T_Z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$

• Rotation, scaling and translation all by matrix multiplication
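The homogeneous 4×4 form can be sketched in plain Python (no matrix library); `rotation_z`, `translation`, `matmul` and `apply` are illustrative helper names, not from the slides:

```python
# Minimal sketch: rotation and translation both become 4x4 matrix
# multiplication once points are written as [x, y, z, 1].
import math

def rotation_z(roll):
    """4x4 rotation about the z-axis ("roll")."""
    c, s = math.cos(roll), math.sin(roll)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

def translation(tx, ty, tz):
    """4x4 translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(m, p):
    """Apply a 4x4 transform to a 3-D point (implicit w = 1)."""
    x, y, z = p
    v = [x, y, z, 1]
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))

# Rotate a point 90 degrees about z, then translate by (1, 0, 0).
m = matmul(translation(1, 0, 0), rotation_z(math.pi / 2))
print(apply(m, (1, 0, 0)))  # approximately (1, 1, 0)
```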

Deformation

• Cannot be done by linear matrix operation

• General principle:

$$x' = F_X(x, y, z), \quad y' = F_Y(x, y, z), \quad z' = F_Z(x, y, z)$$

• Tapering:

$$x' = x\,F_1(z), \quad y' = y\,F_2(z), \quad z' = z$$

• Axis Twisting

x

y

z

=

'

' =

=

'

[

[

( )]

F z

( )]

x cos

x

z

sin

+ y cos

y

sin F z

[
F z
(
)]
[
F z
(
)]
• Bending:

$$x' = x$$
$$y' = -\sin(\theta)\,(z - 1/k) + y_0$$
$$z' = \cos(\theta)\,(z - 1/k) + 1/k$$

where 1/k is the curvature of the bending, y_0 is the center of bending, and θ = k(y − y_0)
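The tapering and twisting deformations can be sketched directly; `taper` and `twist` are hypothetical helper names, and the functions F1, F2, F are passed in as arbitrary functions of z:

```python
# Minimal sketch of two non-linear deformations applied per-point.
import math

def taper(p, f1, f2):
    """Tapering: x' = x*F1(z), y' = y*F2(z), z' = z."""
    x, y, z = p
    return (x * f1(z), y * f2(z), z)

def twist(p, f):
    """Axis twisting about z: rotate each slice by the angle F(z)."""
    x, y, z = p
    a = f(z)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

# Taper linearly: cross-sections shrink to half size at z = 1.
print(taper((2.0, 2.0, 1.0), lambda z: 1 - 0.5 * z, lambda z: 1 - 0.5 * z))
# Twist by 90 degrees per unit height.
print(twist((1.0, 0.0, 1.0), lambda z: z * math.pi / 2))
```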

Object representation in WCS

• The place where objects are assembled

• Objects usually represented by a pointer to their origins

• A WCS can become another OCS in a new WCS

[Figure: objects (1) and (2) placed in the WCS.]

Rotation and scaling in WCS

• (1) is obtained by simple rotation

• To obtain (2), either convert all points in the OCS to WCS, or calculate the necessary rotation in OCS

• Scaling can also be done in WCS or OCS (preferred)

• Hierarchy and object-oriented approach preferred


Simple approach

• Assume the camera is located at the origin of the WCS facing the z-direction

• Assume also a projection screen is located at a distance d behind the camera on the x-y plane

[Figure: camera at the origin of the WCS; a point (x1, y1, z1) projects onto the projection screen at distance d, giving image coordinates (x1', y1').]

Note that, physically, the projected image is in a left-handed coordinate system.

Calculating new coordinates on the projection

• Using similar triangles, y1'/d = y1/z1 and x1'/d = x1/z1, so

y1' = y1 d / z1

x1' = x1 d / z1

Perspective information

• The distance d of the screen serves as a scaling factor

• The new coordinates are divided by the z value of the old system. Thus the further away the object, the smaller it is on the projection screen.

• The new coordinate system needs another variable z’ = z for each point to determine the front and back relationship.

• If two points at different distances project to the same (x', y'), only the one with the smaller z' is displayed.
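The projection rule above can be sketched as a small function; `project` is an illustrative name, and z' = z is kept for depth ordering as the slides describe:

```python
# Perspective projection onto a screen at distance d (similar triangles):
# x' = x*d/z, y' = y*d/z; keep z' = z to resolve front/back relationships.
def project(point, d):
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    return (x * d / z, y * d / z, z)

# A point twice as far away projects to half the screen coordinates.
print(project((2.0, 4.0, 2.0), 1.0))  # (1.0, 2.0, 2.0)
print(project((2.0, 4.0, 4.0), 1.0))  # (0.5, 1.0, 4.0)
```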

Field of view

• The distance d also defines the viewing angle θ

[Figure: viewable region bounded by the half-angle θ, with screen half-extent l at distance d.]

tan(θ) = l/d

Camera position

• Camera can be considered as an object and be placed at any location in the WCS by matrix transformation

• Question: for a point (x, y, z) in the WCS, what is its new position in CCS after camera motion?

Converting points in WCS to CCS

• Assume the camera is at the origin of the WCS at the beginning

• The camera is then moved to a new location by a series of matrix transformations whose product has the form

$$\begin{bmatrix} A_{XX}(\alpha,\theta,\phi) & A_{XY}(\alpha,\theta,\phi) & A_{XZ}(\alpha,\theta,\phi) & T_X \\ A_{YX}(\alpha,\theta,\phi) & A_{YY}(\alpha,\theta,\phi) & A_{YZ}(\alpha,\theta,\phi) & T_Y \\ A_{ZX}(\alpha,\theta,\phi) & A_{ZY}(\alpha,\theta,\phi) & A_{ZZ}(\alpha,\theta,\phi) & T_Z \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

• This can be viewed as the WCS having moved with respect to the CCS: performing the inverse of the same sequence of transforms on a point (x, y, z) in the WCS gives its new position in the CCS
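Assuming the camera pose is stored as a 4×4 rigid transform (rotation plus translation), converting a WCS point into the CCS amounts to applying the inverse transform. A sketch that inverts [R | T] analytically as [Rᵀ | −RᵀT] (helper names `invert_rigid` and `to_ccs` are illustrative):

```python
# Express a WCS point in the camera's coordinate system by applying
# the inverse of the camera's 4x4 pose matrix.
def invert_rigid(m):
    """Invert a 4x4 rigid-body transform: [R | T] -> [R^T | -R^T T]."""
    r = [[m[j][i] for j in range(3)] for i in range(3)]          # R^T
    t = [-sum(r[i][k] * m[k][3] for k in range(3)) for i in range(3)]
    return [r[0] + [t[0]], r[1] + [t[1]], r[2] + [t[2]], [0, 0, 0, 1]]

def to_ccs(camera_pose, p):
    x, y, z = p
    inv = invert_rigid(camera_pose)
    v = [x, y, z, 1]
    return tuple(sum(inv[i][k] * v[k] for k in range(4)) for i in range(3))

# Camera translated to (0, 0, -5): a point at the WCS origin appears
# at z = +5 in front of the camera.
pose = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, -5], [0, 0, 0, 1]]
print(to_ccs(pose, (0, 0, 0)))  # (0, 0, 5)
```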

Zooming and camera position

• Zooming is equivalent to moving the screen away from the camera lens (increasing the magnification factor "d")

• Moving the camera closer to the object also results in the object appearing enlarged

• But the two result in different distortion

[Figure: camera position with the original, zoomed and close-up screens and their viewable regions.]

• Example: a reference image, the same object zoomed in, and the camera placed closer to the object produce visibly different perspective

Depth of focus

• In real situations, objects far from the focal plane should be blurred

• Simulate this by applying a blurring function that depends on the z' value

Edge-represented objects

• Objects formed by lines defined by vertices

• Used for quick visualization and transparent views

Index:  0        1        2        3        4        …
Point:  (0,0,0)  (1,0,0)  (1,0,1)  (0,0,1)  (0,1,0)  …
Line:   (0,1)    (0,3)    (0,4)    (1,2)    (1,5)    …

[Figure: unit cube with vertices numbered 0-7.]

• Difficult to remove lines masked by other surfaces

• Not very common for modeling
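An edge representation is just two index lists. A sketch for the unit cube from the table (vertex indices beyond those listed in the table are an assumed completion of the cube):

```python
# Edge (wireframe) representation: vertices plus index pairs for lines.
points = [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1),
          (0, 1, 0), (1, 1, 0), (1, 1, 1), (0, 1, 1)]
lines = [(0, 1), (0, 3), (0, 4), (1, 2), (1, 5), (2, 3),
         (2, 6), (3, 7), (4, 5), (4, 7), (5, 6), (6, 7)]

# Rendering a wireframe just means drawing each edge's two endpoints.
for a, b in lines[:3]:
    print(points[a], "->", points[b])
```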

Planar polygons

• Formed by a chain of straight lines

Index:     0            1            2            3          5            …
Vertices:  (0,0,0)      (0,0,2)      (4,0,2)      (3,0,0)    (0,1,2)      …
Polygons:  (1,2,5,4,1)  (2,3,6,5,2)  (4,5,8,9,4)  (5,6,8,5)  (0,3,6,7,0)  …

[Figure: polyhedron with numbered vertices.]

• Difficult to represent curved surfaces

• Points are not free to move along different axes

• Example: reducing the z-coordinate of vertex (2) destroys the planar property of polygon (0)

[Figure: vertex (2) pulled out of the plane of polygon (1, 2, 5, 4).]

Planar triangles

• Try to resolve the point-motion problem of planar polygons by breaking each polygon down into triangles

Index:      0        1        2        3        5        …
Vertices:   (0,0,0)  (0,0,2)  (4,0,2)  (3,0,0)  (0,1,2)  …
Triangles:  (1,2,4)  (2,4,5)  (4,5,9)  (5,8,9)  (2,3,5)  …

• Easier to edit the object

• Still a problem: how to represent disjoint boundaries such as holes?

• Break down into more triangles

• Subtraction space may be needed

[Figure: a region with two disjoint boundaries (Boundary 1 and Boundary 2), such as a hole.]

Sectional polygons

• Slice a 3-D object into cross-sections, and store each cross-section in polygon form

• The object surface mesh is reconstructed by connecting each contour with those above and below

• Requires a central axis to align the planes

[Figure: object, central axis, and sectional representation; two planes are enough to represent a simple object like this.]

• The number of planes can be varied depending on the complexity of an object

• Polar coordinates with the central axis as origin provide an easy description of planar polygons

[Figure: a cross-section in polar coordinates; a cross-section with holes and back-curving edges.]

• Still difficult to represent holes and back-curving lines

Extrusion

• One of the short-cuts to overcome the tedious cross-sections in building up complex models

• A 2-D cross-section is extended to create a cylinder-like 3-D object

[Figure: a 2-D cross-section swept along the extrusion direction to produce the resulting solid.]

• Size and rotation can be re-defined during extrusion

Object of revolution

• Begin with a cross-section of an object and then rotate the cross-section around a central axis

• Can represent a complicated object with very few data points

• The cross-section can be off-axis to form torus-like objects


Constructive Solid Geometry (CSG)

• Intend to construct complicated objects from a set of simple primitives such as cube, cone, sphere etc.

• Mathematical equations used to describe geometry

e.g. a sphere is given by x² + y² + z² = r²

• To allow object interaction such as addition and subtraction

Mathematical construction

• A set of inequalities is used to determine if a point is inside or outside the solid boundary

points outside the sphere: x² + y² + z² > r²

points inside the sphere: x² + y² + z² < r²

• More complex geometry can be constructed by logical operations on the inequalities

e.g. points of a hollow sphere with inner radius r₁ and outer radius r₂ have to satisfy both

x² + y² + z² > r₁²  and  x² + y² + z² < r₂²
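The hollow-sphere membership test is a direct translation of the two inequalities; `in_hollow_sphere` is an illustrative name:

```python
# CSG point-membership sketch: a point lies in the hollow sphere only
# if it satisfies BOTH inequalities (logical AND).
def in_hollow_sphere(p, r1, r2):
    x, y, z = p
    d2 = x * x + y * y + z * z
    return r1 * r1 < d2 < r2 * r2   # outside inner shell AND inside outer

print(in_hollow_sphere((1.5, 0, 0), 1.0, 2.0))  # True: between the shells
print(in_hollow_sphere((0.5, 0, 0), 1.0, 2.0))  # False: inside the hole
print(in_hollow_sphere((3.0, 0, 0), 1.0, 2.0))  # False: outside entirely
```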

Spatial subdivision

• Concept similar to a 3-D bitmap, where objects are divided into small cubes

• Larger cubes are identified to save storage space

More advanced models (will not be covered)

• Spine objects (you may regard it as extrusion with curved central axis)

• Fractals

• Soft objects interacting with neighboring objects

Definition of rendering

• Viewing algorithm of how objects are drawn and visualized by a viewer

• Usually objects are rendered in order

• Different algorithms to handle different representations

Wire frame or line rendering

• Simplest form of rendering, used for quick visualization

• Only required to figure out the vertices and connect related pairs

• Overlapping/depth information need not be considered

• Sometimes difficult to understand the object

Opaque surface rendering

• Define the color of a polygon/surface and color the entire surface with the single color

• Relatively simple, except that the visual order of the surface has to be calculated

• Project each object onto the camera’s viewing screen and determine the paint color by comparing order with existing points

Complex surface rendering

• Need to consider texture, perspective, lighting and other factors

• Interaction between objects also needs to be considered

Ray tracing

• Traces light backward from the eye through the scene to the light source

• Takes into account perfect specular interaction

• Ray-traced examples: http://www.cse.buffalo.edu/pub/WWW/povray/

Radiosity

• Uses an analogy with thermal heat diffusion to model the "energy" emitted from each surface patch

• Handles diffuse interaction

Difficulty in simulating realistic illumination

• Illumination of an object consists of many sources from multiple reflection of surrounding surfaces (similar to reverberation in audio)

• It is very difficult to trace all the sources to calculate the right color and luminosity of the resulting surface

• Need to develop models to achieve trade-off between realism and efficient rendering

www.siggraph.org/education/materials/

Point light source

• The simplest source with parameters of position, color and intensity

• The illumination intensity at a certain point is calculated from the inverse-square law and the angle of incidence

[Figure: point source at (p_x, p_y, p_z) above a surface point (x, y, z) with normal vector n = [0, 1, 0]; s = [p_x − x, p_y − y, p_z − z] points from the surface toward the source, and θ is the angle between s and n.]

Recall the vector dot product: s · n = |s||n| cos(θ), so for a unit normal cos(θ) = (s · n)/|s|.

Combining the inverse-square law with the cosine of the incidence angle:

illumination at (x, y, z) = I (s · n) / |s|³

where I is the intensity at the source.
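The point-light formula can be sketched directly; `point_light` is an illustrative name, and the surface normal is assumed to be a unit vector:

```python
# Illumination from a point source: inverse-square law times the cosine
# of the incidence angle, i.e. I * (s . n) / |s|^3 for a unit normal n.
import math

def point_light(source_pos, intensity, surface_pos, normal):
    s = [a - b for a, b in zip(source_pos, surface_pos)]   # surface -> light
    dot = sum(si * ni for si, ni in zip(s, normal))
    if dot <= 0:
        return 0.0                                          # light is behind
    return intensity * dot / (math.dist(source_pos, surface_pos) ** 3)

# Light directly above at height 2, unit intensity, normal pointing up:
# cos(theta) = 1 and distance = 2, so illumination = 1 / 4.
print(point_light((0, 2, 0), 1.0, (0, 0, 0), (0, 1, 0)))  # 0.25
```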

Spot light

• Similar to a point source except that intensity falls off rapidly at locations outside the cone of illumination

[Figure: spot source at (p_x, p_y, p_z) with direction s and cone angle φ; u points from the source toward the surface point (x, y, z), and θ is the angle between s and u.]

• Parameters:

- Position: (p_x, p_y, p_z)

- Direction: s

- Cone angle: φ

• A point given by u is within the illumination cone if θ < φ/2, where

cos(θ) = (s · u)/(|s||u|)

• Illumination can be calculated as a function of θ, φ and distance from the source, for example:

Illumination = I_source (u · n)/|u|³ · cos^g(πθ/φ)

where g controls the sharpness of the spot
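The angular falloff factor alone can be sketched as follows; `spot_factor` is an illustrative name, and the cos^g(πθ/φ) falloff is as reconstructed above:

```python
# Spot-light falloff sketch: a point is lit only when the angle theta
# between the spot direction s and the vector u to the point is within
# half the cone angle phi; intensity then falls off as cos^g(pi*theta/phi).
import math

def spot_factor(s, u, phi, g):
    dot = sum(a * b for a, b in zip(s, u))
    theta = math.acos(dot / (math.hypot(*s) * math.hypot(*u)))
    if theta >= phi / 2:
        return 0.0                       # outside the illumination cone
    return math.cos(math.pi * theta / phi) ** g

# On-axis point: full intensity regardless of the sharpness g.
print(spot_factor((0, -1, 0), (0, -1, 0), math.radians(30), 10))  # 1.0
```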

Multiple light sources

• The superposition principle can be used to calculate the final illumination; equalization may be necessary to prevent saturation

• Need a model to handle the interaction between light sources and objects

• Simple method: treat opaque objects as negative light sources to subtract out illumination

• The magnitude of the negative light source defines the transparency of the object surface to a third object

Ambient light reflection

• It refers to the constant illumination that naturally exists due to multiple reflections and that enables us to see objects

• No light source is associated with it, and it is independent of orientation and distance

• Simply calculated by multiplying each light-source color component by the corresponding reflection coefficient

R = I_R-source × R_R-object

G = I_G-source × R_G-object

B = I_B-source × R_B-object

Observed color

• Note that the source color and the surface color combine in a subtractive color space

Diffused reflection

• It simulates reflection by rough surfaces, in which incident light is reflected in all directions

• Similar to ambient light reflection, except that perspective information due to distance and orientation w.r.t. the light sources is included

• Easier to represent using the HSV model, so that the positional data can be carried by the intensity component without affecting the color

Example: surfaces are rendered by same color with different brightness

Specular reflection

• Reflection generated by polished surfaces creates a highlight of the illuminating light source and surrounding environment

• Surface brightness is the sum of its ambient reflection together with specular highlight

• Perfectly reflected light follows the law of reflection with incident angle equal to the reflected angle

• Depending on different material, some small amount of diffusion due to micro-surface roughness may occur resulting in slightly blurred surface

• The specular reflection component is modeled by

R = I_source K_S cos^g(φ)

where I_source is the source intensity, K_S is the specular coefficient, and g is the gloss factor of the surface

[Figure: light from (p_x, p_y, p_z) strikes the surface at (x, y, z) with incidence angle θ; r is the mirror-reflection direction, v points toward the viewer at (x', y', z'), and φ is the angle between r and v.]

• Example: g=10 gives a rough plastic effect while g=150 gives metallic-like surface with small highlight

• We have to find φ in terms of vectors in the Cartesian system; calculating from relative coordinates, we have

R = I_source K_S [2 (n · s)(n · v) − (v · s)]^g

where n, s and v are unit vectors.
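The vector form of the specular term can be sketched as below; `specular` is an illustrative name, and n, s, v are assumed to be unit vectors (normal, direction to light, direction to viewer):

```python
# Specular term R = I * Ks * [2(n.s)(n.v) - (v.s)]^g; the bracketed
# quantity equals cos(phi) between the mirror direction and v.
def specular(intensity, ks, n, s, v, g):
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    cos_phi = 2 * dot(n, s) * dot(n, v) - dot(v, s)
    return intensity * ks * max(0.0, cos_phi) ** g

# Light straight down onto an upward normal, viewer directly above:
# the mirror direction coincides with v, so cos(phi) = 1.
print(specular(1.0, 0.5, (0, 1, 0), (0, 1, 0), (0, 1, 0), 150))  # 0.5
```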

Examples in surface rendering

Render order

• In 2-D, the bottom-most plane is colored first and then color is applied upward towards the top

• In 3-D, there are different orders to color the scene

• Object order:

    for each primitive P do
        for each pixel q within P do
            update frame buffer based on color and visibility of P at q

• Image order:

    for each pixel q do
        for each primitive P covering q do
            compute P's contribution to q, outputting q when finished

Back-face removal

• Assume solid objects, with no free-standing 2-D planes

• Only one side of a surface will be displayed, indicated by the direction of its normal vector

• Remove all planes that face away from the camera

• Can be tested by the dot product of the normal vector and the view vector

• The plane is facing away if the angle between the vectors is greater than 90 degrees

[Figure: forward-facing and back-facing planes relative to the camera.]
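The dot-product test above can be sketched in a few lines; `is_back_facing` is an illustrative name:

```python
# A face is back-facing when its normal makes an angle greater than
# 90 degrees with the direction from the face toward the camera,
# i.e. when the dot product is negative.
def is_back_facing(normal, face_point, camera_pos):
    view = [c - p for c, p in zip(camera_pos, face_point)]  # face -> camera
    return sum(n * v for n, v in zip(normal, view)) < 0

cam = (0, 0, -5)
print(is_back_facing((0, 0, -1), (0, 0, 0), cam))  # False: faces the camera
print(is_back_facing((0, 0, 1), (0, 0, 0), cam))   # True: faces away
```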

Painter’s algorithm

• Sort the planes in order like 2-D given a camera position

• Paint the furthest one first and then the closer one

• Problem: how to handle intersecting planes?

• Possible solution: break each intersecting plane into two smaller ones

Z-buffer algorithm

• For each pixel in the display plane, assign a memory space with a large initial value (maximum depth)

• When an object is rendered to that pixel’s position, compare its depth with the stored depth

• The pixel will only be rendered if its depth is smaller than the stored value, and the stored value is then updated
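The three steps above can be sketched as a minimal z-buffer; the buffer size and the `plot` helper are illustrative:

```python
# Minimal z-buffer: each pixel keeps the depth of the nearest fragment
# seen so far; a new fragment is drawn only if it is closer.
W, H = 4, 4
depth = [[float("inf")] * W for _ in range(H)]   # maximum initial depth
color = [[None] * W for _ in range(H)]

def plot(x, y, z, c):
    if z < depth[y][x]:          # closer than what is stored?
        depth[y][x] = z          # update the stored depth
        color[y][x] = c          # and render the fragment

plot(1, 1, 5.0, "far")
plot(1, 1, 2.0, "near")    # overwrites: closer than 5.0
plot(1, 1, 9.0, "hidden")  # ignored: farther than 2.0
print(color[1][1])  # near
```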

Bitmap surface representation

• Some surface textures can only be described by a bitmap, not by object geometry

• Assume the texture is represented by a bitmap with u-v coordinate

• Can be mapped to any surface in WCS by complicated 3-D mathematical transforms

• The texture map defines the background ambient color and additional effects are added by superposition principle

Scaling problem

• A distance from the camera at which the texture map and the object plane align must be defined

• When the plane is very close to the camera, the entire plane may be mapped to only 1-2 pixels of the texture

• Similarly, when the plane is very far away, the entire texture will be scaled down to a few pixels

Example: cylindrical mapping

• Assume a texture map with U-V coordinates is mapped onto a cylinder with radius r and height h

• A point P_i(x_i, y_i, z_i) on the cylinder is mapped to T(u, v) in the texture space

[Figure: cylinder of radius r and height h with point P(x_i, y_i, z_i); the texture map spans u, v ∈ [0, 1].]

• Vertical coordinate: v = y_i / h

• Horizontal coordinate: the angle around the axis is θ = cos⁻¹(z_i / r), and the portion of the full arc gives

u = θ / (2π) = cos⁻¹(z_i / r) / (2π)

• Note: reflection of nearby objects can be considered as texture mapping from camera screen
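The cylindrical mapping can be sketched as below; `cylinder_uv` is an illustrative name, and `atan2` is used instead of cos⁻¹ so the full 0..2π range is covered (cos⁻¹ alone cannot distinguish x > 0 from x < 0):

```python
# Cylindrical texture mapping sketch: v from height, u from the angle
# around the central axis, normalized to the [0, 1] texture range.
import math

def cylinder_uv(p, r, h):
    x, y, z = p
    v = y / h                                # vertical coordinate
    theta = math.atan2(x, z) % (2 * math.pi) # angle around the axis
    u = theta / (2 * math.pi)                # portion of the full arc
    return u, v

# Point on the cylinder at half height, on the +z side: u = 0, v = 0.5.
print(cylinder_uv((0.0, 1.0, 1.0), 1.0, 2.0))  # (0.0, 0.5)
```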

Further info

Alan Watt, 3D Computer Graphics, 3rd Ed., Addison-Wesley, 2000

Olin Lathrop, The Way Computer Graphics Works, John Wiley & Sons, 1997, http://www.embedinc.com/book/index.htm

Ray-tracing examples: http://www.cse.buffalo.edu/pub/WWW/povray/