
Linear Algebra

R. J. Renka

Department of Computer Science & Engineering


University of North Texas

02/01/2010



Introduction

For n ≥ 1, denote by R^n the Euclidean space of ordered n-tuples of
real numbers — an n-dimensional linear space (vector space). In
the study of 3-D computer graphics, we are primarily interested in
the case n = 3, but 2-D applications are also of interest, and
examples are more easily depicted in the x-y plane. When we get
to the discussion of transformations, we will use homogeneous
coordinates in which points are represented by elements of R^4.
Most operations in linear algebra are as easily understood for one
value of n as another. (The vector cross product is an exception.)
All expressions involving vectors and matrices have both a
geometric and an algebraic meaning. Euclidean geometry provides
an intuitively simple visual interpretation, and the algebra enables
us to prove theorems and implement matrix-vector operations.
Standard notation for linear algebra uses lowercase Latin letters for
vectors, uppercase letters for matrices, and Greek letters for scalars.
Points and Vectors in R^n
A Euclidean vector is a geometric object that has a magnitude and
direction. It is depicted by an arrow with an initial point and a
terminal point. With a fixed Cartesian coordinate system, a point
is represented by an element of R^n. We may identify the directed
line segment from the origin to the point with the same element of
R^n. We will assume here that vectors are always anchored at the
origin and are therefore in 1-1 correspondence with points and
n-tuples of coordinates. The directed line segment from point a to
point b is then a translation of the vector b − a.
Vector Addition: Vectors are added and subtracted componentwise:
(a1, a2, a3) + (b1, b2, b3) = (a1 + b1, a2 + b2, a3 + b3).
Geometrically, addition corresponds to a translation so that the
terminal point of one coincides with the initial point of the other
(either one by commutativity) — the parallelogram rule. Note that
negation reverses the direction (the sign of every component), and
subtraction is equivalent to adding the negative. The zero vector 0
is the additive identity.
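
As a minimal sketch of the componentwise operations (plain Python;
the helper names vadd and vsub are our own, not from any library):

    # Componentwise vector addition and subtraction on tuples.
    def vadd(a, b):
        return tuple(ai + bi for ai, bi in zip(a, b))

    def vsub(a, b):
        return tuple(ai - bi for ai, bi in zip(a, b))

    a = (1.0, 2.0, 3.0)
    b = (4.0, 5.0, 6.0)
    print(vadd(a, b))   # (5.0, 7.0, 9.0)
    print(vsub(b, a))   # (3.0, 3.0, 3.0), the segment from a to b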
Vector Addition
[Figure: the parallelogram rule. Vectors a and b are drawn from a
common initial point; the diagonal from that point is a + b = b + a,
and the other diagonal, from the terminal point of a to the terminal
point of b, is b − a.]


Linear Combination of Vectors
Scalar Multiplication is defined componentwise:
αv = (αv1 , αv2 , αv3 ) for v = (v1 , v2 , v3 ).
A linear combination of a set of k vectors v1, v2, . . . , vk in R^n is a
vector v in R^n obtained by scaling and adding:

    v = α1 v1 + α2 v2 + · · · + αk vk

for scalars αi ∈ R. The set of all such linear combinations is the
span of the vectors.
Defn A set of vectors is linearly independent if and only if the only
linear combination that results in the zero vector is the trivial one
in which all scalars are zero. Any set of n linearly independent
vectors is a basis for R^n, and k linearly independent vectors in R^n,
k < n, span a k-dimensional subspace of R^n.
Geometrically, n linearly dependent vectors are collinear if n = 2
and coplanar if n = 3.
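
As a small illustration (the helper linear_combination is our own,
assuming all vectors have the same length n):

    # v = alpha_1 v_1 + ... + alpha_k v_k, computed componentwise.
    def linear_combination(scalars, vectors):
        n = len(vectors[0])
        return tuple(sum(a * v[i] for a, v in zip(scalars, vectors))
                     for i in range(n))

    # 2 e1 + 3 e2 - 1 e3 = (2, 3, -1): the standard basis spans R^3.
    print(linear_combination((2.0, 3.0, -1.0),
                             ((1, 0, 0), (0, 1, 0), (0, 0, 1))))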
Euclidean Norm
Defn The magnitude or length of a vector v = (v1, v2, v3) is the
Euclidean norm

    ‖v‖ ≡ √(v1² + v2² + v3²).

It satisfies the following three properties, which define a norm in
general.
1. ‖v‖ ≥ 0, and ‖v‖ = 0 ⇔ v = 0
2. ‖αv‖ = |α| ‖v‖ for all α ∈ R, v ∈ R^n
3. ‖u + v‖ ≤ ‖u‖ + ‖v‖ for all u, v ∈ R^n
Geometrically, scaling vector v by scalar α scales the length by |α|
and reverses the direction if α is negative.
The distance between points p1 and p2 is the norm of the
difference vector, ‖p2 − p1‖.
Defn The standard basis vectors are the axis-aligned unit vectors
(length-1 vectors) e1 = (1, 0, 0), e2 = (0, 1, 0), and e3 = (0, 0, 1).
Normalization
A nonzero vector v is normalized to a unit vector u by scaling it by
the reciprocal of its length:

    u = (1/‖v‖) v = v/‖v‖.

By the second property of ‖ · ‖,

    ‖u‖ = (1/‖v‖) ‖v‖ = 1.
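
A sketch of both operations (the names norm and normalize are our
own; the zero-vector guard is a practical detail, not part of the
definition):

    import math

    # Euclidean norm: square root of the sum of squared components.
    def norm(v):
        return math.sqrt(sum(vi * vi for vi in v))

    # Scale a nonzero vector to unit length.
    def normalize(v):
        length = norm(v)
        if length == 0.0:
            raise ValueError("cannot normalize the zero vector")
        return tuple(vi / length for vi in v)

    u = normalize((3.0, 0.0, 4.0))   # (0.6, 0.0, 0.8)
    print(norm(u))                   # 1.0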

The reason for normalizing is to separate the direction from the
magnitude. A surface normal is a vector perpendicular to (and
defining) a tangent plane at a point of a surface. This normal
direction is needed to compute the direction of specular reflection
of light from a point source at the surface point. In the case of a
triangle mesh surface, the normal at a vertex is defined as a
weighted average of triangle normals, where the triangle normals
must be unit vectors. Note that norm, normal, and normalized are
three different terms.
Dot Product

Defn The dot product (scalar product or inner product) of vectors
u = (u1, u2, u3) and v = (v1, v2, v3) is the scalar

    ⟨u, v⟩ = u1 v1 + u2 v2 + u3 v3.

It may also be denoted by u · v or, using matrix multiplication on
column vectors, by u^T v. It satisfies the following properties,
which define an inner product in general.
1. ⟨v, v⟩ ≥ 0, and ⟨v, v⟩ = 0 ⇔ v = 0
2. ⟨u, v⟩ = ⟨v, u⟩ for all u, v ∈ R^n
3. ⟨αu, v⟩ = α⟨u, v⟩ and ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩ ∀ u, v, w
Note that, by symmetry (property 2), linearity (property 3) applies
to both arguments, and the dot product is therefore said to be
bilinear.
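
A one-line sketch of the definition (the helper name dot is our own):

    # <u, v> = u1 v1 + u2 v2 + u3 v3 (works for any n).
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    print(dot((1, 2, 3), (4, 5, 6)))   # 32
    print(dot((4, 5, 6), (1, 2, 3)))   # 32, by symmetry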



Dot Product Geometry

Theorem

    ⟨u, v⟩ = ‖u‖ ‖v‖ cos θ,

where θ is the angle between u and v treated as directed line
segments. Note that, since cos(0) = 1, we have the corollary

    ⟨u, u⟩ = ‖u‖²,

which, of course, follows immediately from the definitions. Also,
we can compute the angle between vectors from

    θ = cos^{-1}( ⟨u, v⟩ / (‖u‖ ‖v‖) ).

Defn A pair of vectors u and v are orthogonal if ⟨u, v⟩ = 0.
By the theorem, nonzero vectors are orthogonal if and only if they
are perpendicular.
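
A sketch of the angle computation, reusing the dot and norm helpers
from the earlier sketches; the clamp guards against rounding drift
slightly outside [-1, 1]:

    import math

    def angle(u, v):
        c = dot(u, v) / (norm(u) * norm(v))
        return math.acos(max(-1.0, min(1.0, c)))

    print(math.degrees(angle((1, 0, 0), (1, 1, 0))))   # 45.0
    print(dot((1, 0, 0), (0, 1, 0)))                   # 0: orthogonal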
Trigonometry

[Figure: right triangle with legs a (adjacent to θ) and b (opposite θ)
and hypotenuse c.]

    cos(θ) = a/c,   sin(θ) = b/c,
    cos²(θ) + sin²(θ) = (a² + b²)/c² = 1.
The angle-sum formulas can be obtained by equating real and
imaginary parts in the following (the double angle formulas are the
special case φ = θ).

    e^{i(θ+φ)} = cos(θ + φ) + i sin(θ + φ)
               = e^{iθ} e^{iφ} = (cos(θ) + i sin(θ))(cos(φ) + i sin(φ))
               = cos(θ) cos(φ) − sin(θ) sin(φ)
                 + i (cos(θ) sin(φ) + sin(θ) cos(φ))
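
A quick numerical check of the derivation (the values of theta and
phi are arbitrary):

    import cmath, math

    theta, phi = 0.7, 0.3
    z = cmath.exp(1j * (theta + phi))
    # Real part equals cos(theta)cos(phi) - sin(theta)sin(phi):
    print(z.real, math.cos(theta)*math.cos(phi) - math.sin(theta)*math.sin(phi))
    # Imaginary part equals cos(theta)sin(phi) + sin(theta)cos(phi):
    print(z.imag, math.cos(theta)*math.sin(phi) + math.sin(theta)*math.cos(phi))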



Sine and Cosine Graphs

[Figure: graphs of the sine and cosine functions.]

Note that cos is an even function (cos(−θ) = cos(θ)), and sin is an
odd function (sin(−θ) = −sin(θ)).



Dot Product Application

[Figure: vector v at angle θ to a unit vector u, with its orthogonal
projection ⟨v, u⟩u along u.]
The component of v in the direction of unit vector u is ⟨v, u⟩ =
‖v‖ cos θ. Note that, for u = ej, the component is vj. Don't
confuse the scalar component ⟨v, u⟩ with the vector ⟨v, u⟩u in the
direction of u.
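
A sketch distinguishing the two quantities, reusing the dot helper
from the earlier sketch (u must be a unit vector):

    # Scalar component of v along unit vector u: <v, u>.
    def scalar_component(v, u):
        return dot(v, u)

    # Vector projection of v onto u: <v, u> u.
    def vector_projection(v, u):
        c = scalar_component(v, u)
        return tuple(c * ui for ui in u)

    u = (1.0, 0.0, 0.0)                      # u = e1
    print(scalar_component((3, 4, 0), u))    # 3.0, the first component
    print(vector_projection((3, 4, 0), u))   # (3.0, 0.0, 0.0)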



Matrix Determinant
A square matrix A of order n is an n by n array of real numbers,
where aij denotes the element in row i, column j for i, j = 1, . . . , n.
The determinant of A can be defined recursively as follows. For
n = 2,

    det(A) = a11 a22 − a12 a21,

and, for n > 2,

    det(A) = a11 det(A1) − a12 det(A2) + · · · + (−1)^{n+1} a1n det(An),

where Aj is the order-(n−1) matrix obtained by removing row 1 and
column j from A. The signs alternate as the top row is traversed.
Some of the properties are as follows.
det(A^T) = det(A)
det(AB) = det(A) det(B)
det(I) = 1
det(A^{-1}) = 1/det(A)
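
A direct transcription of the recursive definition (fine for the
small matrices used here, though the cost grows factorially with n):

    # Cofactor expansion along the top row; A is a list of rows.
    def det(A):
        n = len(A)
        if n == 1:
            return A[0][0]
        if n == 2:
            return A[0][0] * A[1][1] - A[0][1] * A[1][0]
        total = 0.0
        for j in range(n):
            # Remove row 1 and column j+1 (0-based j) to form A_j.
            minor = [row[:j] + row[j+1:] for row in A[1:]]
            total += (-1) ** j * A[0][j] * det(minor)
        return total

    print(det([[1, 2], [3, 4]]))                    # -2
    print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))   # 24.0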
Vector Cross Product
Defn The vector cross product of vectors u and v in R^3 is

    u × v = det [ e1  e2  e3 ]
                [ u1  u2  u3 ]
                [ v1  v2  v3 ]

          = e1 (u2 v3 − u3 v2) − e2 (u1 v3 − u3 v1) + e3 (u1 v2 − u2 v1)

          = (u2 v3 − u3 v2,  u3 v1 − u1 v3,  u1 v2 − u2 v1),

where, with an abuse of notation, we have applied the definition of
determinant to something other than a matrix. The cross product
operator has the following properties.
v × u = −(u × v)
u × (v × w) = ⟨u, w⟩v − ⟨u, v⟩w
(αu) × v = α(u × v) and u × (v + w) = (u × v) + (u × w)
It follows from the above properties that the cross product is
neither commutative nor associative. It is, however, bilinear.
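
A sketch of the componentwise formula (the helper name cross is our
own):

    # u x v in R^3, from the determinant expansion above.
    def cross(u, v):
        return (u[1]*v[2] - u[2]*v[1],
                u[2]*v[0] - u[0]*v[2],
                u[0]*v[1] - u[1]*v[0])

    e1, e2 = (1, 0, 0), (0, 1, 0)
    print(cross(e1, e2))   # (0, 0, 1) = e3
    print(cross(e2, e1))   # (0, 0, -1): anticommutative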
Vector Cross Product Geometry

Theorem

    u × v = ‖u‖ ‖v‖ sin θ n,

where θ is the smaller angle between u and v treated as directed
line segments, and n is a unit vector in the direction orthogonal to
the plane of u and v with sense defined by the right-hand rule (in
our right-handed coordinate system).
For nonzero vectors, u × v = 0 if and only if u and v are collinear
(θ = 0 or θ = π). Otherwise u × v has direction normal to the plane
of u and v.
The cross product can be defined for vectors in R^2 by using only
the third component, a scalar:

    (u × v)_z = u1 v2 − u2 v1 for u, v ∈ R^2.



Right-hand Rule

[Figure: the right-hand rule giving the sense of u × v.]



Vector Cross Product Applications

Problem: Given a triangle with vertices v1, v2, and v3 in R^3, find
a unit normal vector n to the triangle.
Solution: n is computed by normalizing the vector

    u = (v2 − v1) × (v3 − v1)

to a unit vector:

    n = u/‖u‖.
Any cyclic permutation of the three vertices will produce the same
vector u. This is obvious from the geometry and is easily verified
algebraically. Reversing the order, however, reverses the sign of u
and n. When computing surface normals for lighting calculations
with a triangle mesh surface, care must be taken to consistently
order the triangle vertices.
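
A sketch reusing the vsub, cross, and normalize helpers from the
earlier sketches:

    # Unit normal to triangle (v1, v2, v3); sense follows the
    # right-hand rule applied to the vertex order.
    def triangle_unit_normal(v1, v2, v3):
        u = cross(vsub(v2, v1), vsub(v3, v1))
        return normalize(u)

    # CCW order in the x-y plane gives the +z normal.
    print(triangle_unit_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))
    # (0.0, 0.0, 1.0)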



Vector Cross Product Applications continued
Problem: Compute the area A of the triangle with vertices v1, v2,
and v3 in R^3.
Solution: Denote the triangle base by b = ‖v2 − v1‖. Then the
height is h = ‖v3 − v1‖ sin θ, where θ is the angle at vertex v1,
and

    A = (1/2) h b
      = (1/2) (‖v3 − v1‖ sin θ) ‖v2 − v1‖
      = (1/2) ‖(v2 − v1) × (v3 − v1)‖.
Don’t confuse the cross product (a vector) with its norm (a scalar).
The above argument illustrates a general technique for solving
linear analytic geometry problems. We draw a diagram involving
line segments and angles, express the solution in terms of lengths
of line segments and sines and cosines of angles, and then convert
the solution to norms, vector cross products (in place of sines),
and dot products (where there are cosines).
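
The final formula as a sketch, again reusing the earlier helpers:

    # Area = (1/2) ||(v2 - v1) x (v3 - v1)||.
    def triangle_area(v1, v2, v3):
        return 0.5 * norm(cross(vsub(v2, v1), vsub(v3, v1)))

    print(triangle_area((0, 0, 0), (1, 0, 0), (0, 1, 0)))   # 0.5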
Vector Cross Product Applications continued

Problem: Given points p, p1, and p2 in R^3, find the distance
d from the point p to the line defined by p1 and p2, assumed
to be distinct.
Solution: Let θ be the angle at p1 in the triangle defined by
the three points. Then d is the height of the triangle relative to
the base through p1 and p2, and

    d = ‖p − p1‖ sin θ
      = ‖p − p1‖ ‖(p2 − p1) × (p − p1)‖ / (‖p2 − p1‖ ‖p − p1‖)
      = ‖(p2 − p1) × (p − p1)‖ / ‖p2 − p1‖.
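
As a sketch (reusing the earlier helpers; p1 and p2 must be
distinct):

    # Distance from p to the line through p1 and p2.
    def point_line_distance(p, p1, p2):
        w = vsub(p2, p1)
        return norm(cross(w, vsub(p, p1))) / norm(w)

    print(point_line_distance((0, 1, 0), (0, 0, 0), (1, 0, 0)))   # 1.0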



Vector Cross Product Applications continued
Problem: Given points p, p1, and p2 in R^2, locate p relative to
the line (pair of half-planes) defined by p1 and p2, assumed to be
distinct.
Solution: p is in the left half-plane as viewed from p1 toward p2 if
and only if the vertices of triangle (p1, p2, p) are CCW-ordered; i.e.,

    [(p2 − p1) × (p − p1)]_z ≥ 0,

with the inequality made strict if the line is to be excluded from
the half-plane.
Note that three half-plane tests can be used to determine whether
or not a point is in a particular triangle. The test also serves as the
basis for an algorithm that locates a point relative to a planar
triangulation.
What is the test for p in the half-plane that has p2 − p1 as
outward normal at p1 ?
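
A sketch of the half-plane test in R^2 (the function name and the
include_line flag are our own):

    # z-component of (p2 - p1) x (p - p1) for 2-D points.
    def left_of_line(p, p1, p2, include_line=True):
        z = ((p2[0] - p1[0]) * (p[1] - p1[1])
             - (p2[1] - p1[1]) * (p[0] - p1[0]))
        return z >= 0.0 if include_line else z > 0.0

    # Viewed from (0,0) toward (1,0), the point (0,1) lies to the left.
    print(left_of_line((0, 1), (0, 0), (1, 0)))   # True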
Vector Identities

Identity                                Description
u + v = v + u                           Commutativity of addition
u − v = u + (−v)                        Definition of subtraction
(u + v) + w = u + (v + w)               Associativity of addition
α(βv) = (αβ)v                           Associativity of scalar multiplication
α(u + v) = αu + αv                      Distributive property
⟨u, v⟩ = ⟨v, u⟩                         Symmetry of dot product
⟨αu, v⟩ = α⟨u, v⟩                       Linearity of dot product
⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩            Linearity of dot product
‖v‖ = √⟨v, v⟩                           Definition of norm



Vector Identities continued

Identity                                Description
‖v‖ ≥ 0                                 Nonnegativity of norm
‖αv‖ = |α| ‖v‖                          Norm of scalar times vector
‖u + v‖ ≤ ‖u‖ + ‖v‖                     Triangle inequality
(αv) × v = 0                            Collinear vectors
u × v = −(v × u)                        Skew-symmetry (antisymmetry)
(αu) × v = α(u × v)                     Linearity of cross product
(u + v) × w = u × w + v × w             Linearity of cross product
u × (v × w) = ⟨u, w⟩v − ⟨u, v⟩w         Vector triple product
⟨u, v × w⟩ = det(u, v, w)               Scalar triple product: u, v, w as rows
  = det(v, w, u) = det(w, u, v)           of a determinant, invariant under
                                          cyclic permutation



Matrices

An m by n matrix A ∈ R^{m×n} is an m by n array of real numbers aij,
surrounded by parentheses or square brackets, for row indices
i = 1, . . . , m and column indices j = 1, . . . , n. If m = n, A is said
to be square and have order n.
The (main) diagonal of a square matrix A is the ordered n-tuple
diag(A) = (a11, a22, . . . , ann). A diagonal matrix D is a square
matrix in which all off-diagonal elements are zeros. The order-n
identity matrix I = In is defined by Iij = δij, where δij is the
Kronecker delta function

    δij = 1 if i = j, and 0 if i ≠ j.

Scalar multiplication is defined componentwise: (αA)ij = αaij for
all i and j.



Matrix Multiplication

Matrix multiplication is defined by

    cij = Σ_{k=1}^{m} aik bkj

for i = 1, . . . , l and j = 1, . . . , n, where

    A ∈ R^{l×m}, B ∈ R^{m×n} ⇒ C = AB ∈ R^{l×n}.

Matrix multiplication is associative but not commutative:
(AB)C = A(BC) provided the dimensions are consistent, but for
non-square matrices, AB may be well-defined while BA is not. The
identity matrix is the multiplicative identity: A In = A and Im A = A
for A ∈ R^{m×n}.
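
A direct transcription of the definition on nested lists (no error
checking beyond the dimension match; the helper name matmul is our
own):

    # C = AB for A (l x m) and B (m x n).
    def matmul(A, B):
        m, n = len(B), len(B[0])
        assert all(len(row) == m for row in A)
        return [[sum(A[i][k] * B[k][j] for k in range(m))
                 for j in range(n)]
                for i in range(len(A))]

    A = [[1, 2], [3, 4]]
    I = [[1, 0], [0, 1]]
    print(matmul(A, I))   # [[1, 2], [3, 4]]: I is the identity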



Matrix Multiplication continued

For the purpose of matrix multiplication, vectors are column
vectors (n by 1 matrices), and scalars are 1 by 1 matrices. Thus a
matrix-vector product Av = u is a vector, where v is necessarily in
R^n, and u is in R^m for A ∈ R^{m×n}. Also, αv = vα, where the first
expression involves scalar multiplication, while the second may be
interpreted as matrix multiplication. Note that, while
(AB)v = A(Bv), the order of operations can make a very
significant difference computationally, as the sketch below
illustrates.
Matrix multiplication is a linear operation:
A(αv) = αAv
A(u + v) = Au + Av
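
A small check of the cost remark and of associativity, reusing
matmul from the previous sketch. For n by n matrices, (AB)v forms
AB first at O(n^3) cost, while A(Bv) is two matrix-vector products
at O(n^2):

    # Matrix-vector product Av as a list.
    def matvec(A, v):
        return [sum(aij * vj for aij, vj in zip(row, v)) for row in A]

    A = [[1, 2], [3, 4]]
    B = [[0, 1], [1, 0]]
    v = [5, 6]
    print(matvec(matmul(A, B), v))   # [16, 38]
    print(matvec(A, matvec(B, v)))   # [16, 38]: same result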



Matrix Transpose

The transpose of A ∈ R^{m×n}, denoted A^T, is the n by m matrix
obtained by interchanging rows and columns in A: (A^T)ij = aji for
i = 1, . . . , n, j = 1, . . . , m.
Defn: A square matrix A is symmetric if A^T = A.
A diagonal matrix is symmetric. The transpose operator has the
following properties.
(A^T)^T = A for any matrix A
The transpose of a product is the product of the transposed
matrices in reverse order: (AB)^T = B^T A^T
The transpose of a column vector is a row vector (a 1 by n matrix)
Note that, by our conventions, vA is ill-defined, but (Av)^T =
v^T A^T is a well-defined row vector. Also, u^T v is the scalar
product, while u v^T is a rank-1 matrix.
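
A sketch contrasting the two products (plain Python lists):

    u, v = [1, 2, 3], [4, 5, 6]

    # u^T v: the scalar (inner) product.
    print(sum(ui * vi for ui, vi in zip(u, v)))   # 32

    # u v^T: a rank-1, 3 by 3 matrix (outer product).
    print([[ui * vj for vj in v] for ui in u])
    # [[4, 5, 6], [8, 10, 12], [12, 15, 18]]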
Matrix Inverse

The inverse of a square matrix A, if it exists, is a square matrix
A^{-1} that satisfies

    A^{-1} A = A A^{-1} = I.

Either equation is sufficient to define the inverse, but a non-square
matrix may have only a left inverse or only a right inverse.
The inverse has the following properties.
(A^{-1})^{-1} = A
(αA)^{-1} = (1/α) A^{-1}
(AB)^{-1} = B^{-1} A^{-1}
(A^T)^{-1} = (A^{-1})^T = A^{-T}
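
For order 2 there is a closed form that also illustrates
det(A^{-1}) = 1/det(A); a sketch (the helper inverse2 is our own),
reusing matmul from the earlier sketch to verify:

    # Inverse of a 2 by 2 matrix; requires det(A) != 0.
    def inverse2(A):
        d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        if d == 0:
            raise ValueError("matrix is singular")
        return [[ A[1][1] / d, -A[0][1] / d],
                [-A[1][0] / d,  A[0][0] / d]]

    A = [[4, 7], [2, 6]]
    print(matmul(A, inverse2(A)))   # identity, up to rounding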

