
The earliest definition of a vector usually encountered is that a vector is a thing possessing length and direction. This is the "arrow in space" view, with length naturally being the length of the arrow and direction being the direction the arrow is pointing. We will denote vectors by writing their name in boldface (e.g., v and a) or by drawing a little arrow above their name (e.g., $\vec{v}$ and $\vec{a}$).

Some examples of such vectors include the velocity of an object at a particular time and the acceleration of an object at a particular time. (For now, we are not considering vector fields; that is, our vectors will not be functions of time or position.)

Perhaps the most fundamental of vectors are those describing relative position or displacement: If A and B are two points in space (i.e., positions in space), then the vector from A to B, denoted $\vec{AB}$, is simply the arrow¹ starting at point A and ending at point B. $\vec{AB}$ gives you the direction and distance to move from position A to position B (hence the term "relative position").

We will use displacement vectors as the basic model for "traditional" vectors. Almost all other traditional vectors in physics (velocities, accelerations, forces, etc.) are derived from displacement vectors. For example, the velocity v of an object at a given position p is the limit
$$\mathbf{v} \;=\; \lim_{\Delta t \to 0} \frac{\vec{pq}}{\Delta t}$$
where q is the position of the object $\Delta t$ time units after being at p. It is through these derivations that the properties we derive for displacement (relative position) vectors can be shown to hold for all the traditional vectors in physics.
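To make the limit concrete, here is a minimal numerical sketch (Python with NumPy, my tooling choice rather than anything in these notes; the path below is made up): we sample a particle's position along a path and watch the displacement quotient $\vec{pq}/\Delta t$ settle down as $\Delta t$ shrinks.

```python
import numpy as np

def position(t):
    # Hypothetical smooth path of a particle in the plane; any smooth path works.
    return np.array([np.cos(t), np.sin(t)])

p = position(1.0)
for dt in (0.1, 0.01, 0.001):
    q = position(1.0 + dt)
    print(dt, (q - p) / dt)   # displacement vector pq divided by dt
# The quotients approach the true velocity (-sin 1, cos 1), about (-0.8415, 0.5403).
```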

By the way, throughout this discussion, we are assuming that the points in space are points

in a Euclidean space. We will discuss exactly what this means later (in another chapter). For

now, assume high school geometry. In particular:

1. Any two points can be connected by a straight line segment (totally contained in the space) whose length is the distance between the two points.

2. The angles of each triangle add up to $\pi$ (or 180 degrees).

3. The laws of similar triangles hold.

4. Parallelograms are well defined.

¹ Officially, $\vec{AB}$ is a directed line segment.


Keep in mind that there are different Euclidean spaces. Two different flat planes, for example, are two different Euclidean spaces.

Exercise 2.1: Give several reasons why a sphere² is not a Euclidean space.

² Terminology: A sphere is just the surface of a ball. It does not contain the inside of the ball.

2.1 Fundamental Defining Concepts

Fundamental Geometrically Defined Concepts

The Two Fundamental Measurable Quantities

Our goal is to develop the fundamental theory of vectors as things that are completely defined by "length" and "direction", using the set of vectors describing relative position in some Euclidean space as a basic model. Keep in mind the requirement that length and direction completely defines a vector here. So if we have two arrows, both pointing in the same direction and of the same length, then they are the same vector.

Exercise 2.2: Let A, B, C and D be points in space. How does the concept that "parallelograms are well defined" allow us to decide when $\vec{AB} = \vec{CD}$?

With traditional vectors, there are two fundamental measurable quantities to work with:

Length: The length (also called the magnitude or norm) of a vector v is denoted by $\|\mathbf{v}\|$ or $|\mathbf{v}|$ or $v$, depending on the whim of the author.

Angle Between Two Vectors: While it is intuitively clear what "direction" means, it is difficult to quantify. What can be measured (physically, in theory) is the smallest angle between two vectors u and v visualized with their starting points touching. We will typically denote this angle by $\theta$ or $\theta(\mathbf{u},\mathbf{v})$, and we will use radians or degrees to measure this angle as seems convenient. Do note that
$$\theta(\mathbf{u},\mathbf{v}) = \theta(\mathbf{v},\mathbf{u}) .$$

It should be noted that "measurable" means that we can (conceptually, at least) find these quantities by directly measuring the length and the angle with fairly basic real-world tools. Consequently, we automatically have
$$\|\mathbf{v}\| \ge 0 \qquad\text{and}\qquad 0 \le \theta \le \pi .$$
(We may later relax this restriction on $\theta$.)

Units for Length and Types of Vectors

In many basic developments of traditional vector theory, the length of a vector, $\|\mathbf{v}\|$, is treated as a simple, unadorned real number. In fact, however, any vector length measurement is inherently in terms of some measurement unit such as feet, meters, miles/hour, meters/second², newtons, etc. Even if we don't explicitly state the unit of length for a vector we sketch on paper, we must have some notion of what "one unit of length" is, and that unnamed quantity is our basic unit of length (give it a name if you wish).

What's more, we need to realize that there are different "types" of vectors according to what is being measured. For any vector describing displacement from one position to another, "length" really means distance in the normal sense, and can be measured in feet, meters, furlongs, and so forth. This is one type of vector. On the other hand, the length of a vector describing the velocity of an object at a given time is really the speed of that object, and can be measured in feet/second, meters/hour, furlongs/fortnight, and so on. The set of velocity vectors makes up another type of vector. And we also have acceleration vectors, force vectors, flux vectors, ....

Subsequent Geometric Concepts

All of our traditional vector computations resulting in scalar values will be based on "length" and "angle". Note that, right off, we can use the length to define one very special vector and a set of vectors that we may find useful:

1. The zero vector, denoted by 0 or $\vec{0}$, is the vector of length zero. (One can argue that there are infinitely many zero vectors, each corresponding to a different direction, but that is being silly. The zero vector, in our model of relative position, corresponds to just not moving from a point. For that and other practical reasons, we take the position that there is only one zero vector in each set of vectors.)

2. A unit vector is any vector $\vec{v}$ of length one (i.e., $\|\vec{v}\| = 1$). If it is significant that this is a unit vector, then the vector may be denoted by $\hat{v}$ or $\hat{\mathbf{v}}$, instead of $\vec{v}$. We will find various uses for unit vectors.

And with the concept of angles between vectors, we can introduce terminology for when vectors point in special directions relative to each other. Most of this terminology is what we use every day.

Parallelness: Two vectors u and v are said to be parallel if either they point in the same direction or they point in opposite directions. Obviously,

u and v point in the same direction $\iff \theta(\mathbf{u},\mathbf{v}) = 0$

and

u and v point in opposite directions $\iff \theta(\mathbf{u},\mathbf{v}) = \pi$.

Orthogonality:

(a) A pair of vectors {u, v} is said to be orthogonal
$\iff$ either $\theta(\mathbf{u},\mathbf{v}) = \pi/2$ or $\mathbf{u} = \mathbf{0}$ or $\mathbf{v} = \mathbf{0}$.

(Thus, we've defined orthogonality so that 0 is automatically orthogonal to every vector. The reason we include this in our definition is simply that it will simplify statements and formulas later.)

On occasion, we will indicate that {u, v} is orthogonal by writing $\mathbf{u} \perp \mathbf{v}$.


[Figure 2.1: The vector sum, based on the well-defined parallelogram having u and v as two sides.]

(b) A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_K\}$ is said to be orthogonal
$\iff$ every distinct pair in the set is orthogonal.

With both length and angle we can define

Orthonormality: A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_K\}$ is said to be orthonormal
$\iff$ it is an orthogonal set of unit vectors.

Using orthogonal or orthonormal sets of vectors will greatly simplify life.

Fundamental Geometrically Defined Algebraic Operations

There are two (to begin with):

Scalar Multiplication of a Vector: Let $\alpha$ be a scalar³ and let v be a vector. The product of $\alpha$ with v, denoted $\alpha\mathbf{v}$ (or $\mathbf{v}\alpha$), is defined as follows:

If $\alpha > 0$, then $\alpha\mathbf{v}$ is the vector of length $\alpha\|\mathbf{v}\|$ pointing in the same direction as v.

If $\alpha < 0$, then $\alpha\mathbf{v}$ is the vector of length $|\alpha|\,\|\mathbf{v}\|$ pointing in the direction opposite of v.

If $\alpha = 0$, then $\alpha\mathbf{v} = \mathbf{0}$.

Along these lines, we automatically have that the division of a vector v by a nonzero scalar $\alpha$ is given by
$$\frac{\mathbf{v}}{\alpha} = \frac{1}{\alpha}\,\mathbf{v} .$$

Vector Addition: Within our model of vectors representing relative position, the addition of $\vec{AB}$ to $\vec{BC}$ naturally corresponds to "go from position A to position B and then go from position B to position C". The result, of course, is that you've gone from position A to position C. So we define the sum of $\vec{AB}$ and $\vec{BC}$ to be $\vec{AC}$, and write
$$\vec{AB} + \vec{BC} = \vec{AC} .$$
Geometrically, this is exactly the same as defining the sum of two vectors u and v using a parallelogram as in figure 2.1.

³ Until otherwise announced, assume all scalars are real numbers.


From the figure (and our assumptions about "Euclidean-ness"), it should be obvious that
$$\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u} .$$
Other properties of addition and scalar multiplication should also be obvious or easily verified.

Exercise 2.3: Using just the definitions, assumptions, pictures, and high school geometry, convince yourself that
$$\alpha(\mathbf{u} + \mathbf{v}) = \alpha\mathbf{u} + \alpha\mathbf{v} \qquad\text{and}\qquad (\alpha + \beta)\mathbf{v} = \alpha\mathbf{v} + \beta\mathbf{v}$$
for arbitrary vectors u and v, and arbitrary scalars $\alpha$ and $\beta$.⁴

⁴ Really convince yourself. Don't just say, "Oh yeah, I know it's true." Be able to convince skeptics.

Exercise 2.4: Convince yourself that vector addition is associative, i.e., that, for any three vectors u, v and w,
$$(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w}) ,$$
at least when the vectors are defined in terms of relative position, as above.

Keep in mind that vector addition requires that the vectors be of the same type. Adding a

velocity vector to an acceleration vector makes no sense.

Decomposing Vectors, Part I (Length and Direction)

Let v be a nonzero (traditional) vector. Corresponding to v is the unit vector u in the direction of v, given by
$$\mathbf{u} = \frac{\mathbf{v}}{\|\mathbf{v}\|} .$$
Note that
$$\|\mathbf{u}\| = \left\|\frac{\mathbf{v}}{\|\mathbf{v}\|}\right\| = \left\|\frac{1}{\|\mathbf{v}\|}\,\mathbf{v}\right\| = \frac{1}{\|\mathbf{v}\|}\,\|\mathbf{v}\| = 1 ,$$
so, as the terminology suggests, it really is a unit vector. Also,
$$\mathbf{v} = 1\,\mathbf{v} = \frac{\|\mathbf{v}\|}{\|\mathbf{v}\|}\,\mathbf{v} = \|\mathbf{v}\|\,\frac{\mathbf{v}}{\|\mathbf{v}\|} = \|\mathbf{v}\|\,\mathbf{u} .$$
Thus, we can decompose (or resolve) a nonzero vector v into the product of its length times the unit vector in the direction of v. This will be useful in a number of situations.
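As a quick numerical sanity check, the decomposition $\mathbf{v} = \|\mathbf{v}\|\,\mathbf{u}$ is two lines. This is a sketch in Python/NumPy (my choice of tooling; the sample vector is made up):

```python
import numpy as np

v = np.array([3.0, 4.0])          # a sample nonzero vector
length = np.linalg.norm(v)        # length of v, here 5
u = v / length                    # unit vector in the direction of v
print(length, u, length * u)      # 5.0, [0.6 0.8], and length * u reproduces v
```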

2.2 Traditional Vector Spaces

Using the four geometric concepts discussed above (length, angle between vectors, scalar multiplication and vector addition), we can now develop the rest of the theory for traditional vectors. And as any good mathematician can tell you, half the work is coming up with the right definitions.

By the way, please note two things:


1. At this point, our theory is completely "component free": we are viewing traditional vectors the way God does. All those component formulas you've mucked about with for years have yet to be derived.

2. Much of the terminology and development that follows will apply to more general, nontraditional sets of vectors. We will use this fact when we generalize our results.

I should also mention that K, N and M will denote arbitrary fixed positive integers throughout the rest of this chapter (unless otherwise indicated).

Linear Combinations and Related Notions

Linear Combinations and Span

A linear combination of vectors $\mathbf{v}_1$, $\mathbf{v}_2$, ... and $\mathbf{v}_N$ is any expression of the form
$$\alpha_1\mathbf{v}_1 + \alpha_2\mathbf{v}_2 + \cdots + \alpha_N\mathbf{v}_N$$
where $\alpha_1$, $\alpha_2$, ... and $\alpha_N$ are scalars. It is normally assumed that the number of terms in a linear combination (N in the above) is finite. However, when we get to more general vector spaces, especially the vector spaces that will be useful in solving partial differential equations, we will start allowing linear combinations with infinitely many terms (we will then also have to worry about convergence issues).

Observe that the subtraction of one vector from another is just a particularly simple linear combination,
$$\mathbf{u} - \mathbf{v} = (1)\mathbf{u} + (-1)\mathbf{v} .$$
The following should be obvious:
$$\mathbf{u} - \mathbf{u} = \mathbf{0}$$
and
$$\mathbf{w} = \mathbf{u} - \mathbf{v} \iff \mathbf{w} + \mathbf{v} = \mathbf{u} .$$

Exercise 2.5: Draw two vectors u and v with their starting points touching. (Make them of different lengths and pointing in different directions!) On this figure, sketch $\mathbf{u} + \mathbf{v}$, $\mathbf{u} - \mathbf{v}$, $\mathbf{v} - \mathbf{u}$ and $2\mathbf{u} + 4\mathbf{v}$.

Some more terminology: Given a set of vectors
$$\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \ldots\}$$
(not necessarily finite), the span of this set is the set of all (finite) linear combinations of these $\mathbf{v}_k$'s.

Linear (In)dependence

We will be interested in "minimal" sets of vectors for generating particular collections of linear combinations. To label when such a set
$$\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \ldots, \mathbf{v}_N\}$$
is or is not "minimal" in this sense, we have the following terminology:

$\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_N\}$ is linearly dependent
$\iff$ one of the $\mathbf{v}_k$'s can be expressed as a linear combination of the others,

and

$\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_N\}$ is linearly independent
$\iff$ $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_N\}$ is not linearly dependent
$\iff$ none of the $\mathbf{v}_k$'s can be expressed as a linear combination of the others.

Our interest will often be in linearly independent sets. They will be the "minimal" sets for constructing linear combinations.

Exercise 2.6: Why can the zero vector, 0, never be in a linearly independent set of vectors?

Alternative definitions for linear dependence and independence (which should really be considered as tests for linear dependence and independence) are described in the next exercises.

Exercise 2.7: Verify the following alternative characterizations:

a: $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_N\}$ is linearly dependent (as defined above) if and only if there are scalars $\alpha_1$, $\alpha_2$, $\alpha_3$, ... and $\alpha_N$, not all zero, such that
$$\alpha_1\mathbf{v}_1 + \alpha_2\mathbf{v}_2 + \cdots + \alpha_N\mathbf{v}_N = \mathbf{0} .$$

b: $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_N\}$ is linearly independent (as defined above) if and only if the only choice for scalars $\alpha_1$, $\alpha_2$, $\alpha_3$, ... and $\alpha_N$ such that
$$\alpha_1\mathbf{v}_1 + \alpha_2\mathbf{v}_2 + \cdots + \alpha_N\mathbf{v}_N = \mathbf{0}$$
is
$$\alpha_1 = \alpha_2 = \cdots = \alpha_N = 0 .$$

Exercise 2.8: Come up with at least one alternative definition/test for the linear independence of $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_N\}$ based on the span of this set and subsets of this set.
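Once components with respect to a basis are available (they are developed below), test (b) of exercise 2.7 becomes mechanical: collect the component tuples as the columns of a matrix and check its rank. A minimal sketch in Python/NumPy (my tooling choice; the helper name and the example vectors are mine):

```python
import numpy as np

def is_linearly_independent(vectors):
    """vectors: list of equal-length component tuples (w.r.t. some basis)."""
    M = np.column_stack(vectors)             # the vectors become the columns of M
    return np.linalg.matrix_rank(M) == len(vectors)

print(is_linearly_independent([(1, 0, 0), (1, 1, 0), (1, 1, 1)]))  # True
print(is_linearly_independent([(1, 2, 3), (2, 4, 6)]))             # False: second = 2 * first
```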

Vector Spaces

Let V be some (big) set of traditional vectors (all are the same type so they can be added together). Then this set is called a vector space (of traditional vectors) if and only if V is "closed with respect to linear combinations". And what does "V is closed with respect to linear combinations" mean? It means that every linear combination of vectors in V is another vector in V. That is, a set V of vectors is a vector space if and only if, for each
$$N , \qquad \mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_N \in V \qquad\text{and}\qquad \alpha_1, \alpha_2, \ldots, \alpha_N ,$$
we have
$$\alpha_1\mathbf{v}_1 + \alpha_2\mathbf{v}_2 + \cdots + \alpha_N\mathbf{v}_N \in V .$$


For example, the set of all displacement vectors generated from points in a single plane is a vector space.

Note that the span of a set of vectors is automatically a vector space. It is often helpful to find minimal sets of vectors that span a given vector space V. Such a set is called a basis for V. That is, a basis for a vector space V is any (finite) set of vectors
$$\mathcal{B} = \{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_N\}$$
(all from V) that satisfies both of the following:

1. $\{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_N\}$ is linearly independent.

2. Each vector in V is a linear combination of vectors from $\mathcal{B}$. That is, if v is any vector in V, then there is a corresponding ordered set of N scalars $(v_1, v_2, \ldots, v_N)$ such that
$$\mathbf{v} = v_1\mathbf{b}_1 + v_2\mathbf{b}_2 + \cdots + v_N\mathbf{b}_N .$$

The $v_k$'s just described are called the components⁵ of v with respect to the basis $\{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_N\}$.

⁵ NOT the coordinates of v! Vectors do NOT have coordinates.

The following three statements are all easily verified. In them,
$$\mathcal{B} = \{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_N\}$$
is a basis for some vector space V, v is some vector in V, and $(v_1, v_2, \ldots, v_N)$ are the components of v with respect to $\mathcal{B}$.

1. The $v_k$'s are unique. That is, given any set of N scalars $\{\alpha_1, \alpha_2, \ldots, \alpha_N\}$, then
$$\alpha_1\mathbf{b}_1 + \alpha_2\mathbf{b}_2 + \cdots + \alpha_N\mathbf{b}_N = \mathbf{v} = v_1\mathbf{b}_1 + v_2\mathbf{b}_2 + \cdots + v_N\mathbf{b}_N$$
if and only if
$$\alpha_1 = v_1 , \quad \alpha_2 = v_2 , \quad\ldots\quad\text{and}\quad \alpha_N = v_N .$$

2. Any set of more than N vectors in V cannot be linearly independent.

3. Any set of fewer than N vectors in V cannot span all of V (i.e., there will be vectors in V that cannot be expressed as linear combinations of vectors from that set).

The last two facts tell us that a set of vectors can be a basis for V only if it is a linearly independent set of exactly N vectors. On the other hand, if we have a linearly independent set of N vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_N\}$, and w is another vector in V, then w must be a linear combination of the $\mathbf{v}_k$'s; otherwise $\{\mathbf{w}, \mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_N\}$ would be a linearly independent set of more than N vectors, contrary to fact 2, above. Consequently, we must have:

4. A set of vectors in V is a basis for V if and only if it is a linearly independent set of exactly N vectors.

The number N of vectors in any basis of a vector space V is called the dimension of that vector space. In applications with traditional vectors (relative positions, velocities, etc.), the dimension is finite, usually 3 (or 2 if we want to simplify things).
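To make "components with respect to a basis" concrete: given the $\mathbf{b}_k$'s in some reference description, finding the components of v means solving a small linear system. A sketch in Python/NumPy (my choice; the basis and the vector are made up):

```python
import numpy as np

# A made-up basis for a 2-dimensional space, written in some fixed reference
# coordinates: b1 = (1, 1), b2 = (1, -1).
B = np.column_stack([(1.0, 1.0), (1.0, -1.0)])

v = np.array([5.0, 1.0])          # the vector whose components we want
comps = np.linalg.solve(B, v)     # solve  v1*b1 + v2*b2 = v
print(comps)                      # [3. 2.]  so  v = 3 b1 + 2 b2
```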

It is often convenient to be able to switch from one basis to another. We will discuss this in great detail later. For now:


Exercise 2.9: Let V be a two-dimensional traditional vector space with basis $\mathcal{B} = \{\mathbf{b}_1, \mathbf{b}_2\}$, and let
$$\mathbf{c}_1 = 2\mathbf{b}_1 + 3\mathbf{b}_2 \qquad\text{and}\qquad \mathbf{c}_2 = \mathbf{b}_1 - \mathbf{b}_2 .$$

a: Is $\mathcal{C} = \{\mathbf{c}_1, \mathbf{c}_2\}$ a basis for V? (Give good reasons for your answer, based on the above facts.)

b: Suppose $\mathbf{v} = 5\mathbf{b}_1 + 4\mathbf{b}_2$. Find v in terms of $\mathbf{c}_1$ and $\mathbf{c}_2$. In other words, find scalars a and b so that $\mathbf{v} = a\mathbf{c}_1 + b\mathbf{c}_2$. What are the components of v with respect to $\mathcal{B}$? What are the components of v with respect to $\mathcal{C}$?

c: Find $\mathbf{b}_1$ and $\mathbf{b}_2$ in terms of $\mathbf{c}_1$ and $\mathbf{c}_2$.

d: Suppose $\mathbf{w} = 5\mathbf{c}_1 + 4\mathbf{c}_2$. Find the components of w with respect to $\mathcal{B}$. Find the components of w with respect to $\mathcal{C}$.
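After working such an exercise by hand, a sketch like the following (Python/NumPy, my choice) lets you check the arithmetic. Working entirely in $\mathcal{B}$-components, $\mathbf{c}_1$ and $\mathbf{c}_2$ become the columns of a matrix M, and the $\mathcal{C}$-components of any vector solve $M\mathbf{x} = \mathbf{v}_{\mathcal{B}}$:

```python
import numpy as np

# B-components of c1 = 2 b1 + 3 b2 and c2 = b1 - b2 (the setup of exercise 2.9):
M = np.column_stack([(2.0, 3.0), (1.0, -1.0)])

v_B = np.array([5.0, 4.0])        # v = 5 b1 + 4 b2, in B-components
v_C = np.linalg.solve(M, v_B)     # the scalars a, b with v = a c1 + b c2
print(v_C)                        # compare with your hand computation
```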

Let $(v_1, v_2, \ldots, v_N)$ be the N-tuple of components of a vector v with respect to some basis $\mathcal{B} = \{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_N\}$. In computations, we often substitute $(v_1, v_2, \ldots, v_N)$ for v, sometimes even writing
$$\mathbf{v} = (v_1, v_2, \ldots, v_N) .$$

This is sloppy. Strictly speaking, it is even wrong. v is a vector (relative position, velocity, etc.) while $(v_1, v_2, \ldots, v_N)$ is an element of $\mathbb{R}^N$. The two are not the same thing, and identifying the two is safe only under the simplest of circumstances. It is certainly not a good idea when there are two different bases at hand. So don't write things like
$$\mathbf{v} = (v_1, v_2, \ldots, v_N)$$
unless you keep reminding yourself that this is really shorthand for
$$\mathbf{v} = v_1\mathbf{b}_1 + v_2\mathbf{b}_2 + \cdots + v_N\mathbf{b}_N .$$

In other words, $(v_1, v_2, \ldots, v_N)$ should be viewed as a convenient description of v with respect to a given basis $\mathcal{B}$. Change the basis, and the corresponding representation for v can change dramatically. (So $(v_1, v_2, \ldots, v_N)$ is a basis-dependent description of v.)

Still, this basis-dependent description can be useful, and is used in basis-independent formulas. This is illustrated in the next exercise, in which you confirm your favorite formulas for vector addition and scalar multiplication.

Exercise 2.10: Let v and w be vectors in a vector space with basis $\mathcal{B} = \{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_N\}$. Let $\alpha$ and $\beta$ be any two scalars and confirm that the components of $\alpha\mathbf{v} + \beta\mathbf{w}$ are
$$(\alpha v_1 + \beta w_1 ,\; \alpha v_2 + \beta w_2 ,\; \ldots ,\; \alpha v_N + \beta w_N)$$
where
$$(v_1, v_2, \ldots, v_N) \qquad\text{and}\qquad (w_1, w_2, \ldots, w_N)$$
are, respectively, the components of v and w with respect to $\mathcal{B}$.


2.3 The Dot Product

Geometric Definition

Now we combine the fundamental geometric notions of the length $\|\mathbf{v}\|$ of a vector v and the angle $\theta(\mathbf{v},\mathbf{w})$ between vectors v and w to define the classic dot product. Throughout this section, all vectors are assumed to be from a single traditional vector space V.

The dot (or scalar or inner) product of two vectors v and w, denoted by either
$$\mathbf{v}\cdot\mathbf{w} \qquad\text{or}\qquad \langle\, \mathbf{v} \mid \mathbf{w} \,\rangle ,$$
is the scalar given by
$$\mathbf{v}\cdot\mathbf{w} = \langle\, \mathbf{v} \mid \mathbf{w} \,\rangle = \|\mathbf{v}\|\,\|\mathbf{w}\|\cos(\theta) \qquad\text{where}\quad \theta = \theta(\mathbf{v},\mathbf{w}) .$$
The dot notation, $\mathbf{v}\cdot\mathbf{w}$, is more traditional, but the bracket notation $\langle\, \mathbf{v} \mid \mathbf{w} \,\rangle$ corresponds to notation we will later use for generalizations of the dot product.⁶

Of course, we must observe that
$$\mathbf{v}\cdot\mathbf{w} = \mathbf{w}\cdot\mathbf{v} , \qquad \mathbf{v}\cdot\mathbf{v} = \|\mathbf{v}\|^2 \qquad\text{and}\qquad \cos(\theta) = \frac{\mathbf{v}\cdot\mathbf{w}}{\|\mathbf{v}\|\,\|\mathbf{w}\|} .$$
(Note: If you ever see $\mathbf{v}^2$, it usually means $\mathbf{v}\cdot\mathbf{v}$, i.e., $\|\mathbf{v}\|^2$.)
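A quick numerical illustration (a sketch in Python/NumPy, my tooling choice; the vectors are made up). Note that NumPy's dot is the orthonormal-component formula derived later in this section, so treat this as a forward reference:

```python
import numpy as np

v = np.array([1.0, 0.0, 0.0])
w = np.array([1.0, 1.0, 0.0])

dot = np.dot(v, w)                                         # v . w
cos_theta = dot / (np.linalg.norm(v) * np.linalg.norm(w))  # cos of the angle
theta = np.arccos(cos_theta)
print(dot, np.degrees(theta))     # 1.0 and 45.0: the angle between v and w
```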

Exercise 2.11: Let $\alpha$ be any scalar, and v and w any two vectors. Show that $\alpha$ "factors out" of the dot product, i.e., that
$$(\alpha\mathbf{v})\cdot\mathbf{w} = \alpha(\mathbf{v}\cdot\mathbf{w}) .$$
Be sure to consider all three cases: $\alpha > 0$, $\alpha = 0$ and $\alpha < 0$ (a picture may help with the $\alpha < 0$ case). Do NOT use the component formula for the dot product!

Exercise 2.12: Using the result of the last exercise, show that two scalars $\alpha$ and $\beta$ must be the same if
$$\alpha\mathbf{v} = \beta\mathbf{v}$$
for some nonzero vector v.

We can now express our definitions for orthogonality and orthonormality in terms of the dot product:

Orthogonality:

(a) A pair of vectors {v, w} is said to be orthogonal
$\iff$ either $\mathbf{v} = \mathbf{0}$ or $\mathbf{w} = \mathbf{0}$ or the angle between v and w is $\pi/2$
$\iff \mathbf{v}\cdot\mathbf{w} = 0$.

⁶ In the language of tensor analysis, the dot product is also called the metric on the vector space V. This confuses some mathematicians because, for them, "the metric on the vector space V" means something else; namely, the function given by $d(\mathbf{v},\mathbf{u}) = \|\mathbf{v} - \mathbf{u}\|$.


(b) A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_K\}$ is said to be orthogonal
$\iff$ every distinct pair in the set is orthogonal
$$\iff\quad \mathbf{v}_i\cdot\mathbf{v}_j = \begin{cases} 0 & \text{if } i \ne j \\ \|\mathbf{v}_i\|^2 & \text{if } i = j \end{cases} .$$

Orthonormality: A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_K\}$ is said to be orthonormal
$\iff$ it is an orthogonal set of unit vectors
$$\iff\quad \mathbf{v}_i\cdot\mathbf{v}_j = \begin{cases} 0 & \text{if } i \ne j \\ 1 & \text{if } i = j \end{cases} .$$

Using orthogonal or orthonormal bases will greatly simplify life.

At this point, let me introduce a commonly used symbol in mathematical physics: For any two integers i and j, the corresponding Kronecker delta, $\delta_{ij}$, is defined by
$$\delta_{ij} = \begin{cases} 0 & \text{if } i \ne j \\ 1 & \text{if } i = j \end{cases} .$$
Using the Kronecker delta, we can define a set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_K\}$ to be orthonormal if and only if
$$\mathbf{v}_i\cdot\mathbf{v}_j = \delta_{ij} .$$
(Okay, inventing this notation just to simplify the definition of orthonormality was silly. But the Kronecker delta will simplify notation elsewhere, too.)

Exercise 2.13: Let $S = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_K\}$ be any finite set of nonzero vectors. Show that
$$S \text{ is orthogonal} \implies S \text{ is linearly independent} .$$
Note that it then follows that
$$S \text{ is linearly dependent} \implies S \text{ is not orthogonal} .$$
Is it true that a linearly independent set must be orthogonal? If so, explain why; if not, give an example of a linearly independent set that is not orthogonal.

Projections and Linearity of the Dot Product

Decomposing Vectors, Part II (Projections)

Let v and a be two vectors, with a, at least, being nonzero. By elementary geometry, we can write v as the sum of two vectors
$$\mathbf{v} = \mathbf{A} + \mathbf{B}$$
where A is parallel to a, and B is orthogonal to a (see figure 2.2). The vector parallel to a will be denoted by $\operatorname{pr}_{\mathbf{a}}(\mathbf{v})$ (instead of A), and will be called either the projection of v onto a or the (vector) component of v in the direction of a (depending on the whim of the speaker).


[Figure 2.2: The projection $\operatorname{pr}_{\mathbf{a}}(\mathbf{v})$ of v onto a, and the projection $\operatorname{or}_{\mathbf{a}}(\mathbf{v})$ of v orthogonal to a.]

This is the piece of v we will usually be most interested in. The other vector component, B, doesn't have a name or notation generally agreed upon. We will denote it by $\operatorname{or}_{\mathbf{a}}(\mathbf{v})$, and will call it either the projection of v orthogonal to a or the (vector) component of v orthogonal to a (depending on the whim of the speaker). So, using the direction given by a as a base direction, we have now decomposed v into a pair of vectors
$$\mathbf{v} = \operatorname{pr}_{\mathbf{a}}(\mathbf{v}) + \operatorname{or}_{\mathbf{a}}(\mathbf{v})$$
where $\operatorname{pr}_{\mathbf{a}}(\mathbf{v})$ is parallel to a and $\operatorname{or}_{\mathbf{a}}(\mathbf{v})$ is orthogonal to a.

To get a useful formula for $\operatorname{pr}_{\mathbf{a}}(\mathbf{v})$, it helps to introduce the scalar component of v in the direction of a, which, for convenience, we will briefly denote by $\sigma_{\mathbf{a}}(\mathbf{v})$ and define by
$$\sigma_{\mathbf{a}}(\mathbf{v}) = \begin{cases} \left\|\operatorname{pr}_{\mathbf{a}}(\mathbf{v})\right\| & \text{if } \operatorname{pr}_{\mathbf{a}}(\mathbf{v}) \text{ points in the direction of } \mathbf{a} \\ -\left\|\operatorname{pr}_{\mathbf{a}}(\mathbf{v})\right\| & \text{if } \operatorname{pr}_{\mathbf{a}}(\mathbf{v}) \text{ points in the direction opposite of } \mathbf{a} \end{cases} .$$
Thus,
$$\operatorname{pr}_{\mathbf{a}}(\mathbf{v}) = \sigma_{\mathbf{a}}(\mathbf{v}) \times (\text{the unit vector in the direction of } \mathbf{a}) .$$
Using elementary trigonometry, you can easily derive a simple formula for this scalar component in terms of $\theta$, and then rewrite that formula in terms of $\mathbf{v}\cdot\mathbf{a}$ and $\|\mathbf{a}\|$. Combining that with the last equation above then leads to the standard dot product formula for the vector projection. The details are left to you:

Exercise 2.14: Let a and v be as above. Show, using trigonometry and the geometric definition of the dot product from page 2-10, that
$$\sigma_{\mathbf{a}}(\mathbf{v}) = \frac{\mathbf{v}\cdot\mathbf{a}}{\|\mathbf{a}\|} ,$$
and then that
$$\operatorname{pr}_{\mathbf{a}}(\mathbf{v}) = \frac{\mathbf{v}\cdot\mathbf{a}}{\|\mathbf{a}\|^2}\,\mathbf{a} .$$
(Be sure to consider both cases.)
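Here is the projection decomposition as a sketch (Python/NumPy, my choice; the helper name pr and the sample vectors are mine), using the formulas from exercise 2.14:

```python
import numpy as np

def pr(a, v):
    """Projection of v onto a nonzero vector a:  (v.a / ||a||^2) a."""
    return (np.dot(v, a) / np.dot(a, a)) * a

a = np.array([2.0, 0.0, 0.0])
v = np.array([3.0, 4.0, 0.0])
parallel   = pr(a, v)             # pr_a(v), parallel to a
orthogonal = v - parallel         # or_a(v), orthogonal to a
print(parallel, orthogonal, np.dot(orthogonal, a))   # the last value is 0
```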

Linearity and the Component Formulas for the Dot Product

To derive the standard component formula for the dot product, you start by verifying some simple

linearity equations for projections and the dot product in two dimensions:


Exercise 2.15: Let v, w and a be three vectors in a plane, with at least a being nonzero, and do the following (without using the well-known component formula for the dot product):

a: Draw a figure with a and v starting at the same point. Add w starting at the end of v and draw in the sum v + w from the start of v to the end of w. Using this figure, convince yourself that
$$\operatorname{pr}_{\mathbf{a}}(\mathbf{v} + \mathbf{w}) = \operatorname{pr}_{\mathbf{a}}(\mathbf{v}) + \operatorname{pr}_{\mathbf{a}}(\mathbf{w}) .$$

b: Now, using what you just found (along with the results from exercise 2.14), show that
$$(\mathbf{v} + \mathbf{w})\cdot\mathbf{a} = \mathbf{v}\cdot\mathbf{a} + \mathbf{w}\cdot\mathbf{a} .$$

With a little ability to visualize vectors in three dimensions, you should be able to see that the first part holds even when a, v and w are not all in one plane (no matter what dimension our space of traditional vectors is). Consequently, the result of the second part,
$$(\mathbf{v} + \mathbf{w})\cdot\mathbf{a} = \mathbf{v}\cdot\mathbf{a} + \mathbf{w}\cdot\mathbf{a} ,$$
holds for any three vectors a, v and w (in the same vector space, of course). By the commutativity of the dot product, we also then have
$$\mathbf{a}\cdot(\mathbf{v} + \mathbf{w}) = \mathbf{a}\cdot\mathbf{v} + \mathbf{a}\cdot\mathbf{w} ,$$
from which it follows that, with four vectors,
$$(\mathbf{a} + \mathbf{b})\cdot(\mathbf{v} + \mathbf{w}) = \mathbf{a}\cdot(\mathbf{v} + \mathbf{w}) + \mathbf{b}\cdot(\mathbf{v} + \mathbf{w}) = \mathbf{a}\cdot\mathbf{v} + \mathbf{a}\cdot\mathbf{w} + \mathbf{b}\cdot\mathbf{v} + \mathbf{b}\cdot\mathbf{w} .$$
And, of course, similar expansions can be done with even more vectors. This now allows you to derive the various component formulas for the dot product, including the one most of you so love.

Exercise 2.16: Suppose
$$\mathcal{B} = \{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_N\}$$
is a basis for a traditional vector space, and let
$$(v_1, v_2, \ldots, v_N) \qquad\text{and}\qquad (w_1, w_2, \ldots, w_N)$$
be the components with respect to $\mathcal{B}$ of vectors v and w, respectively. (So,
$$\mathbf{v} = v_1\mathbf{b}_1 + v_2\mathbf{b}_2 + \cdots + v_N\mathbf{b}_N = \sum_{j=1}^{N} v_j\,\mathbf{b}_j
\qquad\text{and}\qquad
\mathbf{w} = w_1\mathbf{b}_1 + w_2\mathbf{b}_2 + \cdots + w_N\mathbf{b}_N = \sum_{k=1}^{N} w_k\,\mathbf{b}_k \; .)$$

a: Show that
$$\mathbf{v}\cdot\mathbf{w} = \sum_{j=1}^{N}\sum_{k=1}^{N} g_{jk}\, v_j w_k$$
where each $g_{jk}$ is the constant $\mathbf{b}_j\cdot\mathbf{b}_k$.

b: To what does the formula for $\mathbf{v}\cdot\mathbf{w}$ reduce when the basis $\mathcal{B}$ is orthogonal?

c: To what does the formula for $\mathbf{v}\cdot\mathbf{w}$ reduce when the basis $\mathcal{B}$ is orthonormal?

The answer to the very last part of the last exercise is, of course,
$$\mathbf{v}\cdot\mathbf{w} = \sum_{j=1}^{N} v_j w_j .$$
Remember, this only holds when the basis being used is orthonormal.
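The double-sum formula from exercise 2.16 is easy to exercise numerically. In this sketch (Python/NumPy, my choice; the nonorthonormal basis is made up), the constants $g_{jk} = \mathbf{b}_j\cdot\mathbf{b}_k$ are collected into a matrix G, the double sum becomes the matrix product $\mathbf{v}^{\mathsf T} G\,\mathbf{w}$, and with an orthonormal basis G would be the identity, collapsing the formula to $\sum_j v_j w_j$:

```python
import numpy as np

# A made-up nonorthonormal basis, written in reference (orthonormal) coordinates:
b = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
G = np.array([[np.dot(bi, bj) for bj in b] for bi in b])   # g_jk = b_j . b_k

v_comp = np.array([2.0, 1.0])     # components of v w.r.t. {b1, b2}
w_comp = np.array([1.0, 3.0])     # components of w w.r.t. {b1, b2}

lhs = v_comp @ G @ w_comp                         # the double-sum formula
rhs = np.dot(2*b[0] + 1*b[1], 1*b[0] + 3*b[1])    # direct computation of v . w
print(lhs, rhs)                                   # equal (both 15.0)
```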

Because of the relation between the dot product and norms of vectors, you should be able to quickly do the corresponding exercise for obtaining the component formula(s) for $\|\mathbf{v}\|$:

Exercise 2.17: Suppose
$$\mathcal{B} = \{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_N\}$$
is a basis for a traditional vector space, and let $(v_1, v_2, \ldots, v_N)$ be the components with respect to $\mathcal{B}$ of a vector v in this space.

a: Show that
$$\|\mathbf{v}\| = \sqrt{\sum_{j=1}^{N}\sum_{k=1}^{N} g_{jk}\, v_j v_k}$$
where the $g_{jk}$'s are constants computable from the basis vectors. Be sure to give the formula for computing these constants.

b: To what does the formula for $\|\mathbf{v}\|$ reduce when the basis $\mathcal{B}$ is orthogonal?

c: To what does the formula for $\|\mathbf{v}\|$ reduce when the basis $\mathcal{B}$ is orthonormal?

And, of course, the answer to the last part of the last exercise is
$$\|\mathbf{v}\| = \sqrt{\sum_{j=1}^{N} v_j^2} .$$

At this point, we can also verify a simple dot product test for equality, which, oddly enough, will be useful when discussing the cross product in the next section.

Exercise 2.18:

a: Suppose D is a vector such that
$$\mathbf{D}\cdot\mathbf{c} = 0 \quad\text{for every } \mathbf{c} .$$
Show that D = 0. (Hint: What if we choose c = D?)

b: Suppose A and B are two vectors such that
$$\mathbf{A}\cdot\mathbf{c} = \mathbf{B}\cdot\mathbf{c} \quad\text{for every } \mathbf{c} .$$
Show that A = B. (Hint: What about $\mathbf{D} = \mathbf{A} - \mathbf{B}$?)

Components and the Dot Product

We need to be a little careful about using the word "components". If we have a vector v and a basis
$$\mathcal{B} = \{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_N\} ,$$
then v will have components $(v_1, v_2, \ldots, v_N)$ with respect to the basis. These are the scalars such that
$$\mathbf{v} = v_1\mathbf{b}_1 + v_2\mathbf{b}_2 + \cdots + v_N\mathbf{b}_N .$$
Vector v will also have scalar and vector components with respect to each $\mathbf{b}_j$ given by
$$\sigma_{\mathbf{b}_j}(\mathbf{v}) = \frac{\mathbf{v}\cdot\mathbf{b}_j}{\|\mathbf{b}_j\|} \qquad\text{and}\qquad \operatorname{pr}_{\mathbf{b}_j}(\mathbf{v}) = \frac{\mathbf{v}\cdot\mathbf{b}_j}{\|\mathbf{b}_j\|^2}\,\mathbf{b}_j ,$$
respectively. As you can easily verify, these yield slightly different things when the basis is not orthonormal.

Exercise 2.19: Let $(v_1, v_2, \ldots, v_N)$ be the components of a vector v with respect to some arbitrary basis
$$\mathcal{B} = \{\mathbf{b}_1, \mathbf{b}_2, \ldots, \mathbf{b}_N\} .$$

a i: Find, in terms of the $v_j$'s, the formulas for the vector and scalar components of v with respect to each $\mathbf{b}_k$.

ii: How, then, would you find the $v_j$'s if you knew the vector and scalar components of v with respect to each $\mathbf{b}_k$?

b: Show that, if $\mathcal{B}$ is orthogonal, then
$$v_j = \frac{\mathbf{v}\cdot\mathbf{b}_j}{\|\mathbf{b}_j\|^2} .$$
(A more general version of this, which will look almost the same, will be important in solving many partial differential equation problems later.)

c: Show that, if $\mathcal{B}$ is orthonormal, then
$$v_j = \mathbf{v}\cdot\mathbf{b}_j .$$
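A quick numerical check of exercise 2.19b (a Python/NumPy sketch, my choice; the orthogonal basis is made up): with an orthogonal basis, each component falls out of a single dot product.

```python
import numpy as np

# A made-up orthogonal (but not orthonormal) basis in reference coordinates:
b1 = np.array([2.0, 0.0])
b2 = np.array([0.0, 3.0])

v = 4*b1 + 5*b2                   # so the components should be (4, 5)
for bj in (b1, b2):
    print(np.dot(v, bj) / np.dot(bj, bj))   # v.b_j / ||b_j||^2  ->  4.0 then 5.0
```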


2.4 The Cross Product

The dot product can be defined whatever the dimension of our traditional vector space is, and can (and will) be generalized to nontraditional vector spaces. On the other hand, the cross product is essentially an operation requiring that we have a three-dimensional space of traditional vectors. So, in the following, assume that the vector space is just such a space.

Geometric Definition

Given two vectors v and w, the cross product $\mathbf{v}\times\mathbf{w}$ is defined to be the vector with the following length and direction:

Length: $\|\mathbf{v}\times\mathbf{w}\| = \|\mathbf{v}\|\,\|\mathbf{w}\|\sin(\theta)$ where $\theta = \theta(\mathbf{v},\mathbf{w})$.

Direction: If $\|\mathbf{v}\times\mathbf{w}\| = 0$, then we automatically have $\mathbf{v}\times\mathbf{w} = \mathbf{0}$. Otherwise, the direction of $\mathbf{v}\times\mathbf{w}$ is the one direction orthogonal to both v and w given by the classic "right-hand rule" (which I will not attempt to sketch for you here; look in your old "Intro to Physics" or old Calculus text).

There are a number of properties that are relatively easily verified from this definition:

1. The cross product is anticommutative. That is,
$$\mathbf{w}\times\mathbf{v} = -\,\mathbf{v}\times\mathbf{w} .$$

2. Assuming v and w are nonzero, then
$$\mathbf{v}\times\mathbf{w} = \mathbf{0} \iff \mathbf{v} \text{ and } \mathbf{w} \text{ are parallel} .$$
In particular, we always have
$$\mathbf{v}\times\mathbf{v} = \mathbf{0} .$$

3. If {v, w} is an orthonormal pair of vectors, then $\{\mathbf{v}, \mathbf{w}, \mathbf{v}\times\mathbf{w}\}$ is an orthonormal set.

4. Scalars factor out. That is, for any scalar $\alpha$,
$$(\alpha\mathbf{v})\times\mathbf{w} = \alpha(\mathbf{v}\times\mathbf{w}) = \mathbf{v}\times(\alpha\mathbf{w}) .$$

5. The cross product is not associative. That is, in general,
$$(\mathbf{u}\times\mathbf{v})\times\mathbf{w} \ne \mathbf{u}\times(\mathbf{v}\times\mathbf{w}) .$$

6. The cross product is related to the dot product via
$$\|\mathbf{v}\times\mathbf{w}\|^2 = \|\mathbf{v}\|^2\,\|\mathbf{w}\|^2 - (\mathbf{v}\cdot\mathbf{w})^2 .$$
(This follows from the definitions and an elementary trigonometric identity.)

Exercise 2.20: Explain (to yourself, at least) why each of the previous statements is true.
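The properties above are easy to spot-check numerically. A sketch in Python/NumPy (my choice; the vectors are made up); NumPy's cross implements the standard component formula derived at the end of this section:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

vxw = np.cross(v, w)
print(np.cross(w, v) + vxw)                # anticommutativity: sums to the zero vector
print(np.dot(vxw, v), np.dot(vxw, w))      # orthogonal to both factors: 0.0, 0.0
lhs = np.dot(vxw, vxw)                     # ||v x w||^2
rhs = np.dot(v, v)*np.dot(w, w) - np.dot(v, w)**2
print(lhs, rhs)                            # property 6: equal (both 54.0)
```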


[Figure 2.3: (a) The parallelogram generated by a and b. (b) The parallelepiped generated by a, b and c.]

Area and Volume and the Cross Product

When we talk about the parallelogram or parallelepiped "generated" by two or three vectors a, b and c, we are talking about the two- or three-dimensional objects with flat/straight sides sketched in figure 2.3.

Area of a Parallelogram

Look at the parallelogram generated by vectors a and b in figure 2.3a. Clearly, the length of the base of this parallelogram is $\|\mathbf{a}\|$, and, from basic trigonometry, we know the height h of this parallelogram is
$$h = \|\mathbf{b}\|\sin(\theta) .$$
So, applying a little elementary geometry, we have
$$\text{area of the parallelogram} = (\text{length of base})(\text{height}) = \|\mathbf{a}\|\,\|\mathbf{b}\|\sin(\theta) = \|\mathbf{a}\times\mathbf{b}\| .$$
(Aside from the obvious uses of this fact to find areas of parallelograms floating in space, this fact can be used in finding the formula for the differential element of surface area using any coordinate system.)

By the way, the vector $\mathbf{a}\times\mathbf{b}$ is sometimes called the vector area of the parallelogram. (The concept of "vector area" can be relevant in computations of certain surface integrals.)

Volume of a Parallelepiped

Now look at the parallelepiped generated by vectors a, b and c in figure 2.3b (the "squished box" with every side being a parallelogram). Recall (from high school) that the volume of this box is the area of its base times its height h as measured from bottom to top along a line perpendicular to the bottom plane. From above we have that
$$\text{area of the base} = \text{area of parallelogram generated by } \mathbf{a} \text{ and } \mathbf{b} = \|\mathbf{a}\times\mathbf{b}\| .$$
Since $\mathbf{a}\times\mathbf{b}$ is orthogonal to both a and b, it is perpendicular to the plane containing a and b, and, so, gives the direction for a line perpendicular to the bottom. Consequently (see the figure), the height h is simply the magnitude of the projection of vector c onto $\mathbf{a}\times\mathbf{b}$. Using either the projection formulas from before or simple trigonometry, we have
$$h = \|\mathbf{c}\|\,|\cos(\theta)|$$
where $\theta$ is the angle between $\mathbf{a}\times\mathbf{b}$ and c. With this and the geometric definition of the dot product, we then get
$$\text{volume of parallelepiped} = (\text{area of base})(\text{height}) = \|\mathbf{a}\times\mathbf{b}\|\,\|\mathbf{c}\|\,|\cos(\theta)| = |(\mathbf{a}\times\mathbf{b})\cdot\mathbf{c}| .$$

The expression $(\mathbf{a}\times\mathbf{b})\cdot\mathbf{c}$ is called a triple product because it has three factors (even though there are only two "multiplies"). Other triple products will be briefly discussed later. For now, however, observe that we have a certain symmetry in play here: It makes no difference which two of the three vectors a, b and c are used to generate the "bottom". Consequently, we automatically have the identity
$$\text{volume of parallelepiped} = |(\mathbf{a}\times\mathbf{b})\cdot\mathbf{c}| = |\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})| = |(\mathbf{a}\times\mathbf{c})\cdot\mathbf{b}| .$$
In the next subsection, we will expand upon this observation, and then use it to derive another fundamental identity which, in turn, will allow us to derive the component formula for the cross product.

(By the way, we may later use the formulas just derived for area and volume to develop the formulas for the integral elements of area and volume, dA and dV, using any coordinate system in any Euclidean or non-Euclidean space. With luck, we will even discuss finding "hyper-volumes" of arbitrary N-dimensional "hyper-parallelepipeds", and use those formulas in multidimensional integration in arbitrary coordinate systems. All that, however, goes beyond our traditional vector theory.)
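A numerical sketch of the volume formula and its symmetry (Python/NumPy, my choice; the edge vectors are made up):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 2.0, 0.0])
c = np.array([0.0, 1.0, 3.0])

vol1 = abs(np.dot(np.cross(a, b), c))   # |(a x b) . c|
vol2 = abs(np.dot(a, np.cross(b, c)))   # |a . (b x c)|
vol3 = abs(np.dot(np.cross(a, c), b))   # |(a x c) . b|
print(vol1, vol2, vol3)                 # all 6.0: the same parallelepiped
```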

Linearity of the Cross Product

We start with an exercise:

Exercise 2.21: We just saw that, given three vectors a, b and c, then
$$|(\mathbf{a}\times\mathbf{b})\cdot\mathbf{c}| = |\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})| .$$
Thus, either
$$(\mathbf{a}\times\mathbf{b})\cdot\mathbf{c} = \mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) \qquad\text{or}\qquad (\mathbf{a}\times\mathbf{b})\cdot\mathbf{c} = -\,\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) . \tag{2.1}$$

a: Now assume that a, b and c are oriented so that the angle between each pair is no greater than $\pi/2$ (90 degrees). Convince yourself that, of the two equations in (2.1), it is
$$(\mathbf{a}\times\mathbf{b})\cdot\mathbf{c} = \mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})$$
which holds.

b: Now try different orientations for a, b and c and convince yourself that, no matter how they are oriented with respect to each other,
$$(\mathbf{a}\times\mathbf{b})\cdot\mathbf{c} = \mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) .$$
(See if you can come up with every possible orientation! Then you have a proof of this identity!)

This last exercise demonstrates that, given any three vectors a, b and c (in a three-dimensional traditional vector space), then
$$(\mathbf{a}\times\mathbf{b})\cdot\mathbf{c} = \mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) . \tag{2.2}$$

Exercise 2.22: Let v, w, b and c be any four vectors in a three-dimensional space of traditional vectors. Using identity (2.2) and a result concerning the dot product (look at exercise 2.15), show that
$$\big[(\mathbf{v} + \mathbf{w})\times\mathbf{b}\big]\cdot\mathbf{c} = \big[(\mathbf{v}\times\mathbf{b}) + (\mathbf{w}\times\mathbf{b})\big]\cdot\mathbf{c} .$$
(Do NOT assume $(\mathbf{v} + \mathbf{w})\times\mathbf{b} = (\mathbf{v}\times\mathbf{b}) + (\mathbf{w}\times\mathbf{b})$; that's what we are working to derive!)

Keep in mind that the equation derived in the last exercise holds for every possible c. If you think about it (or recall exercise 2.18 on page 2-15), you will realize that this could only mean that
$$(\mathbf{v} + \mathbf{w})\times\mathbf{b} = (\mathbf{v}\times\mathbf{b}) + (\mathbf{w}\times\mathbf{b})$$
holds (this is why you did exercise 2.18). By the anticommutativity of the cross product, we also have
$$\mathbf{b}\times(\mathbf{v} + \mathbf{w}) = -\,(\mathbf{v} + \mathbf{w})\times\mathbf{b} = -\big[(\mathbf{v}\times\mathbf{b}) + (\mathbf{w}\times\mathbf{b})\big] = (\mathbf{b}\times\mathbf{v}) + (\mathbf{b}\times\mathbf{w}) .$$
And from these identities (the distributive properties of the cross product), along with a few more basic properties, you can derive a most general component formula for the cross product.

Exercise 2.23: Let $\mathcal{B} = \{\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3\}$ be a basis for a three-dimensional space of traditional vectors, and let
$$(v_1, v_2, v_3) \qquad\text{and}\qquad (w_1, w_2, w_3)$$
be the components with respect to $\mathcal{B}$ of two vectors v and w. Show that
$$\mathbf{v}\times\mathbf{w} = A(\mathbf{b}_1\times\mathbf{b}_2) + B(\mathbf{b}_1\times\mathbf{b}_3) + C(\mathbf{b}_2\times\mathbf{b}_3)$$
where A, B and C are formulas of the $v_j$'s and $w_k$'s. Find those formulas.

A clever choice of basis, of course, can lead to a nicer component formula for the cross

product.


Right-Handed Bases and the Classical Formula for the Cross Product

A right-handed basis is an (ordered) orthogonal basis $\mathcal{B} = \{\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3\}$ for a three-dimensional traditional vector space in which $\mathbf{b}_3$ points in the same direction as $\mathbf{b}_1\times\mathbf{b}_2$. In practice, we typically restrict ourselves to orthonormal right-handed bases, in which case the requirement that $\mathbf{b}_3$ points in the same direction as $\mathbf{b}_1\times\mathbf{b}_2$ simplifies to
$$\mathbf{b}_3 = \mathbf{b}_1\times\mathbf{b}_2 .$$
Typically, also, orthonormal right-handed bases are denoted by either
$$\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\} \qquad\text{or}\qquad \{\mathbf{i}, \mathbf{j}, \mathbf{k}\} \qquad\text{or}\qquad \{\mathbf{x}, \mathbf{y}, \mathbf{z}\} ,$$
possibly with arrows or hats over the letters (e.g., $\{\hat{e}_1, \hat{e}_2, \hat{e}_3\}$, $\{\hat{\imath}, \hat{\jmath}, \hat{k}\}$ or $\{\hat{x}, \hat{y}, \hat{z}\}$).

In the future, if a basis is referred to as a standard basis for a three-dimensional vector space of traditional vectors, assume it is a right-handed orthonormal basis.

So assume $\{\mathbf{i}, \mathbf{j}, \mathbf{k}\}$ is a standard basis. By the above definition, $\mathbf{k} = \mathbf{i}\times\mathbf{j}$. With a little thought and a few pictures, you can easily confirm that
$$\mathbf{j}\times\mathbf{k} = \mathbf{i} \qquad\text{and}\qquad \mathbf{k}\times\mathbf{i} = \mathbf{j} .$$
Combining this with the formula you derived for $\mathbf{v}\times\mathbf{w}$ (in exercise 2.23) yields the standard component formula
$$\mathbf{v}\times\mathbf{w} = (v_2 w_3 - v_3 w_2)\,\mathbf{i} - (v_1 w_3 - v_3 w_1)\,\mathbf{j} + (v_1 w_2 - v_2 w_1)\,\mathbf{k}$$
where, of course, $(v_1, v_2, v_3)$ and $(w_1, w_2, w_3)$ are the components of v and w with respect to the given standard basis. Fortunately, for those who dislike memorizing formulas with indices in which order is important, this formula can also be written as the determinant of a more easily remembered matrix,
$$\mathbf{v}\times\mathbf{w} = \det\begin{bmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \end{bmatrix} .$$
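As a check that this component formula is what a library computes, the sketch below (Python/NumPy, my choice; vectors made up) expands the formula by hand and compares with np.cross:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

by_formula = np.array([v[1]*w[2] - v[2]*w[1],      # i-component
                       -(v[0]*w[2] - v[2]*w[0]),   # j-component
                       v[0]*w[1] - v[1]*w[0]])     # k-component
print(by_formula, np.cross(v, w))   # identical: [-3. 6. -3.] both times
```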

2.5 Triple Products

Any product of three vectors is called a triple product. If you think about it, you will realize that the only "legitimate" triple products are of the form
$$(\mathbf{a}\times\mathbf{b})\cdot\mathbf{c} , \qquad (\mathbf{a}\times\mathbf{b})\times\mathbf{c} \qquad\text{or}\qquad \mathbf{a}\times(\mathbf{b}\times\mathbf{c}) .$$
The first one has already been discussed because of its relation to the volume of a parallelepiped and a distributive law for the cross product. Since we already know that the cross product is not associative, we know that the other two triple products above are not equal. However, we do have
$$(\mathbf{a}\times\mathbf{b})\times\mathbf{c} = -\,\mathbf{c}\times(\mathbf{a}\times\mathbf{b}) ,$$
so any identity for one can easily be converted to an identity for the other.

One famous identity is the "bac-cab" rule,
$$\mathbf{a}\times(\mathbf{b}\times\mathbf{c}) = \mathbf{b}(\mathbf{a}\cdot\mathbf{c}) - \mathbf{c}(\mathbf{a}\cdot\mathbf{b}) .$$
This is significant for at least three reasons:

1. It explicitly shows that $\mathbf{a}\times(\mathbf{b}\times\mathbf{c})$ is a linear combination of b and c. (In fact, a good derivation of the rule starts with the geometric derivation of this fact.)

2. It can be used to reduce the rather tedious computations of the triple cross product to a much simpler set of computations with two dot products.

3. It has a neat name.

Arkin and Weber's text has a reasonably good derivation of the bac-cab rule.⁷

⁷ In the sixth edition, this derivation starts at the bottom of page 27 and goes to the middle of page 29.
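The bac-cab rule is also easy to spot-check numerically (a Python/NumPy sketch, my choice; vectors made up):

```python
import numpy as np

a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])
c = np.array([2.0, 0.0, 1.0])

lhs = np.cross(a, np.cross(b, c))          # a x (b x c)
rhs = b*np.dot(a, c) - c*np.dot(a, b)      # b(a.c) - c(a.b)
print(lhs, rhs)                            # equal: [-4. 2. 4.] both times
```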

Exercise 2.25: Three vectors A, B and C are given in problem 1.5.10 on page 31 of our text. Using these vectors:

a: Compute $\mathbf{A}\times(\mathbf{B}\times\mathbf{C})$ just using the standard formula for the cross product to compute both cross products.

b: Compute $\mathbf{A}\times(\mathbf{B}\times\mathbf{C})$ using the bac-cab rule (or, in this case, the "BAC-CAB" rule).

c: Which way of computing $\mathbf{A}\times(\mathbf{B}\times\mathbf{C})$ was easier?

2.6 Reciprocal Bases

Let V be an N-dimensional traditional vector space with basis
$$\mathcal{B} = \{\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3, \ldots, \mathbf{b}_N\} .$$
The corresponding reciprocal basis is the set of vectors
$$\mathcal{B}^* = \{\mathbf{b}^1, \mathbf{b}^2, \mathbf{b}^3, \ldots, \mathbf{b}^N\}$$
such that
$$\mathbf{b}^j\cdot\mathbf{b}_k = \delta_{jk} \qquad\text{for all } j \text{ and } k .$$
(In traditional vector theory, the idea of reciprocal bases is mainly of just academic interest. However, this idea will re-emerge when we develop tensor theory. In fact, it is this idea that actually leads to the distinction between "covariance" and "contravariance" in tensor descriptions.)


In other words, for each index j, $\mathbf{b}^j$ is the vector orthogonal to all the $\mathbf{b}_k$'s except $\mathbf{b}_j$ that also satisfies
$$\mathbf{b}^j\cdot\mathbf{b}_j = 1 .$$
It is not hard to show that $\mathcal{B}^*$ is, itself, a basis for V. It is also easy to verify that the reciprocal basis of $\mathcal{B}^*$ is $\mathcal{B}$, so it makes sense to refer to $\mathcal{B}$ and $\mathcal{B}^*$ as being reciprocal to each other.

As the following exercises show, there isn't much to finding the reciprocal basis $\mathcal{B}^*$ if the original basis $\mathcal{B}$ is orthogonal or even orthonormal.

Exercise 2.26: Show the following:

a: If $\mathcal{B}$ is orthonormal, then
$$\mathbf{b}^k = \mathbf{b}_k \qquad\text{for each } k .$$

b: If $\mathcal{B}$ is orthogonal, then
$$\mathbf{b}^k = \frac{\mathbf{b}_k}{\|\mathbf{b}_k\|^2} \qquad\text{for each } k .$$

Finding the reciprocal basis corresponding to a nonorthogonal basis is a little more challenging.
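For a nonorthogonal basis, here is one concrete way to get $\mathcal{B}^*$ (a Python/NumPy sketch, my choice, and a method I am supplying rather than one stated in these notes): if the $\mathbf{b}_k$'s are the columns of a matrix M in reference orthonormal coordinates, then the condition $\mathbf{b}^j\cdot\mathbf{b}_k = \delta_{jk}$ says exactly that the $\mathbf{b}^j$'s are the rows of $M^{-1}$.

```python
import numpy as np

# A made-up nonorthogonal basis in {i, j} coordinates: b1 = i + j, b2 = i - 2j.
M = np.column_stack([(1.0, 1.0), (1.0, -2.0)])

M_inv = np.linalg.inv(M)
b_star = [M_inv[j, :] for j in range(2)]   # reciprocal vectors = rows of M^-1
print(b_star)
print(M_inv @ M)                           # b^j . b_k = delta_jk (the identity matrix)
```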

Exercise 2.27: Let V be a two-dimensional traditional vector space with standard basis $\{\mathbf{i}, \mathbf{j}\}$, and let
$$\mathcal{B} = \{\mathbf{b}_1, \mathbf{b}_2\}$$
be the basis with
$$\mathbf{b}_1 = 3\mathbf{i} \qquad\text{and}\qquad \mathbf{b}_2 = \mathbf{i} + 2\mathbf{j} .$$
Find the reciprocal basis
$$\mathcal{B}^* = \{\mathbf{b}^1, \mathbf{b}^2\}$$
and sketch both the vectors in $\mathcal{B}$ and $\mathcal{B}^*$.

Exercise 2.28: Let V be a three-dimensional traditional vector space with standard basis $\{\mathbf{i}, \mathbf{j}, \mathbf{k}\}$, and let
$$\mathcal{B} = \{\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_3\}$$
be the basis with
$$\mathbf{b}_1 = \mathbf{i} + \mathbf{j} , \qquad \mathbf{b}_2 = 2\mathbf{j} \qquad\text{and}\qquad \mathbf{b}_3 = \mathbf{j} - 2\mathbf{k} .$$
Find the reciprocal basis
$$\mathcal{B}^* = \{\mathbf{b}^1, \mathbf{b}^2, \mathbf{b}^3\}$$
and sketch both the vectors in $\mathcal{B}$ and $\mathcal{B}^*$.


So what? Well, let v and w be two vectors in V. These vectors will have components
$$(v^1, v^2, \ldots, v^N) \qquad\text{and}\qquad (w^1, w^2, \ldots, w^N)$$
with respect to $\mathcal{B}$, along with components
$$(v_1, v_2, \ldots, v_N) \qquad\text{and}\qquad (w_1, w_2, \ldots, w_N)$$
with respect to $\mathcal{B}^*$. That is,
$$\mathbf{v} = \sum_{i=1}^{N} v^i\,\mathbf{b}_i = \sum_{j=1}^{N} v_j\,\mathbf{b}^j
\qquad\text{and}\qquad
\mathbf{w} = \sum_{m=1}^{N} w^m\,\mathbf{b}_m = \sum_{n=1}^{N} w_n\,\mathbf{b}^n .$$
From this, you can easily show that
$$\mathbf{v}\cdot\mathbf{w} = \sum_{j=1}^{N} v^j w_j
\qquad\text{and}\qquad
\|\mathbf{v}\| = \sqrt{\sum_{j=1}^{N} v^j v_j}$$
even when $\mathcal{B}$ is not orthogonal.

That is the importance of the reciprocal basis. It lets us have a simple component formula for the inner product and the norm, provided we use the two different bases for the components.

Using the above, you can also discover that the reciprocal basis gives an easy way to find components of vectors with respect to an arbitrary basis (provided you've found that reciprocal basis).

Exercise 2.29: Let $\mathcal{B}$ and $\mathcal{B}^*$ be reciprocal bases for an N-dimensional traditional vector space, and let v be a vector in this space. Show that each component of v with respect to one of these bases can be expressed as an inner product of v with the corresponding element of the other basis. More precisely, show that
$$\mathbf{v} = \sum_{k=1}^{N} v^k\,\mathbf{b}_k \qquad\text{where}\quad v^k = \mathbf{b}^k\cdot\mathbf{v}$$
and
$$\mathbf{v} = \sum_{k=1}^{N} v_k\,\mathbf{b}^k \qquad\text{where}\quad v_k = \mathbf{b}_k\cdot\mathbf{v} .$$
(Later, compare this with the approach described in the next chapter.)

Now, use these results in the next exercises.
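Combining the last exercise with the reciprocal-basis computation sketched earlier gives a compact recipe (a Python/NumPy sketch, my choice; the basis is made up): dot with the $\mathbf{b}^k$'s for $\mathcal{B}$-components, dot with the $\mathbf{b}_k$'s for $\mathcal{B}^*$-components, and the mixed components reproduce $\mathbf{v}\cdot\mathbf{w}$.

```python
import numpy as np

# Made-up nonorthogonal basis (columns of M) and its reciprocal (rows of M^-1):
M = np.column_stack([(1.0, 1.0), (1.0, -2.0)])
R = np.linalg.inv(M)

v = np.array([2.0, 5.0])
w = np.array([1.0, -1.0])

v_upper = R @ v        # components w.r.t. B:   v^k = b^k . v
w_lower = M.T @ w      # components w.r.t. B*:  w_k = b_k . w
print(v_upper @ w_lower, np.dot(v, w))   # mixed-component sum equals v . w
```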


Exercise 2.30: Let $\mathcal{B}$ and $\mathcal{B}^*$ be the reciprocal bases from exercise 2.27. Find the components with respect to each of these bases for each of the following vectors:
$$\mathbf{i} , \qquad \mathbf{j} \qquad\text{and}\qquad 3\mathbf{i} - 5\mathbf{j} .$$

Exercise 2.31: Let $\mathcal{B}$ and $\mathcal{B}^*$ be the reciprocal bases from exercise 2.28. Find the components with respect to each of these bases for each of the following vectors:
$$\mathbf{i} , \qquad \mathbf{j} \qquad\text{and}\qquad 3\mathbf{i} - 5\mathbf{j} + 2\mathbf{k} .$$
