
## About the Book

# Advanced Calculus of Several Variables

## Description

- Publisher: Dover Publications
- Published: Oct 10, 2012
- ISBN: 9780486131955
- Format: Book


## Book Preview

### Advanced Calculus of Several Variables - C. H. Edwards


### I

### Euclidean Space and Linear Mappings

The calculus of functions of one variable deals with mappings from the real line ℝ to itself. Multivariable calculus deals in general, and in a somewhat similar way, with mappings from one Euclidean space to another. However a number of new and interesting phenomena appear, resulting from the rich geometric structure of *n*-dimensional Euclidean space ℝⁿ.

In this chapter we study the space ℝⁿ in some detail, as preparation for the development in subsequent chapters of the calculus of functions of an arbitrary number of variables. This generality will provide more clear-cut formulations of theoretical results, and is also of practical importance for applications. For example, an economist may wish to study a problem in which the variables are the prices, production costs, and demands for a large number of different commodities; a physicist may study a problem in which the variables are the coordinates of a large number of different particles. Thus a real-life

problem may lead to a high-dimensional mathematical model. Fortunately, modern techniques of automatic computation render feasible the numerical solution of many high-dimensional problems, whose manual solution would require an inordinate amount of tedious computation.

**1 THE VECTOR SPACE ℝⁿ**

ℝⁿ is simply the collection of all ordered *n*-tuples of real numbers. That is,

ℝⁿ = {(*x*1, *x*2, . . . , *xn*) : each *xi *is a real number}.

Recalling that the Cartesian product *A × B *of the sets *A *and *B *is by definition the set of all pairs (*a*, *b*) with *a *in *A *and *b *in *B*, we see that ℝⁿ may be regarded as the Cartesian product of *n *copies of ℝ (whence the notation ℝⁿ).

The geometric interpretation of ℝ³, obtained by identifying the triple (*x*1, *x*2, *x*3) of numbers with that point in space whose coordinates with respect to three fixed, mutually perpendicular coordinate axes are *x*1, *x*2, *x*3 respectively, is familiar to the reader (although we frequently write (*x*, *y*, *z*) instead of (*x*1, *x*2, *x*3)). One can attempt to visualize ℝⁿ in terms of *n *mutually perpendicular coordinate axes in higher dimensions (however there is a valid question as to what perpendicular means in this general context; we will deal with this in **Section 3**).

The elements of ℝⁿ are frequently referred to as *vectors*. Thus a vector is simply an *n*-tuple of real numbers, and *not *a directed line segment, or equivalence class of them (as sometimes defined in introductory texts).

ℝⁿ is endowed with two algebraic operations, called *vector addition *and *scalar multiplication *(numbers are sometimes called scalars for emphasis). Given two vectors **x **= (*x*1, . . . , *xn*) and **y **= (*y*1, . . . , *yn*) in ℝⁿ, their *sum ***x + y **is defined by

**x **+ **y **= (*x*1 + *y*1, . . . , *xn *+ *yn*);

given a real number *a*, the *scalar multiple a***x **is defined by

*a***x **= (*ax*1, . . . , *axn*).

For example, if **x **= (1, 0, −2, 3) and **y **= (−2, 1, 4, −5) then **x + y **= (−1, 1, 2, −2) and 2**x **= (2, 0, −4, 6). Finally we write **0 **= (0, . . . , 0) and −**x **= (−1)**x, **and use **x − y **as an abbreviation for **x **+ (−**y**).
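For readers who like to experiment numerically, the following minimal Python sketch reproduces the computation above; vectors are modeled as plain tuples, and the helper names `vec_add` and `scalar_mul` are chosen ad hoc for illustration.

```python
# Componentwise vector addition and scalar multiplication in R^n,
# with vectors represented as tuples of numbers.

def vec_add(x, y):
    """Return the sum x + y, defined componentwise."""
    return tuple(xi + yi for xi, yi in zip(x, y))

def scalar_mul(a, x):
    """Return the scalar multiple a*x, defined componentwise."""
    return tuple(a * xi for xi in x)

x = (1, 0, -2, 3)
y = (-2, 1, 4, -5)

print(vec_add(x, y))      # (-1, 1, 2, -2)
print(scalar_mul(2, x))   # (2, 0, -4, 6)
```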

The familiar associative, commutative, and distributive laws for the real numbers imply the following basic properties of vector addition and scalar multiplication:

(Here **x, y, z **are arbitrary vectors in ℝⁿ, and *a *and *b *are arbitrary real numbers.) For example, to prove V6, let **x **= (*x*1, . . . , *xn*). Then

The remaining verifications are left as exercises for the student.

A *vector space *is a set *V *together with two mappings *V × V → V *(vector addition) and ℝ × *V → V *(scalar multiplication) satisfying properties V1–V8; in particular *V *must contain a zero vector **0 **such that **x + 0 = x **for every **x**, and for each **x **an element −**x **such that **x **+ (−**x**) = **0**. Thus ℝⁿ is a vector space. For the most part, all vector spaces that we consider will be either Euclidean spaces, or subspaces of Euclidean spaces.

By a *subspace *of the vector space *V *is meant a subset *W *of *V *that is itself a vector space (with the same operations). It is clear that the subset *W *of *V *is a subspace if and only if it is closed

under the operations of vector addition and scalar multiplication (that is, the sum of any two vectors in *W *is again in *W*, as is any scalar multiple of an element of *W*)—properties V1–V8 are then inherited by *W *from *V*. Equivalently, *W *is a subspace of *V *if and only if any linear combination of two vectors in *W *is also in *W *(why?). Recall that a *linear combination *of the vectors **v**1, . . . , **v***k *is a vector of the form *a*1 **v**1 + · · · + *ak ***v***k*. The *span *of the vectors **v**1, . . . , **v***k *is the set *S *of all linear combinations of them, and it is said that *S *is *generated *by the vectors **v**1, . . . , **v***k*.

**Example 1** ℝⁿ is a subspace of itself, and is generated by the *standard basis vectors*

**e**1 = (1, 0, 0, . . . , 0), **e**2 = (0, 1, 0, . . . , 0), . . . , **e***n *= (0, 0, . . . , 0, 1),

since (*x*1, *x*2, . . . , *xn*) = *x*1 **e**1 + *x*2 **e**2 + · · · + *xn ***e***n*. The subset of ℝⁿ consisting of the zero vector alone is also a subspace, called the *trivial subspace *of ℝⁿ.
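As a quick numerical illustration, a minimal sketch assuming NumPy exhibits the standard basis vectors as the rows of the identity matrix and checks the decomposition above for a sample vector.

```python
import numpy as np

n = 4
e = np.eye(n)                      # rows e[0], ..., e[n-1] are the standard basis vectors
x = np.array([3.0, -1.0, 0.5, 2.0])

# Reconstruct x as the linear combination x_1*e_1 + ... + x_n*e_n.
reconstructed = sum(x[i] * e[i] for i in range(n))
print(np.allclose(reconstructed, x))   # True
```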

**Example 2** The set of all those vectors in ℝⁿ whose last coordinate is zero is a subspace of ℝⁿ, which may be identified in an obvious way with ℝⁿ⁻¹.

**Example 3** Given real numbers *a*1, . . . , *an*, the set of all vectors **x **= (*x*1, . . . , *xn*) in ℝⁿ such that

*a*1*x*1 + · · · + *an xn *= 0

is a subspace of ℝⁿ (see **Exercise 1.1**).

**Example 4** The span *S *of the vectors **v**1, . . . , **v***k *in ℝⁿ is a subspace of ℝⁿ, since any combination *r***x **+ *s***y **of two elements **x **and **y **of *S*, with real numbers *r *and *s*, is again a linear combination of **v**1, . . . , **v***k*. Familiar examples are the lines through the origin in ℝ³, generated by a single nonzero vector, and the planes through the origin in ℝ³ that are generated by a pair of non-collinear vectors. We will see in the next section that every subspace *V *of ℝⁿ is generated by some finite number, at most *n*, of vectors; the dimension of the subspace *V *will be defined to be the minimal number of vectors required to generate *V*. Thus ℝⁿ has subspaces of all dimensions between 0 and *n*, just as we have seen for ℝ³.

**Example 5** If *V *and *W *are subspaces of ℝⁿ, then so is their intersection *V *∩ *W *(the set of all vectors that lie in both *V *and *W*). See **Exercise 1.2**.

Although most of our attention will be confined to subspaces of Euclidean spaces, it is instructive to consider some vector spaces that are not subspaces of Euclidean spaces.

**Example 6** Let 𝓕 denote the set of all real-valued functions on ℝ. If *f + g *and *af *are defined by (*f + g*)(*x*) = *f*(*x*) + *g*(*x*) and (*af*)(*x*) = *a f*(*x*), then 𝓕 is a vector space. If 𝒫*n *is the set of all polynomials of degree at most *n*, then 𝒫*n *is a subspace of 𝓕 which is generated by the polynomials 1, *x*, *x*², . . . , *xn*.

*Exercises *

**1.1**Verify **Example 3. **

**1.2** Show that the intersection *V *∩ *W *of two subspaces *V *and *W *of ℝⁿ is also a subspace.

**1.3** Given subspaces *V *and *W *of ℝⁿ, denote by *V + W *the set of all vectors of the form **v **+ **w **with **v **in *V *and **w **in *W*. Show that *V + W *is a subspace of ℝⁿ.

**1.4** If *V *is the set of all points (*x*, *y*, *z*) in ℝ³ such that *x *+ 2*y *= 0 and *x + y *= 3*z*, show that *V *is a subspace of ℝ³.

**1.5** Let 𝒟0 denote the set of all differentiable real-valued functions on [0, 1] such that *f*(0) = *f*(1) = 0. Show that 𝒟0 is a vector space, with addition and multiplication defined as in **Example 6**. Would this be true if the condition *f*(0) = *f*(1) = 0 were replaced by *f*(0) = 0, *f*(1) = 1?

**1.6** Given a set *S*, denote by 𝓕(*S*) the set of all real-valued functions on *S*, that is, all maps *S *→ ℝ. Show that 𝓕(*S*) is a vector space with the operations defined in **Example 6**. Note that 𝓕({1, . . . , *n*}) may be identified with ℝⁿ, since a function φ on {1, . . . , *n*} may be regarded as the *n*-tuple (φ(1), φ(2), . . . , φ(*n*)).

**2 SUBSPACES OF ℝⁿ**

In this section we will see that ℝⁿ has precisely *n *− 1 types of *proper *subspaces (that is, subspaces other than **0 **and ℝⁿ itself)—namely, one of each dimension 1 through *n *− 1.

In order to define dimension, we need the concept of linear independence. The vectors **v**1, **v**2, . . . , **v***k *are said to be *linearly independent *provided that no one of them is a linear combination of the others; otherwise they are *linearly dependent*. The following proposition asserts that the vectors **v**1, . . . , **v***k *are linearly independent if and only if *x*1 **v**1 + *x*2 **v**2 + · · · + *xk ***v***k *= **0 **implies that *x*1 = *x*2 = · · · = *xk *= 0. For example, the fact that *x*1 **e**1 + *x*2 **e**2 + · · · + *xn ***e***n *= (*x*1, *x*2, . . . , *xn*) then implies immediately that the standard basis vectors **e**1, **e**2, . . . , **e***n *of ℝⁿ are linearly independent.

**Proposition 2.1 **The vectors **v**1, **v**2, . . . , **v***k *are linearly dependent if and only if there exist numbers *x*1, *x*2, . . . , *xk*, not all zero, such that *x*1 **v**1 + *x*2 **v**2 + · · · + *xk ***v***k *= **0. **

PROOF If there exist such numbers, suppose, for example, that *x*1 ≠ 0. Then

**v**1 = −(*x*2/*x*1)**v**2 − · · · − (*xk*/*x*1)**v***k*,

so **v**1, **v**2, . . . , **v***k *are linearly dependent. If, conversely, **v**1 = *a*2 **v**2 + · · · + *ak ***v***k*, then we have *x*1 **v**1 + *x*2 **v**2 + · · · + *xk ***v***k *= **0 **with *x*1 = −1 ≠ 0 and *xi *= *ai *for *i *> 1.

**Example 1** To show that the vectors **x **= (1, 1, 0), **y **= (1, 1, 1), **z **= (0, 1, 1) are linearly independent, suppose that *a***x **+ *b***y **+ *c***z **= **0**. By taking components of this vector equation we obtain the three scalar equations

*a *+ *b *= 0,  *a *+ *b *+ *c *= 0,  *b *+ *c *= 0.

Subtracting the first from the second, we obtain *c *= 0. The last equation then gives *b *= 0, and finally the first one gives *a *= 0.
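Numerically, linear independence can be tested by checking that the matrix whose rows are the given vectors has full rank; a minimal sketch, assuming NumPy, confirms the conclusions of this example and of Example 2 below.

```python
import numpy as np

# Rows are the vectors x, y, z of Example 1.
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)

# Linear independence <=> a*x + b*y + c*z = 0 has only the trivial solution
# <=> the matrix of rows has full rank.
print(np.linalg.matrix_rank(A))   # 3, so the vectors are linearly independent

# The vectors of Example 2 below give rank 2 instead, so they are dependent.
B = np.array([[1, 1, 0],
              [1, 2, 1],
              [0, 1, 1]], dtype=float)
print(np.linalg.matrix_rank(B))   # 2
```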

**Example 2** The vectors **x **= (1, 1, 0), **y **= (1, 2, 1), **z **= (0, 1, 1) are linearly dependent, because **x **− **y **+ **z **= **0**.

It is easily verified (**Exercise 2.7**) that any two collinear vectors, and any three coplanar vectors, are linearly dependent. This motivates the following definition of the dimension of a vector space. The vector space *V *has dimension *n*, dim *V = n*, provided that *V *contains a set of *n *linearly independent vectors, while any *n *+ 1 vectors in *V *are linearly dependent; if there is no integer *n *for which this is true, then *V *is said to be infinite-dimensional. Thus the dimension of a finite-dimensional vector space is the largest number of linearly independent vectors which it contains; an infinite-dimensional vector space is one that contains *n *linearly independent vectors for every positive integer *n*.

**Example 3** Consider the vector space 𝓕 of all real-valued functions on ℝ (**Example 6 **of Section 1). The functions 1, *x*, *x*², . . . , *xn *are linearly independent, because a polynomial *a*0 + *a*1*x *+ · · · + *an xn *vanishes for all *x *only if *a*0 = *a*1 = · · · = *an *= 0. Since this is true for every positive integer *n*, the space 𝓕 is infinite-dimensional.

One certainly expects the above definition of dimension to imply that Euclidean *n*-space ℝⁿ does indeed have dimension *n*. We see immediately that its dimension is at least *n*, since it contains the *n *linearly independent vectors **e**1, . . . , **e***n*. To show that the dimension of ℝⁿ is precisely *n*, we must prove that any *k *> *n *vectors in ℝⁿ are linearly dependent.

Suppose that **v**1, . . . , **v***k *are *k *> *n *vectors in ℝⁿ, and write

**v***j *= (*a*1*j*, *a*2*j*, . . . , *anj*),  *j *= 1, . . . , *k*.

We want to find real numbers *x*1, . . . , *xk*, not all zero, such that

*x*1**v**1 + *x*2**v**2 + · · · + *xk***v***k *= **0**.

Thus we need to find a nontrivial solution of the homogeneous linear equations

*a*11*x*1 + *a*12*x*2 + · · · + *a*1*k xk *= 0,
. . . . . . . . .
*an*1*x*1 + *an*2*x*2 + · · · + *ank xk *= 0.     (1)

By a *nontrivial *solution (*x*1, *x*2, . . . , *xk*) of the system (**1**) is meant one for which not all of the *xi *are zero. But *k *> *n*, and (**1**) is a system of *n *homogeneous linear equations in the *k *unknowns *x*1, . . . , *xk*. (Homogeneous meaning that the right-hand side constants are all zero.)

It is a basic fact of linear algebra that any system of homogeneous linear equations, with more unknowns than equations, has a nontrivial solution. The proof of this fact is an application of the elementary algebraic technique of elimination of variables. Before stating and proving the general theorem, we consider a special case.

**Example 4** Consider the following three equations in four unknowns:

We can eliminate *x*1 from the last two equations of (**2**) by subtracting the first equation from the second one, and twice the first equation from the third one. This gives two equations in three unknowns:

Subtraction of the first equation of (**3**) from the second one gives the single equation

in two unknowns. We can now choose *x*4 arbitrarily. For instance, if *x*4 = 1, then *x*3 = −2. Substituting these values back into (**3**) and then into (**2**), we obtain values of *x*2 and *x*1 in turn, and hence a nontrivial solution of the system (**2**).

The procedure illustrated in this example can be applied to the general case of *n *equations in the unknowns *x*1, . . . , *xk*, *k > n*. First we order the *n *equations so that the first equation contains *x*1, and then eliminate *x*1 from the remaining equations by subtracting the appropriate multiple of the first equation from each of them. This gives a system of *n *− 1 homogeneous linear equations in the *k *− 1 variables *x*2, . . . , *xk*. Similarly we eliminate *x*2 from the last *n *− 2 of these *n *− 1 equations by subtracting multiples of the first one, obtaining *n *− 2 equations in the *k *− 2 variables *x*3, *x*4, . . . , *xk*. After *n *− 2 steps of this sort, we end up with a single homogeneous linear equation in the *k − n *+ 1 unknowns *xn*, *xn*+1, . . . , *xk*. We can then choose arbitrary nontrivial values for the extra

variables *xn*+1, *xn*+2, . . . , *xk *(such as *xn*+1 = 1, *xn*+2 = · · · = *xk *= 0), solve the final equation for *xn*, and finally proceed backward to solve successively for each of the eliminated variables *xn*−1, *xn*−2, . . . , *x*1. The reader may (if he likes) formalize this procedure to give a proof, by induction on the number *n *of equations, of the following result.

**Theorem 2.2**If *k > n*, then any system of *n *homogeneous linear equations in *k *unknowns has a nontrivial solution.

By the discussion preceding **Example 4**, Theorem 2.2 now implies that dim ℝⁿ = *n*.

**Corollary 2.3 **Any *n *+ 1 vectors in ℝⁿ are linearly dependent.
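The elimination and back-substitution procedure described before Theorem 2.2 is also what a computer algebra system carries out when asked for the null space of the coefficient matrix. A minimal sketch, assuming SymPy and an arbitrarily chosen 3 × 4 system, illustrates the theorem.

```python
import sympy as sp

# Three homogeneous equations in four unknowns (k = 4 > n = 3), written as A x = 0.
# The coefficients below are illustrative only; any 3 x 4 system would do.
A = sp.Matrix([[1, 2, 1, 1],
               [1, 3, 2, 0],
               [2, 5, 3, 2]])

# nullspace() performs elimination and back-substitution, returning a basis
# for the space of solutions; Theorem 2.2 guarantees it is nonempty.
for v in A.nullspace():
    print(v.T)   # each v is a nontrivial solution of the system
```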

We have seen that the linearly independent vectors **e**1, **e**2, . . . , **e***n *generate ℝⁿ. A set of linearly independent vectors that generates the vector space *V *is called a *basis *for *V*. Since **x **= (*x*1, *x*2, . . . , *xn*) = *x*1**e**1 + *x*2 **e**2 + · · · + *xn ***e***n*, it is clear that the basis vectors **e**1, . . . , **e***n *generate ℝⁿ *uniquely*; that is, if **x **= *y*1**e**1 + *y*2 **e**2 + · · · + *yn ***e***n *also, then *xi *= *yi *for each *i*, so every vector in ℝⁿ can be expressed in one and only one way as a linear combination of **e**1, . . . , **e***n*. Any set of *n *linearly independent vectors in an *n*-dimensional vector space has this property.

**Theorem 2.4**If the vectors **v**1, . . . , **v***n *in the *n*-dimensional vector space *V *are linearly independent, then they constitute a basis for *V*, and furthermore generate *V *uniquely.

PROOF Given a vector **v **in *V*, the *n *+ 1 vectors **v, v**1, . . . , **v***n *are linearly dependent, so by **Proposition 2.1 **there exist numbers *x*, *x*1, . . . , *xn*, not all zero, such that

*x***v **+ *x*1**v**1 + · · · + *xn***v***n *= **0**.

If *x *= 0, then the fact that **v**1, . . . , **v***n *are linearly independent implies that *x*1 = · · · = *xn *= 0 as well, contradicting the fact that not all of these numbers are zero. Therefore *x *≠ 0, so we can solve for **v**:

**v **= −(*x*1/*x*)**v**1 − · · · − (*xn*/*x*)**v***n*.

Thus the vectors **v**1, . . . , **v***n *generate *V*, and therefore constitute a basis for *V*. To show that they generate *V *uniquely, suppose that

*a*1**v**1 + · · · + *an***v***n *= *a*1′**v**1 + · · · + *an*′**v***n*.

Then

(*a*1 − *a*1′)**v**1 + · · · + (*an *− *an*′)**v***n *= **0**,

so, since **v**1, . . . , **v***n *are linearly independent, it follows that *ai *− *ai*′ = 0, or *ai *= *ai*′, for each *i*.

One might imagine that ℝⁿ has a basis which contains fewer than *n *elements. But the following theorem shows that this cannot happen.

**Theorem 2.5**If dim *V = n*, then each basis for *V *consists of exactly *n *vectors.

PROOF Let **w**1, **w**2, . . . , **w***n *be *n *linearly independent vectors in *V*. If there were a basis **v**1, **v**2, . . . , **v***m *for *V *with *m < n*, then there would exist numbers {*aij*} such that

**w***j *= *a*1*j***v**1 + *a*2*j***v**2 + · · · + *amj***v***m*,  *j *= 1, . . . , *n*.

Since *m < n*, **Theorem 2.2 **supplies numbers *x*1, . . . , *xn*, not all zero, such that

*ai*1*x*1 + *ai*2*x*2 + · · · + *ain xn *= 0,  *i *= 1, . . . , *m*.

But this implies that

*x*1**w**1 + *x*2**w**2 + · · · + *xn***w***n *= **0**,

which contradicts the fact that **w**1, . . . , **w***n *are linearly independent. Consequently no basis for *V *can have *m < n *elements.

We can now describe the subspaces of ℝⁿ. If *V *is a subspace of ℝⁿ with dim *V *= *k*, then *k *≤ *n *by **Corollary 2.3**, and if *k = n*, then *V *= ℝⁿ by **Theorem 2.4**. If *k *> 0, then any *k *linearly independent vectors in *V *generate *V *(**Theorem 2.4**), and no basis for *V *contains fewer than *k *vectors (**Theorem 2.5**).

*Exercises *

**2.1**Why is it true that the vectors **v**1, . . . , **v***k *are linearly dependent if any one of them is zero? If any subset of them is linearly dependent?

**2.2** Which of the following sets of vectors are linearly independent in the appropriate space ℝⁿ?

(a)(1, 0) and (1, 1).

(b)(1, 0, 0), (1, 1, 0), and (0, 0, 1).

(c)(1, 1, 1), (1, 1, 0), and (1, 0, 0).

(d)(1, 1, 1, 0), (1, 0, 0, 0), (0, 1, 0, 0), and (0, 0, 1, 0).

(e)(1, 1, 1, 1), (1, 1, 1, 0), (1, 1, 0, 0), and (1, 0, 0, 0).

**2.3** Find the dimension of the subspace *V *of ℝ⁴ that is generated by the vectors (0, 1, 0, 1), (1, 0, 1, 0), and (1, 1, 1, 1).

**2.4** Show that the vectors (1, 0, 0, 1), (0, 1, 0, 1), (0, 0, 1, 1) form a basis for the subspace *V *of ℝ⁴ which is defined by the equation *x*1 + *x*2 + *x*3 − *x*4 = 0.

**2.5** Show that any set **v**1, . . . , **v***k *of linearly independent vectors in a vector space *V *can be extended to a basis for *V*. That is, if *k < n *= dim *V*, then there exist vectors **v***k*+1, . . . , **v***n *in *V *such that **v**1, . . . , **v***n *is a basis for *V*.

**2.6** Show that **Theorem 2.5 **is equivalent to the following theorem: Suppose that the equations

*a*11*x*1 + *a*12*x*2 + · · · + *a*1*n xn *= 0,
. . . . . . . . .
*an*1*x*1 + *an*2*x*2 + · · · + *ann xn *= 0

have only the trivial solution *x*1 = · · · = *xn *= 0. Then, for each **b **= (*b*1, . . . , *bn*), the equations

*a*11*x*1 + *a*12*x*2 + · · · + *a*1*n xn *= *b*1,
. . . . . . . . .
*an*1*x*1 + *an*2*x*2 + · · · + *ann xn *= *bn*

have a *unique *solution. *Hint*: Consider the vectors **a***j *= (*a*1*j*, *a*2*j*, . . . , *anj*), *j *= 1, . . . , *n*.

**2.7**Verify that any two collinear vectors, and any three coplanar vectors, are linearly dependent.

**3 INNER PRODUCTS AND ORTHOGONALITY**

To define distance and angle in ℝⁿ, we endow ℝⁿ with an inner product. An *inner *(scalar) *product *on the vector space *V *is a function *V × V *→ ℝ, which associates with each pair (**x, y**) of vectors in *V *a real number ⟨**x, y**⟩, and satisfies the following three conditions:

SP1: ⟨**x, x**⟩ > 0 if **x **≠ **0 **(positivity);
SP2: ⟨**x, y**⟩ = ⟨**y, x**⟩ (symmetry);
SP3: ⟨*a***x **+ *b***y, z**⟩ = *a*⟨**x, z**⟩ + *b*⟨**y, z**⟩ (linearity).

The third of these conditions is linearity in the first variable; symmetry then gives linearity in the second variable also. Thus an inner product on *V *is simply a positive, symmetric, bilinear function on *V × V*. Note that ⟨**0, 0**⟩ = 0 (see **Exercise 3.1**).

The *usual inner product *on ℝⁿ is denoted by **x · y **and is defined by

**x **· **y **= *x*1*y*1 + *x*2*y*2 + · · · + *xn yn*,

where **x **= (*x*1, . . . , *xn*), **y **= (*y*1, . . . , *yn*). It should be clear that this definition satisfies conditions SP1–SP3. (There are other inner products on ℝⁿ; see **Example 2 **below. But we shall use only the usual one.)
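With the usual inner product, computing **x · y **is a one-line matter in any numerical language; a minimal sketch, assuming NumPy, also spot-checks the symmetry and linearity conditions for particular vectors.

```python
import numpy as np

x = np.array([1.0, 0.0, -2.0, 3.0])
y = np.array([-2.0, 1.0, 4.0, -5.0])

# x . y = x1*y1 + ... + xn*yn
print(np.dot(x, y))   # -25.0

# Spot-check symmetry (SP2) and linearity in the first slot (SP3):
a, b = 2.0, -3.0
z = np.array([0.5, 1.0, 1.0, 0.0])
print(np.isclose(np.dot(x, y), np.dot(y, x)))                             # True
print(np.isclose(np.dot(a*x + b*y, z), a*np.dot(x, z) + b*np.dot(y, z)))  # True
```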

**Example 1** Denote by 𝒞[*a*, *b*] the vector space of all continuous functions on the interval [*a*, *b*], and define

⟨*f*, *g*⟩ = ∫ₐᵇ *f*(*x*)*g*(*x*) *dx*.

It is obvious that this definition satisfies conditions **SP2 **and **SP3**. It also satisfies **SP1**, because if *f*(*t*0) ≠ 0, then by continuity (*f*(*t*))² > 0 for all *t *in some neighborhood of *t*0, so ⟨*f*, *f*⟩ = ∫ₐᵇ (*f*(*t*))² *dt *> 0.
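The inner product of Example 1 can be evaluated by numerical integration; a minimal sketch, assuming SciPy's quad routine and taking [*a*, *b*] = [0, 1] purely for illustration:

```python
from scipy.integrate import quad

def inner(f, g, a=0.0, b=1.0):
    """<f, g> = integral of f(t) * g(t) over [a, b], as in Example 1."""
    value, _ = quad(lambda t: f(t) * g(t), a, b)
    return value

f = lambda t: t          # f(t) = t
g = lambda t: 1.0 - t    # g(t) = 1 - t

print(inner(f, g))       # 1/6, approximately 0.1667
print(inner(f, f) > 0)   # True, illustrating SP1 for a nonzero f
```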

**Example 2** Let *a*, *b*, *c *be real numbers with *a *> 0, *ac − b*² > 0, so that the quadratic form *q*(**x**) = *ax*1² + 2*bx*1*x*2 + *cx*2² is positive-definite. Then an inner product on ℝ² is defined by ⟨**x, y**⟩ = *ax*1*y*1 + *bx*1*y*2 + *bx*2*y*1 + *cx*2*y*2 (why?). With *a = c *= 1, *b *= 0, this is the usual inner product on ℝ².

An inner product on the vector space *V *provides a definition of the length of each vector **x **in *V*, called its *norm *‖**x**‖. In general, a *norm *on the vector space *V *is a real-valued function **x **→ ‖**x**‖ on *V *satisfying the following conditions:

N1: ‖**x**‖ > 0 if **x **≠ **0**, while ‖**0**‖ = 0;
N2: ‖*a***x**‖ = |*a*| ‖**x**‖;
N3: ‖**x **+ **y**‖ ≤ ‖**x**‖ + ‖**y**‖ (the triangle inequality).

The norm associated with the inner product ⟨ , ⟩ on *V *is defined by

‖**x**‖ = √⟨**x, x**⟩.     (2)

It is clear that **SP1**–**SP3 **and this definition imply conditions N1 and N2, but the triangle inequality is not so obvious; it will be verified below.

The norm obtained in this way from the usual inner product on ℝⁿ is the *Euclidean norm*

‖**x**‖ = √(*x*1² + *x*2² + · · · + *xn*²).

There are other norms on ℝⁿ (see **Examples 3 **and **4 **below), but ‖**x**‖ will denote the Euclidean norm unless otherwise specified.

**Example 3** ‖**x**‖ = max{|*x*1|, . . . , |*xn*|}, the maximum of the absolute values of the coordinates of **x**, defines a norm on ℝⁿ (see **Exercise 3.2**).

**Example 4** ‖**x**‖ = |*x*1| + |*x*2| + · · · + |*xn*| also defines a norm on ℝⁿ (again see **Exercise 3.2**).

A norm on *V *provides a definition of the *distance d*(**x, y**) between any two points **x **and **y **of *V*:

*d*(**x, y**) = ‖**x **− **y**‖.

Note that a distance function *d *defined in this way satisfies the following three conditions:

D1: *d*(**x, y**) > 0 if **x **≠ **y**, and *d*(**x, x**) = 0;
D2: *d*(**x, y**) = *d*(**y, x**);
D3: *d*(**x, z**) ≤ *d*(**x, y**) + *d*(**y, z**)

for any three points **x, y, z**. Conditions D1 and D2 follow immediately from N1 and N2, respectively, while

*d*(**x, z**) = ‖**x **− **z**‖ = ‖(**x **− **y**) + (**y **− **z**)‖ ≤ ‖**x **− **y**‖ + ‖**y **− **z**‖ = *d*(**x, y**) + *d*(**y, z**)

by N3. **Figure 1.1 **indicates why N3 (or D3) is referred to as the triangle inequality.

*Figure 1.1 *

The distance function that comes in this way from the Euclidean norm is the familiar Euclidean distance function

*d*(**x, y**) = √((*x*1 − *y*1)² + · · · + (*xn *− *yn*)²).

Thus far we have seen that an inner product on the vector space *V *yields a norm on *V*, which in turn yields a distance function on *V*, except that we have not yet verified that the norm associated with a given inner product does indeed satisfy the triangle inequality. The triangle inequality will follow from the *Cauchy–Schwarz inequality *of the following theorem.

**Theorem 3.1** If ⟨ , ⟩ is an inner product on a vector space *V*, then

|⟨**x, y**⟩| ≤ ‖**x**‖ ‖**y**‖

for all **x, y **in *V *[where the norm is the one defined by (**2**)].

PROOF The inequality is trivial if either **x **or **y **is zero, so assume neither is. If **u **= **x**/‖**x**‖ and **v **= **y**/‖**y**‖, then ‖**u**‖ = ‖**v**‖ = 1. Hence

0 ≤ ⟨**u **− **v**, **u **− **v**⟩ = ⟨**u, u**⟩ − 2⟨**u, v**⟩ + ⟨**v, v**⟩ = 2 − 2⟨**u, v**⟩,

or

⟨**x, y**⟩ ≤ ‖**x**‖ ‖**y**‖.

Replacing **x **by −**x**, we obtain

−⟨**x, y**⟩ ≤ ‖**x**‖ ‖**y**‖

also, so the inequality follows.

For the usual inner product on ℝⁿ, it takes the form

|*x*1*y*1 + · · · + *xn yn*| ≤ √(*x*1² + · · · + *xn*²) √(*y*1² + · · · + *yn*²);

for continuous functions, with the inner product of **Example 1**, it becomes

(∫ₐᵇ *f*(*x*)*g*(*x*) *dx*)² ≤ (∫ₐᵇ *f*(*x*)² *dx*)(∫ₐᵇ *g*(*x*)² *dx*).

To verify the triangle inequality N3 for the norm associated with an inner product, note that

‖**x **+ **y**‖² = ⟨**x **+ **y**, **x **+ **y**⟩ = ‖**x**‖² + 2⟨**x, y**⟩ + ‖**y**‖² ≤ ‖**x**‖² + 2‖**x**‖ ‖**y**‖ + ‖**y**‖² = (‖**x**‖ + ‖**y**‖)².

If ⟨**x, y**⟩ = 0, in which case **x **and **y **are perpendicular (see the definition below), then the second equality in the above proof gives

‖**x **+ **y**‖² = ‖**x**‖² + ‖**y**‖².

This is the famous theorem associated with the name of Pythagoras (**Fig. 1.2**).

*Figure 1.2 *
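Both the Cauchy–Schwarz inequality and the Pythagorean relation are easy to spot-check numerically; a minimal sketch, assuming NumPy and arbitrarily chosen vectors:

```python
import numpy as np

norm = np.linalg.norm            # the Euclidean norm

# Cauchy-Schwarz: |x . y| <= ||x|| ||y||
x = np.array([1.0, 2.0, -1.0])
y = np.array([3.0, 0.0, 4.0])
print(abs(np.dot(x, y)) <= norm(x) * norm(y))                 # True

# Pythagoras: for orthogonal vectors, ||u + v||^2 = ||u||^2 + ||v||^2
u = np.array([1.0, 1.0, 0.0])
v = np.array([1.0, -1.0, 3.0])                                # u . v = 0
print(np.isclose(norm(u + v)**2, norm(u)**2 + norm(v)**2))    # True
```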

Recalling the formula **x · y **= ‖**x**‖ ‖**y**‖ cos *θ *for the angle *θ *between two vectors in ℝ², we are motivated to *define *the *angle *∠(**x, y**) between nonzero vectors **x **and **y **of ℝⁿ by

∠(**x, y**) = arccos (**x · y**/‖**x**‖ ‖**y**‖);

this is well defined because **x · y**/‖**x**‖ ‖**y**‖ lies in [−1, 1] by the Cauchy–Schwarz inequality. In particular we say that **x **and **y **are *orthogonal *(or perpendicular) if and only if **x · y **= 0, so that ∠(**x, y**) = arccos 0 = *π*/2.

A set of nonzero vectors **v**1, **v**2, . . . in *V *is said to be an *orthogonal set *if

⟨**v***i*, **v***j*⟩ = 0

whenever *i ≠ j*. If in addition each **v***i *is a unit vector, that is, ⟨**v***i*, **v***i*⟩ = 1, then the set is said to be *orthonormal*.

**Example 5** The standard basis vectors **e**1, . . . , **e***n *form an orthonormal basis for ℝⁿ.

**Example 6** The (infinite) set of functions 1, cos *x*, sin *x*, cos 2*x*, sin 2*x*, . . . is an orthogonal set in the vector space of continuous functions on [−*π*, *π*], with the inner product of **Example 1 **(see **Exercise 3.11**). This fact is the basis for the theory of Fourier series.

The most important property of orthogonal sets is given by the following result.

**Theorem 3.2**Every finite orthogonal set of nonzero vectors is linearly independent.

PROOF Suppose that

*a*1**v**1 + *a*2**v**2 + · · · + *ak***v***k *= **0**.     (3)

Taking the inner product with **v***i*, we obtain *ai*⟨**v***i*, **v***i*⟩ = 0, because ⟨**v***j*, **v***i*⟩ = 0 for *j ≠ i*, the vectors **v**1, . . . , **v***k *being orthogonal. But ⟨**v***i*, **v***i*⟩ ≠ 0, so *ai *= 0. Thus (**3**) implies *a*1 = · · · = *ak *= 0, so the orthogonal vectors **v**1, . . . , **v***k *are linearly independent.

We now describe the important *Gram–Schmidt orthogonalization process *for constructing orthogonal bases. It is motivated by the following elementary construction. Given two linearly independent vectors **v **and **w**1, we want to find a nonzero vector **w**2 that lies in the subspace spanned by **v **and **w**1, and is orthogonal to **w**1. **Figure 1.3 **suggests that such a vector **w**2 can be obtained by subtracting from **v **an appropriate multiple *c***w**1 of **w**1. To determine *c*,

*Figure 1.3 *

we solve the equation ⟨**w**1, **v **− *c***w**1⟩ = 0 for *c *= ⟨**v, w**1⟩/⟨**w**1, **w**1⟩. The desired vector is therefore

**w**2 = **v **− (⟨**v, w**1⟩/⟨**w**1, **w**1⟩) **w**1,

obtained by subtracting from **v **the "component of **v **parallel to **w**1." By construction ⟨**w**2, **w**1⟩ = 0, while **w**2 ≠ 0 because **v **and **w**1 are linearly independent.

**Theorem 3.3**If *V *is a finite-dimensional vector space with an inner product, then *V *has an orthogonal basis.

In particular, every subspace of ℝⁿ has an orthogonal basis.

PROOF We start with an arbitrary basis **v**1, . . . , **v***n *for *V*. Let **w**1 = **v**1. Then, by the preceding construction, the nonzero vector

**w**2 = **v**2 − (⟨**v**2, **w**1⟩/⟨**w**1, **w**1⟩)**w**1

is orthogonal to **w**1 and lies in the subspace generated by **v**1 and **v**2.

Suppose inductively that we have found an orthogonal basis **w**1, . . . , **w***k *for the subspace of *V *that is generated by **v**1, . . . , **v***k*. The idea is then to obtain **w***k*+1 by subtracting from **v***k*+1 its components parallel to each of the vectors **w**1, . . . , **w***k*. That is, define

**w***k*+1 = **v***k*+1 − *c*1**w**1 − *c*2**w**2 − · · · − *ck***w***k*,

where *ci *= ⟨**v***k*+1, **w***i*⟩/⟨**w***i*, **w***i*⟩. Then ⟨**w***k*+1, **w***i*⟩ = ⟨**v***k*+1, **w***i*⟩ − *ci*⟨**w***i*, **w***i*⟩ = 0 for each *i *≤ *k*, and **w***k*+1 ≠ 0, because otherwise **v***k*+1 would be a linear combination of the vectors **w**1, . . . , **w***k*, and therefore of the vectors **v**1, . . . , **v***k*. It follows that the vectors **w**1, . . . , **w***k*+1 form an orthogonal basis for the subspace of *V *that is generated by **v**1, . . . , **v***k*+1.

After a finite number of such steps we obtain the desired orthogonal basis **w**1, . . . , **w***n *for *V*.

It is the method of proof of **Theorem 3.3 **that is known as the Gram–Schmidt orthogonalization process, summarized by the equations

**w**1 = **v**1,
**w**2 = **v**2 − (⟨**v**2, **w**1⟩/⟨**w**1, **w**1⟩)**w**1,
**w**3 = **v**3 − (⟨**v**3, **w**1⟩/⟨**w**1, **w**1⟩)**w**1 − (⟨**v**3, **w**2⟩/⟨**w**2, **w**2⟩)**w**2,
. . . . . . . . .
**w***n *= **v***n *− (⟨**v***n*, **w**1⟩/⟨**w**1, **w**1⟩)**w**1 − · · · − (⟨**v***n*, **w***n*−1⟩/⟨**w***n*−1, **w***n*−1⟩)**w***n*−1,

defining the orthogonal basis **w**1, . . . , **w***n *in terms of the original basis **v**1, . . . , **v***n*.
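The equations above translate directly into a short program; a minimal sketch, assuming NumPy and using the usual dot product in place of a general inner product, is given below. Applied to the vectors of Example 7 following, it reproduces the orthogonal basis computed there.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize linearly independent vectors by the Gram-Schmidt process."""
    ws = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in ws:
            # subtract the component parallel to the previously constructed u
            w -= (np.dot(w, u) / np.dot(u, u)) * u
        ws.append(w)
    return ws

basis = gram_schmidt([(1, 1, 0, 0), (1, 0, 1, 0), (0, 1, 0, 1)])
for w in basis:
    print(w)
# w1 = (1, 1, 0, 0), w2 = (0.5, -0.5, 1, 0), w3 = (-1/3, 1/3, 1/3, 1) approximately
```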

**Example 7** To find an orthogonal basis for the subspace *V *of ℝ⁴ spanned by the vectors **v**1 = (1, 1, 0, 0), **v**2 = (1, 0, 1, 0), **v**3 = (0, 1, 0, 1), we write

**w**1 = **v**1 = (1, 1, 0, 0),

**w**2 = **v**2 − (⟨**v**2, **w**1⟩/⟨**w**1, **w**1⟩)**w**1 = (1, 0, 1, 0) − (1/2)(1, 1, 0, 0) = (1/2, −1/2, 1, 0),

**w**3 = **v**3 − (⟨**v**3, **w**1⟩/⟨**w**1, **w**1⟩)**w**1 − (⟨**v**3, **w**2⟩/⟨**w**2, **w**2⟩)**w**2 = (0, 1, 0, 1) − (1/2)(1, 1, 0, 0) + (1/3)(1/2, −1/2, 1, 0) = (−1/3, 1/3, 1/3, 1).

**Example 8** Let 𝒫 denote the vector space of all polynomials in *x*, with inner product defined by

⟨*p*, *q*⟩ = ∫₋₁¹ *p*(*x*)*q*(*x*) *dx*.

By applying the Gram–Schmidt orthogonalization process to the linearly independent elements 1, *x*, *x*², . . . , *xn*, . . . , one obtains an orthogonal sequence of polynomials {*pn*(*x*)}, the first five elements of which are

*p*0(*x*) = 1,  *p*1(*x*) = *x*,  *p*2(*x*) = *x*² − 1/3,  *p*3(*x*) = *x*³ − (3/5)*x*,  *p*4(*x*) = *x*⁴ − (6/7)*x*² + 3/35

(see **Exercise 3.12**). Upon multiplying the polynomials {*pn*(*x*)} by appropriate constants, one obtains the famous Legendre polynomials

*P*0(*x*) = 1,  *P*1(*x*) = *x*,  *P*2(*x*) = (3*x*² − 1)/2,  *P*3(*x*) = (5*x*³ − 3*x*)/2,  *P*4(*x*) = (35*x*⁴ − 30*x*² + 3)/8,

etc.
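The same computation can be done symbolically; a minimal sketch, assuming SymPy and the inner product on [−1, 1] used above, recovers the polynomials *pn*(*x*).

```python
import sympy as sp

x = sp.symbols('x')

def inner(f, g):
    # inner product <f, g> = integral of f*g over [-1, 1]
    return sp.integrate(f * g, (x, -1, 1))

monomials = [sp.Integer(1), x, x**2, x**3, x**4]
ortho = []
for v in monomials:
    w = v
    for u in ortho:
        w -= inner(v, u) / inner(u, u) * u   # Gram-Schmidt step
    ortho.append(sp.expand(w))

print(ortho)
# [1, x, x**2 - 1/3, x**3 - 3*x/5, x**4 - 6*x**2/7 + 3/35]
```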

Any vector **v **of *V *can be expressed as a linear combination of orthogonal basis vectors **w**1, . . . , **w***n *for *V*. Writing

**v **= *a*1**w**1 + *a*2**w**2 + · · · + *an***w***n*

and taking the inner product with **w***i*, we immediately obtain

*ai*⟨**w***i*, **w***i*⟩ = ⟨**v**, **w***i*⟩,

so

*ai *= ⟨**v**, **w***i*⟩/⟨**w***i*, **w***i*⟩.

This is especially simple if **w**1, . . . , **w***n *is an orthonormal basis for *V*:

**v **= ⟨**v**, **w**1⟩**w**1 + ⟨**v**, **w**2⟩**w**2 + · · · + ⟨**v**, **w***n*⟩**w***n*.     (5)

Of course orthonormal basis vectors are easily obtained from orthogonal ones, simply by dividing by their lengths. In this case the coefficient **v · w***i *of **w***i *in (**5**) is sometimes called the *Fourier coefficient *of **v **with respect to **w***i*. The classical Fourier coefficients of a continuous function *f *are obtained in exactly this way from the orthogonal functions of **Example 6**: each coefficient is the inner product of *f *with one of the functions cos *nx *or sin *nx*, divided by the inner product of that function with itself. It can then be established, under appropriate conditions on *f*, that the resulting infinite series of multiples of these functions converges to *f*(*x*). This infinite series, the *Fourier series *of *f*, may be regarded as an infinite-dimensional analog of (**5**).
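For reference, in one standard normalization (the text's own normalization may differ), the Fourier coefficients of a function *f *on [−*π*, *π*] and its Fourier series are:

```latex
a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx \, dx, \qquad
b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx \, dx, \qquad
f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\bigl(a_n\cos nx + b_n\sin nx\bigr).
```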

Given a subspace *V *of ℝⁿ, denote by *V*⊥ the set of all those vectors in ℝⁿ, each of which is orthogonal to every vector in *V*. Then it is easy to show that *V*⊥ is also a subspace of ℝⁿ, called the *orthogonal complement *of *V *(**Exercise 3.3**). The significant fact about this situation is that the dimensions add up as they should.

**Theorem 3.4** If *V *is a subspace of ℝⁿ, then

dim *V *+ dim *V*⊥ = *n*.

PROOF By **Theorem 3.3**, there exists an orthonormal basis **v**1, . . . , **v***r *for *V*, and an orthonormal basis **w**1, . . . , **w***s *for *V*⊥. Then the vectors **v**1, . . . , **v***r*, **w**1, . . . , **w***s *are orthonormal, and therefore linearly independent. So in order to conclude from **Theorem 2.5 **that *r + s = n*, it suffices to show that these vectors generate ℝⁿ. Given **x **in ℝⁿ, define

**y **= **x **− (**x · v**1)**v**1 − · · · − (**x · v***r*)**v***r*.     (7)

Then **y · v***i *= **x · v***i *− (**x · v***i*)(**v***i *· **v***i*) = 0 for each *i *= 1, . . . , *r*. Since **y **is orthogonal to each element of a basis for *V*, it follows that **y **is orthogonal to every vector in *V*, that is, **y **lies in *V*⊥ (**Exercise 3.4**). Therefore Eq. (5) above gives

**y **= (**y · w**1)**w**1 + · · · + (**y · w***s*)**w***s*.

This and (**7**) then yield

**x **= (**x · v**1)**v**1 + · · · + (**x · v***r*)**v***r *+ (**y · w**1)**w**1 + · · · + (**y · w***s*)**w***s*,

so the vectors **v**1, . . . , **v***r*, **w**1, . . . , **w***s *generate ℝⁿ.

**Example 9** Consider the system

*a*11*x*1 + *a*12*x*2 + · · · + *a*1*n xn *= 0,
. . . . . . . . .
*ak*1*x*1 + *ak*2*x*2 + · · · + *akn xn *= 0

of *k *homogeneous linear equations in *x*1, . . . , *xn*. If **a***i *= (*ai*1, . . . , *ain*), *i *= 1, . . . , *k*, then these equations can be rewritten as

**a**1 · **x **= 0, . . . , **a***k *· **x **= 0.

Therefore the set *S *of all solutions of this system is simply the set of all those vectors in ℝⁿ that are orthogonal to each of the vectors **a**1, . . . , **a***k*. If *V *is the subspace of ℝⁿ generated by **a**1, . . . , **a***k*, it follows that *S *= *V*⊥ (**Exercise 3.4**). If the vectors **a**1, . . . , **a***k *are linearly independent, we can then conclude from **Theorem 3.4 **that dim *S *= *n − k*.
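This dimension count is easy to confirm numerically, since the solution space *S *is the null space of the *k *× *n *coefficient matrix; a minimal sketch, assuming SciPy and an arbitrarily chosen pair of independent rows in ℝ⁴:

```python
import numpy as np
from scipy.linalg import null_space

# k = 2 independent rows a_1, a_2 in R^4, so Theorem 3.4 predicts dim S = 4 - 2 = 2.
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])

S = null_space(A)             # columns form an orthonormal basis of the solution space
print(S.shape[1])             # 2, the dimension n - k
print(np.allclose(A @ S, 0))  # True: every column solves the system
```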

*Exercises *

**3.1** Show that ⟨**0, 0**⟩ = 0.

**3.2** Verify that the functions defined in **Examples 3 **and **4 **are norms on ℝⁿ.

**3.3** If *V *is a subspace of ℝⁿ, prove that *V*⊥ is also a subspace.

**3.4** If the vectors **a**1, . . . , **a***k *generate the subspace *V *of ℝⁿ, show that a vector **x **lies in *V*⊥ if and only if **x · a***i *= 0 for each *i *= 1, . . . , *k*.

**3.5**

**3.6** Let **a**1, **a**2, . . . , **a***n *be an orthonormal basis for ℝⁿ. If **x **= *s*1**a**1 + · · · + *sn***a***n *and **y **= *t*1**a**1 + · · · + *tn***a***n*, show that **x · y **= *s*1*t*1 + · · · + *sn tn*. That is, in computing **x · y**, one may replace the coordinates of **x **and **y **by their components with respect to any orthonormal basis for ℝⁿ.

**3.7**⁴.

**3.8**Orthogonalize the basis

*n*.

**3.9** Find an orthogonal basis for the 3-dimensional subspace *V *of ℝ⁴ that consists of all solutions of the equation *x*1 + *x*2 + *x*3