
Elementary Mathematical Concepts

Suppose you've no direction in you,
I don't see but you must continue
To use the gift you do possess,
And sway with reason more or less.
Robert Frost

Mazes intricate,
Eccentric, intervolv'd, yet regular
Then most, when most irregular they seem.
John Milton

2.1 Introduction
To adequately grasp the underlying concepts involved in the structure,
development and use of a reservoir simulator, it is necessary that we have
an understanding of some basic mathematical tools. In particular, we employ
some elementary ideas derived from vector analysis and matrix theory. In
what follows, we review only those basic concepts that are essential to our
purpose. Furthermore, the material presented here is restricted to the premise
that solutions to reservoir engineering problems are to be achieved by
numerical means.

2.2 Elementary Vector Analysis¹


We recall that a vector is usually defined as a directed line segment. It is
a quantity having both magnitude and direction. If we denote a vector
by v, then its magnitude is denoted by |v|, also called the modulus or norm
of v. Thus, the velocity of a particle of fluid at a point P in a reservoir R
is a vector. It is differentiated from the scalar quantities, temperature and
density at that point, in that the latter are not characterized by a directional
property.
Consider a cartesian coordinate system consisting of the x, y and z
12 Reservoir Simulation

axes. If a vector resides in this 3-dimensional space, then we can construct
its vector components by simply making projections on each of the coordinate
axes. Vector v is then the resultant of its vector components, i.e.,

v = v1 + v2 + v3.   (2.1)

Eq. 2.1 can be further expressed in terms of "unit vectors" i, j, k, each
with modulus unity and having directions that parallel the x, y and z coordinate
axes, respectively (see Fig. 2.1).

Fig. 2.1 Cartesian Coordinates.

Consequently, if |v1| = v1, |v2| = v2, |v3| = v3, then

v = v1 i + v2 j + v3 k,   (2.2)

where v1, v2, v3 are the "scalar components" of v. Now suppose we shift


the position of v along the ray extending from the origin in the direction
of v; i.e., we move v either closer to or farther from the origin without
changing its direction. Regardless of the position v occupies on this ray,
its representation is still given by Eq. 2.2. Stated another way, v is uniquely
determined by its scalar components. This observation makes possible an
alternative definition of a vector in 3-space; namely, a vector is simply an
ordered triple of numbers, i.e., v = (v1, v2, v3).

We refer to the collection of all such vectors as a 3-dimensional Euclidean
space, E3, if Eqs. 2.6-2.9 (see Section 2.2.2) are satisfied.
From the mathematical point of view, there is no reason to confine
ourselves to 3-dimensional Euclidean spaces. We can generalize the concept
illustrated above and simply say that a vector is an ordered n-tuple of
numbers, n ≥ 1, and the collection of all such vectors constitutes an n-dimensional
Euclidean space, En.† In adopting this definition, we recognize that the
concept of an n-dimensional vector (where n > 3) is an abstract one. It is
not possible to display it pictorially. Furthermore, the dimensions of a Euclidean
space should not be confused with the dimensionality of a reservoir,
which is modeled at most in 3 dimensions. Stated another way, a Euclidean
space is not the spatial configuration we assign to a reservoir. Rather it is
a mathematical entity that provides us a framework within which we can
discuss the numerical solution of reservoir engineering problems. Thus we
shall see that the computation of pressure at n ordered points in a reservoir
can be thought of as finding the vector (p1, p2, p3, . . . , pn). In this case,
we speak of such a set of points as the "solution vector."

2.2.1 Vector Gradient


Let Φ be a scalar function such that ∂Φ/∂x, ∂Φ/∂y, ∂Φ/∂z are continuous at some
point P in R. Physically these represent rates of change with respect to
distance in each of the coordinate directions x, y and z. The gradient of Φ
is given by

grad Φ = (∂Φ/∂x) i + (∂Φ/∂y) j + (∂Φ/∂z) k.

The "del" or "nabla" operator,

∇ = i ∂/∂x + j ∂/∂y + k ∂/∂z,

is frequently used as a differential operator; thus, grad Φ = ∇Φ.


Suppose Φ(x, y, z) = c represents surfaces, Sc, in R for all values of
the constant c. We refer to the scalar field Φ, whose gradient is ∇Φ, as the
potential of the vector field ∇Φ. The corresponding surfaces, Sc, are equipotential
surfaces. One of the problems we are confronted with in simulating
reservoir behavior is determining potential distributions, or the potential

† If the components, i = 1, 2, . . . , n, are real, then En is a real Euclidean n-space. If they are complex,
then En is a complex Euclidean n-space.
gradients ∇Φ, throughout the system. The potential gradient ∇Φ has the
following important properties:
(1) It is a vector function.


(2) Its direction is the direction of maximum increase of Φ.
(3) It is always perpendicular to the equipotential surface, Sc, defined
by Φ(x, y, z) = c.
(4) It remains invariant under a coordinate transformation.

2.2.2 Vector Algebra


At this point it is desirable to consider algebraic manipulations of vectors.
Consistent with our desire to only touch on those aspects needful to
our ultimate purposes, we make no effort to completely cover vector algebra.
Vector addition and subtraction are defined on a component basis. If

a = (a1, a2, . . . , an) and b = (b1, b2, . . . , bn),

then we define

a ± b = (a1 ± b1, a2 ± b2, . . . , an ± bn),

where the only requirement we impose on a and b is that they both have
the same number of components. In such a case, we say they are conformable
for addition. The following is also true:

a + b = b + a   (commutativity)   (2.6)

a + (b + c) = (a + b) + c   (associativity)   (2.7)

a + 0 = a   (2.8)

a + (-a) = 0   (2.9)

The vector, 0, is the null vector consisting of zero elements.


We consider also scalar multiplication of a vector. If α is a scalar and
a = (a1, a2, . . . , an), then αa = (αa1, αa2, . . . , αan). Furthermore, αa =
aα, α(a + b) = αa + αb and (α + β)a = αa + βa for β another scalar.
Moreover, (αβ)a = α(βa) and 1a = a. All vectors in a Euclidean space satisfying
these properties of scalar multiplication constitute a vector space.
The dot or inner product of two vectors a and b is given by

a · b = a1b1 + a2b2 + . . . + anbn.   (2.10)

It can also be shown that

a · b = |a| |b| cos θ,   (2.11)

where θ is the angle between a and b. If a · b = 0 then we say a and b
are orthogonal. Furthermore, the length or norm of a vector is given by
|a| = (a · a)^(1/2).

The dot product satisfies the following properties:

a · b = b · a   (commutativity)   (2.12)

a · (b + c) = a · b + a · c   (distributivity)   (2.13)

Since the cartesian unit vectors i, j, k are at right angles to each other
(i.e., they are mutually orthogonal), it follows that i · i = j · j = k · k = 1
and i · j = i · k = j · k = 0.
I
In Eq. 2.11 we can consider the quantity |a| cos θ as the component
of vector a in the direction of b. This interpretation is useful in deriving
the continuity equation using vector analysis principles, as will be seen. It
is important to notice that forming dot products of vectors results in a
scalar quantity. Thus, it is sometimes called the scalar product.
Another kind of product, the cross-product or vector product, can be
defined such that a vector rather than a scalar quantity is obtained. It is
particularly useful in depicting those processes characterized by rotational
flow. Since such regimes are generally negligible in global reservoir problems,
we make no attempt to treat it here. The interested reader is referred
to any elementary text on vector analysis for further details.
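The dot-product relations above are easy to verify numerically. A minimal Python sketch (the particular vectors a and b are illustrative, not taken from the text):

```python
import math

def dot(a, b):
    # Eq. 2.10: sum of products of corresponding components
    assert len(a) == len(b), "vectors must be conformable"
    return sum(ai * bi for ai, bi in zip(a, b))

def norm(a):
    # |a| = (a . a)^(1/2)
    return math.sqrt(dot(a, a))

a = (1.0, 2.0, 2.0)
b = (2.0, 0.0, -1.0)

# Eq. 2.11 rearranged: cos(theta) = (a . b) / (|a| |b|)
cos_theta = dot(a, b) / (norm(a) * norm(b))
print(dot(a, b), norm(a), cos_theta)  # -> 0.0 3.0 0.0, so a and b are orthogonal
```

Since a · b = 0 here, Eq. 2.11 gives cos θ = 0, i.e., the two vectors meet at a right angle.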

2.2.3 Divergence

Let v(x, y, z) be a velocity vector at a point P in 3-space. The divergence
of v is given by

div v = ∇ · v = ∂vx/∂x + ∂vy/∂y + ∂vz/∂z.
We observe the following facts about the divergence of a vector:

(1) It is a scalar quantity.


(2) It remains invariant under a coordinate transformation.
(3) If ρ is the density of a fluid at P having a velocity v, then ρv = q
will be the mass flux at P. Physically, div q represents the rate of
decrease of mass per unit volume in the neighborhood of the point
P where div v is defined.

At this point, it is worthwhile to emphasize a common property of


both the gradient of a scalar quantity and the divergence of a vector: both
remain unchanged when the system of coordinates is altered. In other words,
they are independent of the coordinate system we desire to employ. This
means that if we can express the equations describing fluid flow in a reservoir
exclusively in terms of these vector quantities, they will be universally appli-
cable regardless of the coordinate system we impose on the system. The
key to doing this is to employ Gauss' theorem which is stated without proof
in the next section. Before treating that, it is of interest to consider the
divergence of the gradient of Φ, i.e.,

div (grad Φ) = ∇ · ∇Φ = ∇²Φ,

where

∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²

is the Laplacian operator.
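The identity div (grad Φ) = ∇²Φ can be spot-checked with central finite differences, the same kind of approximation a simulator uses. A sketch with an illustrative potential Φ = x² + y² + z², whose Laplacian is exactly 6 everywhere:

```python
def phi(x, y, z):
    # illustrative scalar potential; its Laplacian is exactly 6
    return x * x + y * y + z * z

def laplacian(f, x, y, z, h=1e-3):
    # sum of central second differences in each coordinate direction
    d2x = (f(x + h, y, z) - 2 * f(x, y, z) + f(x - h, y, z)) / h**2
    d2y = (f(x, y + h, z) - 2 * f(x, y, z) + f(x, y - h, z)) / h**2
    d2z = (f(x, y, z + h) - 2 * f(x, y, z) + f(x, y, z - h)) / h**2
    return d2x + d2y + d2z

print(laplacian(phi, 1.0, 2.0, -0.5))  # very close to 6 at any point P
```

Because the approximation is the same at every point P, the computed value is independent of where the stencil is centered, mirroring statement (2) about coordinate invariance.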

2.2.4 Gauss' Theorem and the Continuity Equation


Gauss' theorem (also called the Divergence Theorem) relates an integral
over a volume, R, to an integral defined on its surface, S, namely,

∫_R div v dv = ∮_S v · dσ,

where v is a velocity vector in R, dv is a differential element of volume
in R, dσ is a directed element of surface = n ds, and n is an outward-drawn
unit vector normal to the scalar surface element, ds, as depicted in Fig. 2.2.
If we consider the fluid flux q = ρv at a point P, then

∫_R div (ρv) dv = ∮_S ρv · n ds.   (2.19)
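Gauss' theorem can be checked numerically. A sketch using the midpoint rule on the unit cube with the illustrative field v = (x², y², z²), for which div v = 2x + 2y + 2z and the exact common value of both integrals is 3:

```python
# Midpoint-rule check of Gauss' theorem on the unit cube for v = (x^2, y^2, z^2).
N = 40
h = 1.0 / N
mid = [(i + 0.5) * h for i in range(N)]

# left-hand side: volume integral of div v = 2x + 2y + 2z over the cube
lhs = sum((2 * x + 2 * y + 2 * z) * h**3 for x in mid for y in mid for z in mid)

# right-hand side: outward flux v . n ds over the six faces; only the faces
# at x = 1, y = 1, z = 1 contribute, and v . n = 1 on each of them
rhs = sum(3 * 1.0 * h**2 for a in mid for b in mid)

print(round(lhs, 9), round(rhs, 9))  # both sides equal 3.0
```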

Fig. 2.2 Element of Surface in R.

Now since ρv · n ds = |ρv| |n ds| cos θ = ρ ds |v| cos θ, where θ is the
angle between vectors n and v, then ρv · n ds physically represents the
component of the fluid flux escaping from R through the element of surface
ds in the direction of the outward drawn normal. Consequently, the integral
of this quantity over the entire surface of R, i.e., the right-hand side of
Eq. 2.19, represents the rate of decrease of mass from R. This can also be
expressed as

∮_S ρv · n ds = -∫_R ∂(φρ)/∂t dv,   (2.20)

where φ is the porosity. Therefore it follows, combining Eq. 2.19 and Eq. 2.20, that

∫_R div (ρv) dv = -∫_R ∂(φρ)/∂t dv.   (2.21)

Since R is an arbitrary volume, it follows that the arguments of the integrals in
Eq. 2.21 are identical, i.e.,

div (ρv) = -∂(φρ)/∂t.   (2.22)

Eq. 2.22 is known as the continuity equation. It is simply an expression of
the law of conservation of mass at a point P in R.
If a source or sink is present at the point P, then we add a mass rate
term, Q say, to the continuity equation,

div (ρv) ± Q = -∂(φρ)/∂t.   (2.23)

The choice of sign on the additive term is purely arbitrary. We adopt the
convention that the minus sign represents a source and the plus sign a
sink.

2.3 Matrix Methods²

2.3.1 Matrices
A matrix is simply a rectangular array of elements arranged in horizontal
rows and vertical columns. We say a matrix is of order m × n if it consists
of m rows and n columns. If m = n, we call the matrix a square matrix of
nth order. Thus, a square 3 × 3 array A is a 3rd-order square matrix, while
a 3 × 2 array B and a 2 × 3 array C are rectangular. In general, an m × n
matrix will be denoted by

    | a11  a12  . . .  a1n |
A = | a21  a22  . . .  a2n |
    |  .    .           .  |
    | am1  am2  . . .  amn |.

This can be conveniently abbreviated by A = [aij], meaning A is a collection
of elements with row index i and column index j.
The elements of a matrix need not be numbers. They can be functions,
operators or even other matrices. We emphasize that a matrix is not a single
number nor can it be evaluated (like a determinant) to yield a number.

2.3.2 Types of Matrices


For our purposes, we will be working with square matrices almost exclusively,
i.e., matrices of the form A = [aij], i, j = 1, 2, . . . , n.
The collection of elements aii is called the main diagonal of the matrix.
If all the elements of A are zero except possibly those on the main diagonal,
then A is called a diagonal matrix. This is conveniently depicted by

A = diag(a11, a22, . . . , ann).

If aii = a, a constant for all i, then A is called a scalar matrix. An important
scalar matrix, called the identity matrix, arises when a = 1, and is symbolized
by I.
A lower triangular matrix, L, is a square matrix where aij = 0 for
i < j, while an upper triangular matrix, U, has elements aij = 0 for i > j.
We will have occasion to consider the transpose of a matrix, i.e., if A =
[aij], then the transpose of A, denoted by Aᵀ, is given by Aᵀ = [aji]. Notice
that to form Aᵀ we merely interchange the rows and columns of A.
A square matrix A is said to be symmetric if A = Aᵀ. If A = -Aᵀ then
it is skew-symmetric. It will be observed that the vector v = (v1, v2, . . . ,
vn) is a 1 × n matrix, i.e., it is a row matrix. Similarly vᵀ, its transpose,
is an n × 1 column matrix. Furthermore, we refer to the rows and columns
of an n × n matrix A as the row vectors and column vectors, respectively,
since each is an ordered n-tuple of elements.
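The transpose and the symmetry test translate directly into a short sketch (the matrices used are illustrative):

```python
def transpose(A):
    # interchange rows and columns: (A^T)_ij = A_ji
    return [list(row) for row in zip(*A)]

def is_symmetric(A):
    return A == transpose(A)

A = [[1, 2, 3],
     [2, 5, 6],
     [3, 6, 9]]  # equal to its own transpose
B = [[0, 1],
     [4, 0]]     # not equal to its transpose

print(is_symmetric(A), is_symmetric(B))  # -> True False
```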

2.3.3 Matrix Operations

Below we outline the essential operations involving matrices. These
involve equality, addition and subtraction, scalar multiplication and matrix
multiplication.
Two matrices A and B are equal if and only if they are of the same
order and aij = bij for every i, j. If A = [aij] and B = [bij] and A and B
are of the same order, then we can write C = A ± B where cij = aij ± bij
for every i, j. If α is a scalar and A = [aij] then we can say B = αA where
bij = αaij for every i, j. The following properties hold for these operations:
A + B = B + A
A + (B + C) = (A + B) + C
A + 0 = A
αA = Aα
α(A + B) = αA + αB
(α1 + α2)A = α1A + α2A
(α1α2)A = α1(α2A)

The matrix, 0, is the zero or null matrix having all entries zero.
We would also like to have a definition of matrix multiplication; i.e.,
if A = [aij] and B = [bij] then we consider the product AB. If A is m × n
and B is n × p then for C = AB we define

cij = ai1 b1j + ai2 b2j + . . . + ain bnj.

The important requirement for matrix multiplication is that the number
of columns of A must equal the number of rows of B. If this is satisfied,
we say A and B are conformable for multiplication. The resultant matrix
C will be of order m × p. Thus, if A is 4 × 3 and B is 3 × 5, C will be
4 × 5. On the other hand, if A is 6 × 4 and B is 3 × 6, AB is undefined;
however, BA will be defined since (3 × 6*) (6* × 4) has adjacent numbers
(indicated by asterisks) equal. In general, one may determine conformability
for multiplication by considering the expression (m × n*) (n* × p). If the
starred quantities are equal, then multiplication is defined. We consider
an example: find AB and BA for a pair of conformable square matrices.

Notice both products AB and BA are defined, but AB ≠ BA. In general,
matrix multiplication is not commutative. However, matrix multiplication
satisfies the following properties:

A(BC) = (AB)C
A(B + C) = AB + AC
(A + B)C = AC + BC

We know that if x and y are real numbers then xy = 0 implies that either
x = 0 or y = 0 or both. Matrices, however, do not possess this property.
Furthermore, if AB = AC then this does not imply that B = C. Consequently,
cancellation or division is not a valid operation in matrix algebra.
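Both oddities, AB ≠ BA and AB = 0 with nonzero factors, can be exhibited with 2 × 2 matrices. A sketch with illustrative matrices:

```python
def matmul(A, B):
    # C = AB requires columns of A to equal rows of B
    n, p = len(B), len(B[0])
    assert len(A[0]) == n, "not conformable for multiplication"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

A = [[1, 1],
     [1, 1]]
B = [[1, 1],
     [-1, -1]]

print(matmul(A, B))  # -> [[0, 0], [0, 0]]: AB = 0, yet A != 0 and B != 0
print(matmul(B, A))  # -> [[2, 2], [-2, -2]]: BA != 0, so AB != BA as well
```

The same pair also shows why cancellation fails: AB = A0 = 0 even though B is not the zero matrix.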

2.3.4 Determinants

A determinant is a single number that we associate with a square matrix
A. It is denoted by det(A) or |A|. It should not be confused with the matrix
itself. To compute det(A) we consider minors and cofactors. Given a square
matrix A, a minor is the determinant of any square submatrix of A obtained
by removal of an equal number of rows and columns. The cofactor of the
element aij is a scalar obtained by multiplying together the term (-1)^(i+j)
and the minor Mij obtained by removing the ith row and the jth column.
To find det(A) we proceed as follows: (1) select any row or column of A
(say the kth row or column); (2) for each element in this row or column
find the cofactor, Cij = (-1)^(i+j) Mij (i = k or j = k); (3) multiply each
element in the row or column selected by Cij and sum the results. This is
det(A). Thus, if we selected the kth row,

det(A) = ak1 Ck1 + ak2 Ck2 + . . . + akn Ckn,

or the kth column,

det(A) = a1k C1k + a2k C2k + . . . + ank Cnk.

The amount of work involved in computing the determinant of an
nth order matrix when n is large is awesome even for a high speed computing
machine. Consequently, we avoid it if at all possible. For example, if A is
25 × 25 then 25! multiplications are required. On a Cray 1, 2.5 × 10⁻⁸
seconds are required per multiplication.† Thus, it would take in excess of
10¹⁰ years to evaluate |A|.
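The three-step cofactor recipe translates into a short recursive routine, and a counter of element-by-cofactor multiplications makes the rapid growth visible. A sketch (the 3 × 3 matrix is illustrative):

```python
mults = 0  # counts element-by-cofactor multiplications

def det(A):
    # cofactor expansion along the first row of A
    global mults
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor M_1j: delete row 1 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        mults += 1
        total += (-1) ** j * A[0][j] * det(minor)
    return total

A = [[1, 1, 1],
     [2, 1, 1],
     [-2, 1, 0]]
print(det(A), mults)  # -> 1 9
```

The count obeys T(n) = n + nT(n-1), which grows on the order of n!, the infeasibility quoted above for n = 25; practical codes obtain det(A) as a by-product of triangularization instead.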
2.3.5 Matrix Inverse and Simultaneous Equations

The inverse of an n × n matrix A is a square matrix B satisfying

AB = BA = I,

and is denoted by B = A⁻¹. Not every square matrix has an inverse. If
|A| ≠ 0 then A⁻¹ exists and we say A is invertible or nonsingular. If, on
the other hand, |A| = 0 then A⁻¹ will not exist and A is singular. The utility
of the inverse is seen when we consider the solution of simultaneous algebraic
equations. For example, suppose we wish to solve a set of three equations
for x, y, and z.

We can write this problem in terms of a matrix equation Ax = b, where
A, called the coefficient matrix, contains the coefficients of the unknowns,
x = (x, y, z)ᵀ is the vector of unknowns, and b is the vector of right-hand-side
constants. If |A| ≠ 0, then A is nonsingular and the solution vector x
can be found by premultiplying the matrix equation by A⁻¹, i.e.,

x = A⁻¹b.
As we shall subsequently see, techniques are employed to reduce the
reservoir fluid flow equations to systems of simultaneous algebraic equations
of the form

a11 x1 + a12 x2 + . . . + a1n xn = b1
a21 x1 + a22 x2 + . . . + a2n xn = b2
. . .
an1 x1 + an2 x2 + . . . + ann xn = bn,   (2.27)

† Assuming vectorized code and 100% efficiency.


which are most conveniently solved by matrix analysis. Eq. 2.27 can be
represented in the compressed matrix form Ax = b, where A = [aij],
x = (x1, x2, . . . , xn)ᵀ and b = (b1, b2, . . . , bn)ᵀ.
2.3.6 Matrix Eigenvalue Problem

Many applications of matrices require a solution to the problem Ax =
λx, where A is nth order, x is a nonzero vector and λ is a scalar. We want
to find, for a given matrix A, those numbers λ such that the matrix multiplication
Ax yields the same thing as the scalar multiplication λx.
This is known as the matrix eigenvalue problem, where λ is an eigenvalue
and x is the eigenvector associated with λ. Since Ax = λx, we can also write
(A - λI)x = 0. This corresponds to a homogeneous set of n algebraic equations
in n unknowns. Obviously x = 0 is a solution (the trivial one); however,
we exclude this possibility since we restrict x to be nonzero. It can be
shown that nontrivial solutions will exist if and only if |A - λI| = 0. The
expansion of this determinant yields a polynomial of degree n called the
characteristic polynomial, p(λ) say, whose roots {λi}, i = 1, . . . , n, are the
eigenvalues we seek. They may be real or complex numbers. The spectral
radius of matrix A is defined by

ρ(A) = max |λi|,   1 ≤ i ≤ n.

It plays an important role in determining whether or not a stable, convergent
solution is possible for a given matrix problem.
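For a 2 × 2 matrix the characteristic polynomial is just λ² - (tr A)λ + det A = 0, so the eigenvalues and spectral radius can be computed directly. A sketch with an illustrative symmetric matrix (real eigenvalues assumed):

```python
import math

def eig2(A):
    # roots of the characteristic polynomial p(lam) = lam^2 - tr*lam + det
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)  # assumes real eigenvalues
    return (tr + disc) / 2, (tr - disc) / 2

def spectral_radius(A):
    return max(abs(lam) for lam in eig2(A))

A = [[2, 1],
     [1, 2]]
print(eig2(A), spectral_radius(A))  # -> (3.0, 1.0) 3.0
```

For larger matrices one does not expand the determinant symbolically; library routines compute eigenvalues by iterative factorization instead.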

2.4 Solution of Simultaneous Linear Algebraic Equations³,⁴

Techniques for solving systems of linear algebraic equations in a reservoir
simulator can be broadly categorized as direct or iterative methods. Each
has its particular advantages and disadvantages, which will be discussed more
fully later.

2.4.1 Gaussian Elimination


We consider solving Eq. 2.27 for the unknowns x1, x2, . . . , xn, assuming
the aij's and bi's are known. The method of Gaussian elimination, a direct
method, consists of reducing the system of n equations in n unknowns to
a system of (n - 1) equations in (n - 1) unknowns. Next the system of
(n - 1) equations in (n - 1) unknowns is reduced to a system of (n - 2)
equations in (n - 2) unknowns. This process is continued until one obtains
one equation in one unknown. Thus, the one unknown is determined. The
remaining unknowns are found by back-substitution.
To illustrate, consider the following example:

x1 + x2 + x3 = 2
2x1 + x2 + x3 = 3
-2x1 + x2 = 0.   (2.29)

Subtract twice the first row from the second row and add twice the first
row to the third; thus,

x1 + x2 + x3 = 2
-x2 - x3 = -1
3x2 + 2x3 = 4.   (2.30)

Multiply the second row by 3 and add the result to the third row; then
the equations reduce to

x1 + x2 + x3 = 2
-x2 - x3 = -1
-x3 = 1.   (2.31)

Therefore, x3 = -1.
Put the value of x3 = -1 in -x2 - x3 = -1 and obtain x2 = 2.
Substitute the values of x2 and x3 in x1 + x2 + x3 = 2 and one gets x1 = 1.
Note that in performing this procedure, the matrix and right-hand side
in Eq. 2.29 were

|  1  1  1 |       | 2 |
|  2  1  1 |  and  | 3 |.
| -2  1  0 |       | 0 |

Prior to performing the back-substitution these were transformed to those
of Eq. 2.31,

| 1  1  1 |       |  2 |
| 0 -1 -1 |  and  | -1 |,
| 0  0 -1 |       |  1 |

i.e., the matrix A was converted to an upper triangular matrix. Thus, solution
by Gaussian elimination is a triangularization of A to yield an upper triangular
matrix U followed by a back solution for the vector x. To achieve this,
all elements below the main diagonal of A are eliminated.
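The triangularization and back-substitution just described fit in a few lines of Python. This is a sketch rather than a production routine: no pivoting is done, so a zero on the diagonal would break it. Applied to the coefficient matrix implied by the elimination steps above, it reproduces x1 = 1, x2 = 2, x3 = -1:

```python
def gauss_solve(A, b):
    # forward elimination: zero out entries below the main diagonal
    n = len(A)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]  # assumes a nonzero pivot
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back-substitution on the upper triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[1.0, 1.0, 1.0],
     [2.0, 1.0, 1.0],
     [-2.0, 1.0, 0.0]]
b = [2.0, 3.0, 0.0]
print(gauss_solve(A, b))  # -> [1.0, 2.0, -1.0]
```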
Some variations of this procedure can be employed. Gauss-Jordan reduction
avoids the step of back-substitution. We illustrate this by the same
example. The first step is the same as in Gaussian elimination, to arrive at
Eq. 2.30. We then proceed as follows: multiply the second row of Eq. 2.30
by 3 and 1 and add the results to the third and the first row, respectively:

x1 = 1
-x2 - x3 = -1
-x3 = 1.

Next, add (-1) times the third row to the second row; hence

x1 = 1
-x2 = -2
-x3 = 1.

Then

x1 = 1, x2 = 2, and x3 = -1,

with no back-substitution required.
Another technique is to factor matrix A into lower and upper triangular
matrices L and U. Thus we have for Ax = b, LUx = b where A = LU.
Let Ux ≡ y; then the problem is solved in the following sequence of steps:

(1) Factorization, i.e., find L = [lij] and U = [uij], where lij = 0 for
i < j and uij = 0 for i > j.
(2) Solve Ly = b for y.
(3) Solve Ux = y.
In (1), U will be unit upper triangular, i.e., with ones on the main diagonal.
Step (2) is a forward solution for y and (3) is a backward solution for x.
This technique is called LU decomposition and is identical with Choleski's
(or Banachiewicz's) method for symmetric matrices. We illustrate the procedure
for the problem treated before.
The LU factors of A are

    |  1  0  0 |          | 1  1  1 |
L = |  2 -1  0 |  and U = | 0  1  1 |.
    | -2  3 -1 |          | 0  0  1 |

The forward solution of Ly = b gives

y1 = 2, y2 = 1, y3 = -1,

and the back solution of Ux = y is

x3 = -1, x2 = 2, and x1 = 1.

A number of solution techniques employed in reservoir simulators, both
direct and iterative, have their basis in the LU-decomposition concept. The
algorithm for finding the L and U factors is given in Appendix A.6.
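The three-step factor-and-solve sequence can be sketched with a Crout-style factorization that makes U unit upper triangular, as in step (1). No pivoting is attempted, so the sketch assumes the factorization exists:

```python
def lu_crout(A):
    # A = LU with U unit upper triangular (ones on its main diagonal)
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[float(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):       # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):   # row j of U
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):              # step (2): forward solution of Ly = b
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # step (3): back solution of Ux = y
        x[i] = y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))
    return x

A = [[1.0, 1.0, 1.0],
     [2.0, 1.0, 1.0],
     [-2.0, 1.0, 0.0]]
L, U = lu_crout(A)
print(lu_solve(L, U, [2.0, 3.0, 0.0]))  # -> [1.0, 2.0, -1.0]
```

Note that the back solution needs no division because the diagonal of U is all ones; the divisions were absorbed into L during factorization.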

2.4.2 Iterative Methods


We consider finding a solution to the matrix equation Ax = b (where
A is n × n) by iteration. Suppose we divide each row of A by its diagonal
element (assuming aii ≠ 0 for every i); then

DAx = Db,

where D is a diagonal matrix with dii = 1/aii, i = 1, 2, . . . , n, and B =
I - DA is an n × n matrix consisting of zeros on the diagonal whose
off-diagonal entries are -aij/aii, i ≠ j. With c = Db we can write

x = Bx + c.   (2.35)

A method of successive approximations is given by

x^(l+1) = Bx^(l) + c,   (2.36)

where l is an iteration level (x^(0) is arbitrary). Eq. 2.36 defines a convergent
process if for any given x^(0) the sequence {x^(l), l = 1, 2, 3, . . .} converges.
If the spectral radius of B is less than one, then convergence is guaranteed
for most iterative processes.
We illustrate the procedure with the example previously employed,
i.e., Eq. 2.29. To assure that the diagonals are all nonzero, we rewrite the
equations as

x3 + x2 + x1 = 2
x3 + x2 + 2x1 = 3
x2 - 2x1 = 0.   (2.37)

Matrix-wise this amounts to a column interchange of the first and last columns.
We could have alternatively interchanged the first and last rows.
The iteration matrix, B, and the vector c are

    |  0   -1   -1 |         | 2 |
B = | -1    0   -2 |,    c = | 3 |.
    |  0   1/2   0 |         | 0 |

We take as a first guess x^(0) = (1, 1, 1)ᵀ; then we get after 10 iterations

x^(10) = (-0.875, 1.75, 0.9375)ᵀ.

Obviously, the rate of convergence is quite slow. However, there are meth-
ods for accelerating the convergence rate. This will be discussed later when
we treat solution techniques for reservoir models.
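Eq. 2.36 is only a few lines of code. The sketch below uses the iteration matrix and constant vector obtained by dividing each rearranged equation by its diagonal coefficient (unknown ordering x3, x2, x1), and it reproduces the ten-iteration result quoted above:

```python
def jacobi_step(B, c, x):
    # one application of Eq. 2.36: x(l+1) = B x(l) + c
    n = len(x)
    return [sum(B[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]

# iteration matrix and constant vector for the rearranged system
# (unknown ordering x3, x2, x1)
B = [[0.0, -1.0, -1.0],
     [-1.0, 0.0, -2.0],
     [0.0, 0.5, 0.0]]
c = [2.0, 3.0, 0.0]

x = [1.0, 1.0, 1.0]  # arbitrary first guess x(0)
for _ in range(10):
    x = jacobi_step(B, c, x)
print(x)  # -> [-0.875, 1.75, 0.9375], slowly approaching (-1, 2, 1)
```

For this B the characteristic polynomial works out to λ³ = 1/2, so ρ(B) = (1/2)^(1/3) ≈ 0.79: less than one, hence convergent, but close enough to one that convergence is slow.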

2.5 Linear Algebra⁵

Linear algebra provides an overall framework of rules within which we
can manipulate vectors, matrices, etc. In the following, we briefly touch
on some basic definitions.

2.5.1 Definitions
Let Vn be a vector space consisting of the set of vectors {xi, i = 1, 2,
. . . , n}. If there exists a set of scalars {αi, i = 1, 2, . . . , n} and we form
the sum

u = α1x1 + α2x2 + . . . + αnxn,

where u is also a vector in Vn, then we say that u is a linear combination
of the xi's.
A set of vectors {xi} is called linearly independent if α1x1 + α2x2 +
. . . + αnxn = 0 implies αi = 0 for every i. If the set is not linearly independent
then it is linearly dependent, i.e., there exist some scalars αi, not all
zero, such that

α1x1 + α2x2 + . . . + αnxn = 0.

If every vector in Vn can be written as a linear combination of the set
{xi}, i.e., if u = α1x1 + . . . + αnxn for every u in Vn, then we say that the set
spans the space. For example, the set of vectors i, j, and k spans 3-space.
Furthermore, if the set {xi} is also a linearly independent set, then we
say that they form a basis of Vn. A set of vectors may span a vector space
and still not be a linearly independent set. However, if the spanning set
is a dependent set, we can always extract an independent set that also
spans the space. In other words, every spanning set of vectors contains a
basis. Once we find a basis of a vector space, we can uniquely represent
any other vector in that space as a linear combination of the basis vectors.
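For three vectors in V3, independence can be tested by checking that the determinant of the matrix with those vectors as rows is nonzero (a zero determinant means some nontrivial combination vanishes). A sketch:

```python
def det3(u, v, w):
    # determinant of the 3 x 3 matrix whose rows are u, v, w
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

def independent(u, v, w):
    return det3(u, v, w) != 0

i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(independent(i, j, k))          # -> True: i, j, k form a basis of V3
print(independent(i, j, (1, 1, 0)))  # -> False: (1, 1, 0) = i + j
```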

2.6 Exercises

1. If b = 2i + 2j - k and a is a scalar number, for what values of a is |ab| = 1?

2. Find a unit vector having the same direction as b where b = cos α i + . . .

3. If |a| = |b| is it necessarily true that a = b?

4. If b is nonzero and α = |a|/|b|, what can you say about |αb|?

5. Find ∇Φ if
(c) Φ = sin x cos y

6. If r and θ are polar coordinates in the xy-plane, determine grad r and grad θ.

7. Find the divergence of the following:
(a) xy(i + j + k)
(b) yz²i - zx²k

8. Why is the following not valid?
(a · b) c = a (b · c)

9. Expand (A + B)² if A and B are matrices conformable for addition and multiplication.

10. What must be true about a, b, c, and d if the matrices

| a  b |       | -1  1 |
| c  d |  and  |  0  . |

are to commute?

11. Denote by Ej an n-vector with 1 in the jth row, all other components being
zero. Interpret each of the products below where A is an arbitrary n × n matrix.
(a) EjᵀA  (b) AEj  (c) EjᵀAEk  (d) EjᵀEk  (e) EjEkᵀ

12. Show that every nonsymmetric matrix can be written as the sum of a symmetric
matrix and a skew-symmetric matrix.

. . . find AB. What is unusual about this result?

16. Find the eigenvalues of the following matrices.

17. Is the given vector an eigenvector of the given matrix A?

18. (a) What is the spectral radius of

    |  0     0.43   0    |
A = | 0.53  -0.01   0.92 |?
    |  0    -0.25   0.834|

(b) Is A singular or nonsingular?

19. Solve the following problems by Gaussian elimination, rounding all calculations
to three decimal places. Note the effects of round-off errors by substituting your
answers back into the equations. In which problem are the round-off errors
worse? Why? What can be done to minimize them?

20. Solve 19(a) using an iterative scheme with x^(0) = [1, 1, 1]ᵀ. What is your answer
after five iterations?

21. Determine if the following are linearly independent or linearly dependent.

22. Can the vector u = [. . .] be written as a linear combination of those in
problem 21?

23. Do the vectors in problem 21 span V3?

24. Let P2 be the vector space consisting of all polynomials of degree ≤ 2 and the
zero polynomial. Let X1 = x² + 2x + 1 and X2 = x² + 2. Does {X1, X2} span
P2?

25. Consider the set S = {x² + 1, x - 1, 2x + 2}. Is it a basis of P2?

26. Show that every vector in Vn can be uniquely represented as a linear combination
of a basis for Vn.

27. Prove that a set of nonzero vectors is linearly dependent if and only if one of
the vectors is a linear combination of the others.

2.7 References
1. . . . : Vector and Tensor Analysis, McGraw-Hill Book Co. Inc., New York City.
2. . . . : Matrix Methods-An Introduction, Academic Press, New York City.
3. Faddeeva, V.N.: The Computational Methods of Linear Algebra, Dover Publications, Inc., New York City (1959).
4. Varga, R.S.: Matrix Iterative Analysis, Prentice-Hall, Inc., Englewood Cliffs (1962).
5. Kolman, B.: Elementary Linear Algebra, Macmillan, Inc., New York City (1970).
