
Table of Contents

1) Linear Systems
2) Eigenvalue Problems and Diagonalization


MATRIX ANALYSIS

Prof. Dr. / Hazem Ali Attia

Department of Engineering Mathematics and Physics,

Faculty of Engineering, Fayoum University

2010 / 2011


Matrices

A matrix is a rectangular array of numbers (or functions) enclosed in brackets. These numbers (functions) are called the entries or elements of the matrix. For example,

\begin{bmatrix} 0.3 & 1 & -5 \\ 0 & -0.2 & 16 \end{bmatrix}  (2 × 3 rectangular matrix),    \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}  (3 × 3 square matrix),

\begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix}  (1 × 3 row vector),    \begin{bmatrix} 4 \\ 1/2 \end{bmatrix}  (2 × 1 column vector),    \begin{bmatrix} e^{-x} & 2x^2 \\ e^{6x} & 4x \end{bmatrix}  (2 × 2 square matrix).

The second matrix has entries with two indices (a_{ij}), where i corresponds to the row and j corresponds to the column.

A Major Application of Matrices

In a system of linear equations, briefly called a linear system, such as

4x_1 + 6x_2 + 9x_3 = 6
6x_1 − 2x_3 = 20
5x_1 − 8x_2 + x_3 = 10

the coefficients of the unknowns x_1, x_2, x_3 are the entries of the coefficient matrix A.

The matrix \tilde{A} is obtained by augmenting A by the right side of the linear system and is called the augmented matrix:

A = \begin{bmatrix} 4 & 6 & 9 \\ 6 & 0 & -2 \\ 5 & -8 & 1 \end{bmatrix},    \tilde{A} = \begin{bmatrix} 4 & 6 & 9 & 6 \\ 6 & 0 & -2 & 20 \\ 5 & -8 & 1 & 10 \end{bmatrix}

\tilde{A} contains all information about the solution of the system.


Notations: We denote matrices by capital letters A, B, C, ... or by writing the general entry in brackets, thus A = [a_{jk}]. By an m × n matrix we mean a matrix with m rows and n columns; m × n is called the size of the matrix. If m = n we have a square matrix, and its diagonal containing the entries a_{11}, a_{22}, ..., a_{nn} is called the main diagonal. A matrix that is not square is called rectangular.

A vector is a matrix with only one row or one column; its entries are called the components of the vector. We denote vectors by lowercase letters a, b, ... or by a = [a_j]. A general row vector is a = [a_1  a_2  ...  a_n], and

b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}

is a general column vector.

Definition: Equality of Matrices

Two matrices A = [a_{jk}] and B = [b_{jk}] are equal, written A = B, if and only if they have the same size and the corresponding entries are equal.

Example:

A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}  and  B = \begin{bmatrix} 4 & 0 \\ 3 & -1 \end{bmatrix};  then A = B means a_{11} = 4, a_{12} = 0, a_{21} = 3, a_{22} = -1.

The following matrices are all different:

\begin{bmatrix} 1 & 3 \\ 4 & 2 \end{bmatrix},  \begin{bmatrix} 4 & 2 \\ 3 & 1 \end{bmatrix},  \begin{bmatrix} 1 & 3 & 0 \\ 4 & 2 & 0 \end{bmatrix},  \begin{bmatrix} 0 & 1 & 3 \\ 0 & 4 & 2 \end{bmatrix}

Definition: Addition of Matrices


The sum of two matrices A = [a_{jk}] and B = [b_{jk}] of the same size is written A + B and has the entries a_{jk} + b_{jk}, obtained by adding the corresponding entries of A and B. Matrices of different sizes cannot be added.

Example: For the matrices:

A = \begin{bmatrix} -4 & 6 & 3 \\ 0 & 1 & 2 \end{bmatrix},   B = \begin{bmatrix} 5 & -1 & 0 \\ 3 & 1 & 0 \end{bmatrix},   A + B = \begin{bmatrix} 1 & 5 & 3 \\ 3 & 2 & 2 \end{bmatrix}

For the vectors a = [5  7  2] and b = [-6  2  0], the sum is a + b = [-1  9  2].

Definition: Scalar Multiplication

The product of any m × n matrix A = [a_{jk}] and any scalar c (number c) is written cA and is the m × n matrix cA = [c a_{jk}] obtained by multiplying each entry of A by c.

Here (-1)A is written simply -A and is called the negative of A. Also, (-k)A = -kA, and A + (-B) = A - B is called the difference of A and B (which must have the same size).

Example:

A = \begin{bmatrix} 2.7 & -1.8 \\ 0 & 0.9 \\ 9.0 & -4.5 \end{bmatrix},   -A = \begin{bmatrix} -2.7 & 1.8 \\ 0 & -0.9 \\ -9.0 & 4.5 \end{bmatrix},   0A = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}  (zero matrix)

Rules for Matrix Addition and Scalar Multiplication

Similar to numbers, we have the following rules for addition of matrices of the same size m × n:

(a) A + B = B + A,
(b) (A + B) + C = A + (B + C)   (written A + B + C),
(c) A + 0 = A,
(d) A + (-A) = 0

(0 denotes the zero matrix).


By (a) and (b) matrix addition is commutative and associative. For scalar multiplication:

(a) c(A + B) = cA + cB,
(b) (c + k)A = cA + kA,
(c) c(kA) = (ck)A,
(d) 1A = A.

Definition: Matrix Multiplication

The product C = AB (in this order) of an m × n matrix A = [a_{jk}] times an r × p matrix B = [b_{jk}] is defined only if r = n, and is then the m × p matrix C = [c_{jk}] with entries

c_{jk} = \sum_{l=1}^{n} a_{jl} b_{lk} = a_{j1} b_{1k} + a_{j2} b_{2k} + \cdots + a_{jn} b_{nk},    j = 1, ..., m,  k = 1, ..., p.        (1)

The condition r = n means that the number of columns of A must equal the number of rows of B:

A (m × n)  times  B (n × p)  gives  C (m × p).

The entry c_{jk} is obtained by multiplying each entry in the jth row of A by the corresponding entry in the kth column of B and then adding these n products. Briefly, this is a "multiplication of rows into columns". Equation (1) can then be written compactly as

c_{jk} = a_j b_k,    j = 1, ..., m,  k = 1, ..., p,

where a_j is the jth row of A and b_k is the kth column of B.
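As an added illustration (not part of the original notes), formula (1) translates directly into three nested loops, "rows into columns"; a minimal Python sketch, assuming the sizes are conformable:

def matmul(A, B):
    # C = AB with c_jk = sum over l of a_jl * b_lk ("rows into columns")
    m, n = len(A), len(A[0])          # A is m x n, stored as a list of rows
    p = len(B[0])                     # B is assumed to be n x p
    C = [[0] * p for _ in range(m)]
    for j in range(m):                # jth row of A
        for k in range(p):            # kth column of B
            C[j][k] = sum(A[j][l] * B[l][k] for l in range(n))
    return C

print(matmul([[4, 2], [1, 8]], [[3], [5]]))   # [[22], [43]], as in the example below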

Schematically, for m = 4, n = 3, p = 2:

\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ a_{41} & a_{42} & a_{43} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \end{bmatrix} = \begin{bmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \\ c_{31} & c_{32} \\ c_{41} & c_{42} \end{bmatrix}        (m = 4, n = 3, p = 2)

Example:


AB = \begin{bmatrix} 3 & 5 & -1 \\ 4 & 0 & 2 \\ -6 & -3 & 2 \end{bmatrix} \begin{bmatrix} 2 & -2 & 3 & 1 \\ 5 & 0 & 7 & 8 \\ 9 & -4 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 22 & -2 & 43 & 42 \\ 26 & -16 & 14 & 6 \\ -9 & 4 & -37 & -28 \end{bmatrix}

The product BA is not defined.
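An added NumPy check of this example (illustrative only; the matrices are those given above). Attempting BA raises an error because a 3 × 4 matrix cannot be multiplied by a 3 × 3 matrix:

import numpy as np

A = np.array([[3, 5, -1],
              [4, 0, 2],
              [-6, -3, 2]])
B = np.array([[2, -2, 3, 1],
              [5, 0, 7, 8],
              [9, -4, 1, 1]])

print(A @ B)                  # the 3 x 4 product AB computed above
try:
    B @ A                     # BA: (3 x 4)(3 x 3) is not defined
except ValueError as err:
    print("BA is not defined:", err)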

Example:

\begin{bmatrix} 4 & 2 \\ 1 & 8 \end{bmatrix} \begin{bmatrix} 3 \\ 5 \end{bmatrix} = \begin{bmatrix} 22 \\ 43 \end{bmatrix},   whereas   \begin{bmatrix} 3 \\ 5 \end{bmatrix} \begin{bmatrix} 4 & 2 \\ 1 & 8 \end{bmatrix}   is undefined.

Example:

\begin{bmatrix} 3 & 6 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 4 \end{bmatrix} = \begin{bmatrix} 19 \end{bmatrix},    \begin{bmatrix} 1 \\ 2 \\ 4 \end{bmatrix} \begin{bmatrix} 3 & 6 & 1 \end{bmatrix} = \begin{bmatrix} 3 & 6 & 1 \\ 6 & 12 & 2 \\ 12 & 24 & 4 \end{bmatrix}

Caution! Matrix multiplication is not commutative in general: AB ≠ BA, and this holds even for square matrices. For example,

\begin{bmatrix} 1 & 1 \\ 100 & 100 \end{bmatrix} \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}   but   \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 100 & 100 \end{bmatrix} = \begin{bmatrix} 99 & 99 \\ -99 & -99 \end{bmatrix},

so AB ≠ BA. It is interesting that this also shows that AB = 0 does not necessarily imply BA = 0, A = 0, or B = 0.
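A short NumPy sketch (added for illustration, using the two matrices of this example) confirming both observations:

import numpy as np

A = np.array([[1, 1], [100, 100]])
B = np.array([[-1, 1], [1, -1]])

print(A @ B)                           # the zero matrix
print(B @ A)                           # [[ 99  99] [-99 -99]]
print(np.array_equal(A @ B, B @ A))    # False: AB != BA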

Rules for Matrix Multiplication: (assuming that the multiplication on the left is defined)

(a) (kA)B = k(AB) = A(kB)   (written kAB or AkB),
(b) A(BC) = (AB)C   (written ABC),
(c) C(A + B) = CA + CB,
(d) (A + B)C = AC + BC,

where (b) is called the associative law and (c), (d) are called the distributive laws.


Transposition: The transpose of an m × n matrix A = [a_{jk}] is the n × m matrix A^T = [a_{kj}] that has the first row of A as its first column, the second row of A as its second column, and so on. Thus

A^T = [a_{kj}] = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix}

Transposition converts row vectors to column vectors and conversely.

Example:

A = \begin{bmatrix} 5 & -8 & 1 \\ 4 & 0 & 0 \end{bmatrix},   A^T = \begin{bmatrix} 5 & 4 \\ -8 & 0 \\ 1 & 0 \end{bmatrix},   \begin{bmatrix} 6 & 2 & 3 \end{bmatrix}^T = \begin{bmatrix} 6 \\ 2 \\ 3 \end{bmatrix},   \begin{bmatrix} 3 & 8 \\ 0 & -1 \end{bmatrix}^T = \begin{bmatrix} 3 & 0 \\ 8 & -1 \end{bmatrix}

Note that for square matrices, the transpose is obtained by interchanging entries that are symmetrically positioned with respect to the main diagonal, e.g. a_{12} and a_{21}, and so on.

Rules for Transposition

(a) (A^T)^T = A,
(b) (A + B)^T = A^T + B^T,
(c) (cA)^T = cA^T,
(d) (AB)^T = B^T A^T.
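An added NumPy check of rule (d); the two small matrices are arbitrary choices made only for this illustration:

import numpy as np

A = np.array([[1, 2], [3, 4]])     # arbitrary example matrices
B = np.array([[0, 1], [5, -2]])

print(np.array_equal((A @ B).T, B.T @ A.T))   # True: (AB)^T = B^T A^T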

Special Matrices: Symmetric and skew symmetric matrices

They are square matrices whose transpose equals the matrix itself or minus the matrix, respectively:

A^T = A   (symmetric, a_{kj} = a_{jk}),      A^T = -A   (skew-symmetric, a_{kj} = -a_{jk}, hence a_{jj} = 0).

Example:

A = \begin{bmatrix} 20 & 120 & 200 \\ 120 & 10 & 150 \\ 200 & 150 & 30 \end{bmatrix}  is symmetric,    B = \begin{bmatrix} 0 & 1 & -3 \\ -1 & 0 & -2 \\ 3 & 2 & 0 \end{bmatrix}  is skew-symmetric.
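As an added numerical sketch, the defining properties can be checked directly for the two matrices of the example:

import numpy as np

A = np.array([[20, 120, 200],
              [120, 10, 150],
              [200, 150, 30]])
B = np.array([[0, 1, -3],
              [-1, 0, -2],
              [3, 2, 0]])

print(np.array_equal(A.T, A))      # True: A is symmetric
print(np.array_equal(B.T, -B))     # True: B is skew-symmetric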

Triangular Matrices: Upper triangular matrices are square matrices that can have non-zero entries only on and above the main diagonal, whereas any entry below the diagonal is zero. Similarly, lower triangular matrices can have non-zero entries only on and below the main diagonal. Any entry on the main diagonal of a triangular matrix may be zero or not.

Example:

Upper triangular:  \begin{bmatrix} 1 & 3 \\ 0 & 2 \end{bmatrix},  \begin{bmatrix} 1 & 4 & 2 \\ 0 & 3 & 2 \\ 0 & 0 & 6 \end{bmatrix};    lower triangular:  \begin{bmatrix} 2 & 0 & 0 \\ 8 & -1 & 0 \\ 7 & 6 & 8 \end{bmatrix},  \begin{bmatrix} 3 & 0 & 0 & 0 \\ 9 & -3 & 0 & 0 \\ -1 & 0 & 2 & 0 \\ 1 & 9 & 3 & 6 \end{bmatrix}

Diagonal Matrices: These are square matrices that can have non-zero entries only on the main diagonal. If all the diagonal entries of a diagonal matrix S are equal, say to c, we call S a scalar matrix, because multiplying any square matrix A of the same size by S has the same effect as multiplying A by the scalar c:

AS = SA = cA.

If a scalar matrix has all of its diagonal entries equal to 1, it is called a unit matrix (or identity matrix) and is denoted by I:

AI = IA = A.

Example:

D = \begin{bmatrix} 2 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & 0 \end{bmatrix},   S = \begin{bmatrix} c & 0 & 0 \\ 0 & c & 0 \\ 0 & 0 & c \end{bmatrix},   I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
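An added NumPy sketch of these identities; A and the value c = 5 are arbitrary choices made only for the illustration:

import numpy as np

c = 5
A = np.array([[2, -1, 0],
              [4, 3, 7],
              [1, 0, 6]])            # arbitrary 3 x 3 matrix
S = c * np.eye(3)                     # scalar matrix
I = np.eye(3)                         # unit (identity) matrix

print(np.allclose(A @ S, S @ A))      # True
print(np.allclose(A @ S, c * A))      # True: AS = SA = cA
print(np.allclose(A @ I, A))          # True: AI = IA = A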

Linear System: A linear system of m equations in n unknowns x_1, ..., x_n is a set of equations of the form

a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2
  \vdots
a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n = b_m        (1)

The system is called linear because each unknown x_j appears in the first power only. The numbers a_{11}, ..., a_{mn} are given and are called the coefficients of the system; b_1, ..., b_m are also given numbers. If all the b's are zero, the system is called homogeneous; otherwise it is called non-homogeneous. A solution of the system is a set of n numbers x_1, ..., x_n that satisfies all m equations, and a solution vector of (1) is a vector X whose components form a solution of (1). If the system (1) is homogeneous, it has at least the trivial solution x_1 = 0, ..., x_n = 0.

Matrix form of the linear system (1): From the definition of matrix multiplication, (1) can be written as

AX = b,

where A is m × n, X is n × 1, and b is m × 1:

A = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ a_{21} & \cdots & a_{2n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix},   X = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix},   b = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}.
A is the coefficient matrix, X is the solution vector, and b is the right side.

The augmented matrix is defined as

\tilde{A} = \begin{bmatrix} a_{11} & \cdots & a_{1n} & b_1 \\ a_{21} & \cdots & a_{2n} & b_2 \\ \vdots & & \vdots & \vdots \\ a_{m1} & \cdots & a_{mn} & b_m \end{bmatrix}.

\tilde{A} determines the system (1) completely because it contains all the given numbers appearing in (1).
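As an added illustration (not part of the original notes), the first linear system of this chapter, 4x_1 + 6x_2 + 9x_3 = 6, 6x_1 - 2x_3 = 20, 5x_1 - 8x_2 + x_3 = 10, can be set up in exactly this matrix form with NumPy and solved for X:

import numpy as np

A = np.array([[4, 6, 9],
              [6, 0, -2],
              [5, -8, 1]], dtype=float)   # coefficient matrix
b = np.array([6, 20, 10], dtype=float)    # right side

A_aug = np.column_stack((A, b))           # augmented matrix [A | b]
X = np.linalg.solve(A, b)                 # solution vector of AX = b

print(A_aug)
print(X)
print(np.allclose(A @ X, b))              # True: X satisfies the system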

Example: Geometric Interpretation


If m = n = 2, we have two equations in two unknowns x_1 and x_2:

a_{11} x_1 + a_{12} x_2 = b_1
a_{21} x_1 + a_{22} x_2 = b_2

If x_1 and x_2 are coordinates in the plane, each of the two equations represents a straight line, and (x_1, x_2) is a solution if and only if the point with coordinates x_1, x_2 lies on both lines. Hence there are three possibilities:

(a) Exactly one solution, if the lines intersect.
(b) Infinitely many solutions, if the lines coincide.
(c) No solution, if the lines are parallel.

[Figure: three sketches in the x_1 x_2-plane.
(a) x_1 + x_2 = 1, 2x_1 - x_2 = 0: the lines intersect in the single point P = (1/3, 2/3).
(b) x_1 + x_2 = 1, 2x_1 + 2x_2 = 2: the lines coincide.
(c) x_1 + x_2 = 1, x_1 + x_2 = 0: the lines are parallel.]

Gauss Elimination and Back Substitution

It is a method of great practical importance and is reasonable with respect to computing time and storage demand. If a system is in “triangular form“, say:

2x_1 + 5x_2 = 2
     -13x_2 = 26

we can solve it by "back substitution": solve the last equation for x_2 = 26/(-13) = -2, and then work backward, substituting x_2 = -2 into the first equation and solving it for x_1 = (1/2)(2 - 5x_2) = (1/2)(2 - 5(-2)) = 6.

This gives the idea of first reducing a general system to triangular form. Now suppose we are given the system

2x_1 + 5x_2 = 2
4x_1 - 3x_2 = 30

\begin{bmatrix} 2 & 5 & 2 \\ 4 & -3 & 30 \end{bmatrix}   R_2 - 2R_1 \to R_2:   \begin{bmatrix} 2 & 5 & 2 \\ 0 & -13 & 26 \end{bmatrix}

This is the triangular form, and by back substitution we get x_2 = -2, x_1 = 6.
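The two stages, forward elimination to triangular form followed by back substitution, can be put into a small routine. This is an added, minimal Python sketch for square systems, assuming no pivot becomes zero; it is applied to the 2 × 2 system above:

import numpy as np

def gauss_solve(A, b):
    # forward elimination to triangular form, then back substitution
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for j in range(n - 1):                  # create zeros below the diagonal in column j
        for i in range(j + 1, n):
            m = A[i, j] / A[j, j]           # assumes pivot A[j, j] != 0
            A[i, j:] -= m * A[j, j:]
            b[i] -= m * b[j]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # back substitution, last equation first
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

print(gauss_solve(np.array([[2, 5], [4, -3]]), np.array([2, 30])))   # [ 6. -2.]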

Gauss elimination (GE) can be done by merely considering the matrices.

Example

Solve the linear system:

x_1 - x_2 + x_3 = 0        (Node P)
-x_1 + x_2 - x_3 = 0       (Node Q)
10x_2 + 25x_3 = 90         (Right loop)
20x_1 + 10x_2 = 80         (Left loop)

[Figure: electrical network with nodes P and Q, resistors of 20 Ω, 10 Ω and 15 Ω, voltage sources of 80 V and 90 V, and branch currents i_1 = x_1, i_2 = x_2, i_3 = x_3.]

Elimination of x_1:

\begin{bmatrix} 1 & -1 & 1 & 0 \\ -1 & 1 & -1 & 0 \\ 0 & 10 & 25 & 90 \\ 20 & 10 & 0 & 80 \end{bmatrix}   R_2 + R_1 \to R_2,  R_4 - 20R_1 \to R_4:   \begin{bmatrix} 1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 10 & 25 & 90 \\ 0 & 30 & -20 & 80 \end{bmatrix}

Changing the order of the rows (moving the zero row to the end):

\begin{bmatrix} 1 & -1 & 1 & 0 \\ 0 & 10 & 25 & 90 \\ 0 & 30 & -20 & 80 \\ 0 & 0 & 0 & 0 \end{bmatrix}

Elimination of x_2:

R_3 - 3R_2 \to R_3:   \begin{bmatrix} 1 & -1 & 1 & 0 \\ 0 & 10 & 25 & 90 \\ 0 & 0 & -95 & -190 \\ 0 & 0 & 0 & 0 \end{bmatrix}
Using back substitution we get

-95x_3 = -190  ⟹  x_3 = 2
10x_2 + 25x_3 = 90  ⟹  x_2 = 4
x_1 - x_2 + x_3 = 0  ⟹  x_1 = 2

so exactly one solution exists.
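An added NumPy check that the solution found above satisfies all four original equations (the system has more equations than unknowns, so we simply substitute back):

import numpy as np

A = np.array([[1, -1, 1],
              [-1, 1, -1],
              [0, 10, 25],
              [20, 10, 0]], dtype=float)
b = np.array([0, 0, 90, 80], dtype=float)
x = np.array([2, 4, 2], dtype=float)       # solution from Gauss elimination

print(np.allclose(A @ x, b))               # True: all four equations are satisfied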

Elementary Row Operations for Matrices (Equations)

- Interchange of two rows (equations).

- Addition of a constant multiple of one row (equation) to another row (equation).

- Multiplication of a row (equation) by a non-zero constant.

Caution! We are dealing with row operations. No column operations on the augmented matrix are permitted because they alter the solution set.

Theorem: Row equivalent linear systems have the same set of solutions. Therefore, systems having the same solution sets are often called equivalent systems.

Example: GE if infinitely many solutions exist

Solve the system

3.0x_1 + 2.0x_2 + 2.0x_3 - 5.0x_4 = 8.0
0.6x_1 + 1.5x_2 + 1.5x_3 - 5.4x_4 = 2.7        (3)
1.2x_1 - 0.3x_2 - 0.3x_3 + 2.4x_4 = 2.1

\begin{bmatrix} 3.0 & 2.0 & 2.0 & -5.0 & 8.0 \\ 0.6 & 1.5 & 1.5 & -5.4 & 2.7 \\ 1.2 & -0.3 & -0.3 & 2.4 & 2.1 \end{bmatrix}   R_2 - 0.2R_1 \to R_2,  R_3 - 0.4R_1 \to R_3:   \begin{bmatrix} 3.0 & 2.0 & 2.0 & -5.0 & 8.0 \\ 0 & 1.1 & 1.1 & -4.4 & 1.1 \\ 0 & -1.1 & -1.1 & 4.4 & -1.1 \end{bmatrix}

R_3 + R_2 \to R_3:   \begin{bmatrix} 3.0 & 2.0 & 2.0 & -5.0 & 8.0 \\ 0 & 1.1 & 1.1 & -4.4 & 1.1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}

The remaining equations are

3x_1 + 2x_2 + 2x_3 - 5x_4 = 8,        1.1x_2 + 1.1x_3 - 4.4x_4 = 1.1.

Back substitution: from the last equation, x_2 = 1 - x_3 + 4x_4, and from the first equation, using this value of x_2, x_1 = 2 - x_4. Since x_3 and x_4 remain arbitrary, we have infinitely many solutions. x_3 and x_4 are called free variables and may be denoted by t_1 and t_2; then

x_1 = 2 - t_2,    x_2 = 1 - t_1 + 4t_2,    x_3 = t_1,    x_4 = t_2.
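An added NumPy sketch confirming that this parametric family solves the system for arbitrarily chosen values of the free variables t_1 and t_2:

import numpy as np

A = np.array([[3.0, 2.0, 2.0, -5.0],
              [0.6, 1.5, 1.5, -5.4],
              [1.2, -0.3, -0.3, 2.4]])
b = np.array([8.0, 2.7, 2.1])

for t1, t2 in [(0.0, 0.0), (1.0, -2.0), (3.5, 0.7)]:    # arbitrary free-variable values
    x = np.array([2 - t2, 1 - t1 + 4 * t2, t1, t2])
    print(np.allclose(A @ x, b))                         # True for every choice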

Example: GE if no solution exists

GE will show this case by producing a contradiction

Solve the system

3x_1 + 2x_2 + x_3 = 3
2x_1 + x_2 + x_3 = 0
6x_1 + 2x_2 + 4x_3 = 6

\begin{bmatrix} 3 & 2 & 1 & 3 \\ 2 & 1 & 1 & 0 \\ 6 & 2 & 4 & 6 \end{bmatrix}   R_2 - (2/3)R_1 \to R_2,  R_3 - 2R_1 \to R_3:   \begin{bmatrix} 3 & 2 & 1 & 3 \\ 0 & -1/3 & 1/3 & -2 \\ 0 & -2 & 2 & 0 \end{bmatrix}

R_3 - 6R_2 \to R_3:   \begin{bmatrix} 3 & 2 & 1 & 3 \\ 0 & -1/3 & 1/3 & -2 \\ 0 & 0 & 0 & 12 \end{bmatrix}

The remaining equations are

3x_1 + 2x_2 + x_3 = 3,    -(1/3)x_2 + (1/3)x_3 = -2,    0 = 12.

The false statement 0 = 12 (an inconsistent system) shows that the system has no solution.

Row Echelon Form and Information from it:

At the end of GE, the form of the coefficient matrix and of the system itself is called the row echelon form. In it, rows of zeros, if present, are the last rows, and in each non-zero row the leftmost non-zero entry is farther to the right than in the previous row. Note that we do not require the leftmost non-zero entries to be 1, since this would have no theoretical or numerical advantage (if those entries are 1, the form is called the reduced echelon form).
a

11

0

0

0

0

a

c

12

a

1 n

22 c

2 n

0

0

0

k

rr

0

0

k

rn

0

0


m

~

b

1

~

b

2

2

~

b

~

r

b

r

1

~

b

Here, r m and

and rectangle are zeros. We have three possible cases;

a

11

0, c

22

0,

,

k

rr

0

and all the entries in the triangle

13

(a) Exactly one solution: if r = n and \tilde{b}_{r+1}, ..., \tilde{b}_m, if present, are zero. To get the solution, solve the nth equation, k_{nn} x_n = \tilde{b}_n, for x_n, then the (n-1)th equation for x_{n-1}, and so on.

(b) Infinitely many solutions: if r < n and \tilde{b}_{r+1}, ..., \tilde{b}_m, if present, are zero. To obtain any of these solutions, choose values of x_{r+1}, ..., x_n arbitrarily, then solve the rth equation for x_r, then the (r-1)th equation for x_{r-1}, and so on.