
“"Great thoughts speak only to the thoughtful

mind, but great actions speak to all mankind."


…Emily P. Bissell
CHAPTER

1 Linear Algebra

Learning Objectives
After reading this chapter, you will know:
1. Matrix Algebra, Types of Matrices, Determinant
2. Cramer’s rule, Rank of Matrix
3. Eigenvalues and Eigenvectors

Matrix

Definition
A system of "mn" numbers arranged in m rows and n columns. Conventionally, a matrix is
represented by a single capital letter.
Thus, A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} \\ \vdots & & & a_{ij} & & \vdots \\ a_{m1} & a_{m2} & \cdots & \cdots & \cdots & a_{mn} \end{bmatrix}
A = (a_{ij})_{m×n}, where a_{ij} is the element in the i-th row and j-th column.
Related terms introduced below: principal diagonal, trace, transpose.

Types of Matrices
1. Row and Column Matrices
 Row Matrix → [ 2 7 8 9] → A matrix having single row is row matrix or row vector
5
10
 Column Matrix → [ ] → Single column (or column vector)
13
1
2. Square Matrix
 Number of rows = Number of columns
 Order of Square matrix → No. of rows or columns
Example: A = \begin{bmatrix} 1 & 2 & 3 \\ 5 & 4 & 6 \\ 0 & 7 & 5 \end{bmatrix}; the order of this matrix is 3.
 Principal Diagonal (or Main Diagonal or Leading Diagonal)
The diagonal of a square matrix running from the top left to the bottom right is called the
principal diagonal.


 Trace of the Matrix


The sum of the diagonal elements of a square matrix.
- tr (λ A) = λ tr(A) [ λ is scalar]
- tr ( A+B) = tr (A) + tr (B)
- tr (AB) = tr (BA)

3. Rectangular Matrix: Number of rows ≠ Number of columns.

4. Diagonal Matrix: A square matrix in which all the elements except those on the leading diagonal are
zero.
Example: \begin{bmatrix} -4 & 0 & 0 \\ 0 & 6 & 0 \\ 0 & 0 & 8 \end{bmatrix}

5. Scalar Matrix: A diagonal matrix in which all the leading diagonal elements are the same.
Example: \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}

6. Unit Matrix (or Identity Matrix): A Diagonal matrix in which all the leading diagonal elements
are ‘1’.
Example: I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

7. Null Matrix (or Zero Matrix): A matrix is said to be a Null Matrix if all its elements are zero.
Example: \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}

8. Symmetric and Skew Symmetric Matrices


 For symmetric, aij = aji for all i and j. In other words AT = A
Note: Diagonal elements can be anything.
 Skew symmetric, when aij = −aji In other words AT = − A
Note: All the diagonal elements must be zero.
Symmetric matrix (A^T = A): \begin{bmatrix} a & h & g \\ h & b & f \\ g & f & c \end{bmatrix}
Skew symmetric matrix (A^T = -A): \begin{bmatrix} 0 & -h & g \\ h & 0 & -f \\ -g & f & 0 \end{bmatrix}
9. Triangular matrix
 A square matrix is said to be “upper triangular” if all the elements below its principal
diagonal are zeros.
 A square matrix is said to be “lower triangular” if all the elements above its principal
diagonal are zeros.
Upper Triangular Matrix: \begin{bmatrix} a & h & g \\ 0 & b & f \\ 0 & 0 & c \end{bmatrix}
Lower Triangular Matrix: \begin{bmatrix} a & 0 & 0 \\ g & b & 0 \\ f & h & c \end{bmatrix}

10. Orthogonal Matrix: If A·A^T = I, then A is said to be an orthogonal matrix.

11. Singular Matrix: If |A| = 0, then A is called a singular matrix.

12. Conjugate of a Matrix: The matrix obtained by replacing each element of A by its complex conjugate, denoted A̅. The transpose of the conjugate, (A̅)^T, is called the conjugate transpose (denoted A^θ).

13. Unitary Matrix: A complex matrix A is called unitary if A^{-1} = (A̅)^T, i.e., its inverse equals its conjugate transpose.

Example: Show that the following matrix is unitary
A = \frac{1}{2} \begin{bmatrix} 1+i & 1-i \\ 1-i & 1+i \end{bmatrix}
Solution: Since
A (A̅)^T = \frac{1}{2} \begin{bmatrix} 1+i & 1-i \\ 1-i & 1+i \end{bmatrix} × \frac{1}{2} \begin{bmatrix} 1-i & 1+i \\ 1+i & 1-i \end{bmatrix} = \frac{1}{4} \begin{bmatrix} 4 & 0 \\ 0 & 4 \end{bmatrix} = I
we conclude (A̅)^T = A^{-1}. Therefore, A is a unitary matrix.

14. Hermitian Matrix: A square matrix with complex entries which is equal to its own conjugate
transpose.
A^θ = A, i.e., a_{ij} = a̅_{ji}
For example: \begin{bmatrix} 5 & 1-i \\ 1+i & 5 \end{bmatrix}
Note: In a Hermitian matrix, the diagonal elements are always real.

15. Skew Hermitian Matrix: A square matrix with complex entries which is equal to the negative of its
conjugate transpose.
A^θ = -A, i.e., a_{ij} = -a̅_{ji}
For example: \begin{bmatrix} 5i & 1-i \\ -1-i & 0 \end{bmatrix}
Note: In a Skew-Hermitian matrix, the diagonal elements are either zero or purely imaginary.

16. Idempotent Matrix: If A2 = A, then the matrix A is called idempotent matrix.

17. Involutory Matrix: A2 = I

18. Nilpotent Matrix : If Ak = 0 (null matrix), then A is called Nilpotent matrix (where k is a +ve
integer).

19. Periodic Matrix : If Ak+1 = A (where, k is a +ve integer), then A is called Periodic matrix.
If k =1 , then it is an idempotent matrix.

20. Proper Matrix : If |A| = 1, matrix A is called Proper Matrix.

Equality of Matrices
Two matrices are equal if:
(a) They are of the same order
(b) Each corresponding element in both matrices is equal


Addition and Subtraction of Matrices


\begin{bmatrix} a_1 & b_1 \\ c_1 & d_1 \end{bmatrix} ± \begin{bmatrix} a_2 & b_2 \\ c_2 & d_2 \end{bmatrix} = \begin{bmatrix} a_1 ± a_2 & b_1 ± b_2 \\ c_1 ± c_2 & d_1 ± d_2 \end{bmatrix}

Rules
1. Only matrices of the same order can be added.
2. Addition is commutative: A + B = B + A
3. Addition is associative: (A + B) + C = A + (B + C)

Multiplication of Matrices
Condition: Two matrices can be multiplied only when the number of columns of the first matrix equals
the number of rows of the second matrix. Multiplication of an (m × n) matrix and an (n × p) matrix
results in a matrix of dimension (m × p): [(m × n)(n × p) → (m × p)]
Properties of multiplication
1. Let A be m × n and B be p × q; then AB (of size m × q) exists ⇔ n = p
2. BA (of size p × n) exists ⇔ q = m
3. AB ≠ BA (in general)
4. A(BC) = (AB)C
5. AB = 0 need not imply either A = 0 or B = 0
Multiplication of Matrix by a Scalar: Every element of the matrix gets multiplied by that scalar.
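As a quick numerical illustration of the dimension rule and of the fact that AB ≠ BA in general, here is a minimal NumPy sketch (the matrices are arbitrary examples chosen for illustration):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2 x 3
B = np.array([[1, 0],
              [0, 1],
              [2, 2]])           # 3 x 2

# A (2x3) times B (3x2) exists because columns of A = rows of B; result is 2x2.
print((A @ B).shape)             # (2, 2)
# B (3x2) times A (2x3) also exists here, but it is 3x3 -- a different shape,
# so AB != BA even when both products are defined.
print((B @ A).shape)             # (3, 3)
```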

Determinant
An nth order determinant is an expression associated with n × n square matrix.
If A = [aij ], Element aij with ith row, jth column.
For n = 2, D = det A = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11} a_{22} - a_{12} a_{21}

Determinant of “Order n”
D = |A| = det A = \begin{vmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & \cdots & a_{2n} \\ \vdots & & & & \vdots \\ a_{n1} & a_{n2} & \cdots & \cdots & a_{nn} \end{vmatrix}

Minors & Cofactors


 The minor of an element is the determinant obtained by deleting the row and the column which
intersect that element.
A = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix}
Minor of a_1 = \begin{vmatrix} b_2 & b_3 \\ c_2 & c_3 \end{vmatrix}

 Cofactor is the minor with “proper sign”. The sign is given by (−1)i+j (where the element
belongs to ith row, jth column).


A_2 = Cofactor of a_2 = (-1)^{1+2} × \begin{vmatrix} b_1 & b_3 \\ c_1 & c_3 \end{vmatrix}
The cofactor matrix can be formed as \begin{bmatrix} A_1 & A_2 & A_3 \\ B_1 & B_2 & B_3 \\ C_1 & C_2 & C_3 \end{bmatrix}
In General
 ai Aj + bi Bj + ci Cj = ∆ if i = j
 ai Aj + bi Bj + ci Cj = 0 if i ≠ j
{ai , bi , ci are the matrix elements and Ai , Bi , Ci are corresponding cofactors. }
Note: Singular matrix: If |A| = 0, then A is called a singular matrix.
Non-singular matrix: If |A| ≠ 0, then A is called a non-singular matrix.

Properties of Determinants
1. A determinant remains unaltered by changing its rows into columns and columns into rows.
\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}, i.e., det A = det A^T
2. If two parallel lines of a determinant are interchanged, the determinant retains its numerical
value but changes in sign. (In general, a row or column is referred to as a line.)
\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = -\begin{vmatrix} a_1 & c_1 & b_1 \\ a_2 & c_2 & b_2 \\ a_3 & c_3 & b_3 \end{vmatrix} = \begin{vmatrix} c_1 & a_1 & b_1 \\ c_2 & a_2 & b_2 \\ c_3 & a_3 & b_3 \end{vmatrix}
3. Determinant vanishes if two parallel lines are identical.
4. If each element of a line is multiplied by the same factor, the whole determinant is multiplied
by that factor. [Note the difference with a matrix.]
\begin{vmatrix} a_1 & P b_1 & c_1 \\ a_2 & P b_2 & c_2 \\ a_3 & P b_3 & c_3 \end{vmatrix} = P \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}
5. If each element of a line consists of m terms, then the determinant can be expressed as the sum of
m determinants.
\begin{vmatrix} a_1 & b_1 & c_1 + d_1 - e_1 \\ a_2 & b_2 & c_2 + d_2 - e_2 \\ a_3 & b_3 & c_3 + d_3 - e_3 \end{vmatrix} = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} + \begin{vmatrix} a_1 & b_1 & d_1 \\ a_2 & b_2 & d_2 \\ a_3 & b_3 & d_3 \end{vmatrix} - \begin{vmatrix} a_1 & b_1 & e_1 \\ a_2 & b_2 & e_2 \\ a_3 & b_3 & e_3 \end{vmatrix}
6. If to each element of a line we add equi-multiples of the corresponding elements of one or more
parallel lines, the determinant is unaffected.
Example: The operation R_2 → R_2 + pR_1 + qR_3 leaves the determinant unaffected.
7. Determinant of an upper triangular/ lower triangular/diagonal/scalar matrix is equal to the
product of the leading diagonal elements of the matrix.
8. If A & B are square matrix of the same order, then |AB|=|BA|=|A||B|.
9. If A is a non-singular matrix, then |A^{-1}| = 1/|A|.
10. Determinant of a skew symmetric matrix (i.e., AT =−A) of odd order is zero.
11. If A is a unitary matrix or orthogonal matrix (i.e., AT = A−1 ) then |A|= ±1.
12. If A is a square matrix of order n then |k A| = k n |A|.
13. |In | = 1 ( In is the identity matrix of order n).
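Several of the properties listed above (for instance 1, 8, 9 and 12) can be spot-checked numerically. A minimal NumPy sketch, using arbitrarily chosen non-singular matrices (not taken from the text):

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
B = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [3., 0., 1.]])
n, k = 3, 5.0

print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))            # det A^T = det A
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))             # |AB| = |A||B|
print(np.isclose(np.linalg.det(np.linalg.inv(A)),
                 1.0 / np.linalg.det(A)))                          # |A^-1| = 1/|A|
print(np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A)))   # |kA| = k^n |A|
```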


Multiplication of Determinants
 The product of two determinants of same order is itself a determinant of that order.
 In determinants we multiply row to row (instead of row to column which is done for matrix).

Comparison of Determinants & Matrices


Although they look similar, a determinant and a matrix are entirely different objects, and it is
technically unfair even to compare them. However, purely for the reader's convenience, the following
comparison has been prepared.

Determinant vs. Matrix
1. Determinant: the number of rows and columns is always equal. Matrix: the number of rows and columns need not be equal (square or rectangular).
2. Scalar multiplication: in a determinant, the elements of only one line (one row or one column) are multiplied by the constant. In a matrix, all elements are multiplied by the constant.
3. A determinant can be reduced to a single number. A matrix cannot be reduced to a single number.
4. Interchanging rows and columns has no effect on a determinant. In a matrix, it changes the meaning altogether.
5. Two determinants are multiplied row by row. Two matrices are multiplied row by column.

Transpose of Matrix
Matrix formed by interchanging rows & columns is called the transpose of a matrix and denoted
by AT.
Example: A = \begin{bmatrix} 1 & 2 \\ 5 & 1 \\ 4 & 6 \end{bmatrix};  Transpose of A = Trans(A) = A′ = A^T = \begin{bmatrix} 1 & 5 & 4 \\ 2 & 1 & 6 \end{bmatrix}
Note:
 A = (1/2)(A + A^T) + (1/2)(A - A^T) = symmetric matrix + skew-symmetric matrix.
 If A & B are symmetric, then AB+BA is symmetric and AB−BA is skew symmetric.
 If A is symmetric, then An is symmetric (n=2, 3, 4…….).
 If A is skew-symmetric, then An is symmetric when n is even and skew symmetric when n is
odd.

Adjoint of a Matrix
Adjoint of A is defined as the transposed matrix of the cofactors of A. In other words,
Adj (A) = Trans (cofactor matrix)
The determinant of the square matrix A = \begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{bmatrix} is ∆ = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}
The matrix formed by the cofactors of the elements of A is
\begin{bmatrix} A_1 & B_1 & C_1 \\ A_2 & B_2 & C_2 \\ A_3 & B_3 & C_3 \end{bmatrix} → also called the cofactor matrix


Then Adj(A) = transpose of the cofactor matrix = \begin{bmatrix} A_1 & A_2 & A_3 \\ B_1 & B_2 & B_3 \\ C_1 & C_2 & C_3 \end{bmatrix}

Inverse of a Matrix
 A^{-1} = Adj A / |A|
 |A| must be non-zero (i.e. A must be non-singular).
 Inverse of a matrix, if exists, is always unique.
 If A is a 2 × 2 matrix \begin{bmatrix} a & b \\ c & d \end{bmatrix}, its inverse is \frac{1}{ad-bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}

Drill Problem: Prove (A B)−1 = B −1 A−1


Proof: Consider the product B^{-1} A^{-1}.
Pre-multiplying by AB: (AB)(B^{-1} A^{-1}) = A (B B^{-1}) A^{-1} = A A^{-1} = I
Similarly, post-multiplying by AB: (B^{-1} A^{-1})(AB) = B^{-1} (A^{-1} A) B = B^{-1} B = I
Hence, AB and B^{-1} A^{-1} are inverses of each other.
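The 2 × 2 inverse formula and the reversal rule (AB)^{-1} = B^{-1} A^{-1} can also be checked numerically. A minimal NumPy sketch (the matrices are arbitrary, both non-singular; the helper name `inv2` is illustrative):

```python
import numpy as np

def inv2(M):
    """Inverse of a 2x2 matrix via (1/(ad - bc)) [[d, -b], [-c, a]]."""
    (a, b), (c, d) = M
    det = a * d - b * c          # must be non-zero
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[1., 3.], [-2., 0.]])
B = np.array([[1., -4.], [2., 5.]])

print(np.allclose(inv2(A), np.linalg.inv(A)))                     # formula agrees
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ np.linalg.inv(A)))           # (AB)^-1 = B^-1 A^-1
```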

Important Points
1. IA = AI = A, (Here A is square matrix of the same order as that of I )
2. 0 A = A 0 = 0, (0 is null matrix)
3. If AB = 0, it does not necessarily follow that A or B is a null matrix.
Also, it does not imply that BA = 0.
Example: AB = \begin{bmatrix} 2 & 2 \\ 2 & 2 \end{bmatrix} × \begin{bmatrix} 2 & -2 \\ -2 & 2 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}
4. If the product of two non-zero square matrices A and B is a zero matrix, then A and B are both
singular matrices.
5. If A is a non-singular matrix and AB = 0, then B is a null matrix.
6. AB ≠ BA (in general) → Commutative property is not applicable
7. A(BC) = (A B)C → Associative property holds.
8. A(B+C) = AB+ AC → Distributive property holds.
9. AC = AD , doesn’t imply C = D [Even when A ≠ 0].
10. (A + B)T = AT + B T
11. (AB)T = B T AT
12. (AB)−1 = B −1 A−1
13. A A−1 = A−1 A = I
14. (kA)^T = k A^T (k is a scalar, A is a matrix)
15. (kA)^{-1} = k^{-1} A^{-1} (k is a non-zero scalar, A is a matrix)
16. (A−1 )T = (AT )−1
17. \overline{A^T} = (A̅)^T (conjugate of the transpose of a matrix = transpose of the conjugate of the matrix)
18. If a non-singular matrix A is symmetric, then A^{-1} is also symmetric.
19. If A is an orthogonal matrix, then A^T and A^{-1} are also orthogonal.
20. If A is a square matrix of order n, then
(i) |adj A| = |A|^{n-1}
(ii) |adj (adj A)| = |A|^{(n-1)^2}
(iii) adj (adj A) = |A|^{n-2} A
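The identities in point 20 can be spot-checked numerically using adj A = |A| A^{-1} (valid for non-singular A). A minimal NumPy sketch with an arbitrary non-singular matrix:

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
n = A.shape[0]
detA = np.linalg.det(A)
adjA = detA * np.linalg.inv(A)                   # adj A = |A| A^-1
adj_adjA = np.linalg.det(adjA) * np.linalg.inv(adjA)

print(np.isclose(np.linalg.det(adjA), detA ** (n - 1)))      # |adj A| = |A|^(n-1)
print(np.allclose(adj_adjA, detA ** (n - 2) * A))            # adj(adj A) = |A|^(n-2) A
```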


Example: Demonstrate by example that AB ≠ BA


Solution: Suppose A = \begin{bmatrix} 1 & 3 \\ -2 & 0 \end{bmatrix}, B = \begin{bmatrix} 1 & -4 \\ 2 & 5 \end{bmatrix}
Then AB = \begin{bmatrix} 7 & 11 \\ -2 & 8 \end{bmatrix}, BA = \begin{bmatrix} 9 & 3 \\ -8 & 6 \end{bmatrix}, so AB ≠ BA.

Example: Demonstrate by example that AB = 0 ⇏ A = 0 or B = 0 or BA = 0


Solution: AB = \begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix} \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}
BA = \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix} ≠ 0

Example: Demonstrate that AC = AD ⇏ C = D (even when A ≠ 0)


Solution: AC = \begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 2 & 2 \end{bmatrix} = \begin{bmatrix} 4 & 3 \\ 8 & 6 \end{bmatrix}
AD = \begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix} \begin{bmatrix} 3 & 0 \\ 1 & 3 \end{bmatrix} = \begin{bmatrix} 4 & 3 \\ 8 & 6 \end{bmatrix}
Although AC = AD, C ≠ D.

Example: Write the following matrix A as a sum of symmetric and skew symmetric matrix
A = \begin{bmatrix} 1 & 2 & 4 \\ -2 & 5 & 3 \\ -1 & 6 & 3 \end{bmatrix}
Solution: Symmetric part = \frac{1}{2}(A + A^T) = \frac{1}{2} \left\{ \begin{bmatrix} 1 & 2 & 4 \\ -2 & 5 & 3 \\ -1 & 6 & 3 \end{bmatrix} + \begin{bmatrix} 1 & -2 & -1 \\ 2 & 5 & 6 \\ 4 & 3 & 3 \end{bmatrix} \right\}
= \frac{1}{2} \begin{bmatrix} 2 & 0 & 3 \\ 0 & 10 & 9 \\ 3 & 9 & 6 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 3/2 \\ 0 & 5 & 9/2 \\ 3/2 & 9/2 & 3 \end{bmatrix}
Skew-symmetric part = \frac{1}{2}(A - A^T) = \begin{bmatrix} 0 & 2 & 5/2 \\ -2 & 0 & -3/2 \\ -5/2 & 3/2 & 0 \end{bmatrix}
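The same decomposition can be reproduced numerically. A minimal NumPy sketch using the matrix from this example:

```python
import numpy as np

A = np.array([[ 1., 2., 4.],
              [-2., 5., 3.],
              [-1., 6., 3.]])

S = (A + A.T) / 2     # symmetric part
K = (A - A.T) / 2     # skew-symmetric part

print(np.allclose(S, S.T))       # True: S is symmetric
print(np.allclose(K, -K.T))      # True: K is skew-symmetric
print(np.allclose(S + K, A))     # True: the two parts add back to A
```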

Example: Check whether the following matrix A is orthogonal.

A = \frac{1}{3} \begin{bmatrix} 1 & 2 & 2 \\ 2 & 1 & -2 \\ -2 & 2 & -1 \end{bmatrix}
Solution: A′ = A^T = \frac{1}{3} \begin{bmatrix} 1 & 2 & -2 \\ 2 & 1 & 2 \\ 2 & -2 & -1 \end{bmatrix}
A × A^T = \frac{1}{9} \begin{bmatrix} 9 & 0 & 0 \\ 0 & 9 & 0 \\ 0 & 0 & 9 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = I
Hence A is an orthogonal matrix.

Elementary Transformation of Matrix


1. Interchange of any two lines
2. Multiplication of a line by a non-zero constant (e.g., k R_i)
3. Addition of a constant multiple of any line to another line (e.g., R_i + p R_j)


Note:
 Elementary transformations do not change the rank of the matrix.
 However, they may change the eigenvalues of the matrix.
 We call a linear system S1 “Row Equivalent” to linear system S2, if S1 can be obtained from S2 by
finite number of elementary row operations.

Gauss-Jordan Method for Finding Inverse


The elementary row transformations which reduce a given square matrix A to the unit matrix, when
applied to the unit matrix I, give the inverse of A.
Example: Find the inverse of \begin{bmatrix} -1 & 1 & 2 \\ 3 & -1 & 1 \\ -1 & 3 & 4 \end{bmatrix}
Solution: Write in the form [A : I]:
\left[ \begin{array}{ccc|ccc} -1 & 1 & 2 & 1 & 0 & 0 \\ 3 & -1 & 1 & 0 & 1 & 0 \\ -1 & 3 & 4 & 0 & 0 & 1 \end{array} \right]
Operate R_2 → R_2 + 3R_1, R_3 → R_3 - R_1:
\left[ \begin{array}{ccc|ccc} -1 & 1 & 2 & 1 & 0 & 0 \\ 0 & 2 & 7 & 3 & 1 & 0 \\ 0 & 2 & 2 & -1 & 0 & 1 \end{array} \right]
Operate R_3 → R_3 - R_2:
\left[ \begin{array}{ccc|ccc} -1 & 1 & 2 & 1 & 0 & 0 \\ 0 & 2 & 7 & 3 & 1 & 0 \\ 0 & 0 & -5 & -4 & -1 & 1 \end{array} \right]
Operate R_1 → -R_1, R_2 → R_2/2, R_3 → -R_3/5:
\left[ \begin{array}{ccc|ccc} 1 & -1 & -2 & -1 & 0 & 0 \\ 0 & 1 & 7/2 & 3/2 & 1/2 & 0 \\ 0 & 0 & 1 & 4/5 & 1/5 & -1/5 \end{array} \right]
Operate R_2 → R_2 - (7/2)R_3, R_1 → R_1 + 2R_3:
\left[ \begin{array}{ccc|ccc} 1 & -1 & 0 & 3/5 & 2/5 & -2/5 \\ 0 & 1 & 0 & -13/10 & -1/5 & 7/10 \\ 0 & 0 & 1 & 4/5 & 1/5 & -1/5 \end{array} \right]
Operate R_1 → R_1 + R_2:
\left[ \begin{array}{ccc|ccc} 1 & 0 & 0 & -7/10 & 1/5 & 3/10 \\ 0 & 1 & 0 & -13/10 & -1/5 & 7/10 \\ 0 & 0 & 1 & 4/5 & 1/5 & -1/5 \end{array} \right]
Hence A^{-1} = \begin{bmatrix} -7/10 & 1/5 & 3/10 \\ -13/10 & -1/5 & 7/10 \\ 4/5 & 1/5 & -1/5 \end{bmatrix}
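The procedure above can be written as a short program. The following is a compact sketch of Gauss-Jordan inversion (it assumes the matrix is non-singular and that no zero pivot is encountered, so it omits the row interchanges a robust implementation would need):

```python
import numpy as np

def gauss_jordan_inverse(A):
    A = np.array(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])          # form [A : I]
    for i in range(n):
        aug[i] = aug[i] / aug[i, i]          # make the pivot 1
        for j in range(n):
            if j != i:
                aug[j] = aug[j] - aug[j, i] * aug[i]   # clear the rest of the column
    return aug[:, n:]                        # right half is now A^-1

A = [[-1, 1, 2],
     [ 3, -1, 1],
     [-1, 3, 4]]
print(gauss_jordan_inverse(A))
print(np.linalg.inv(np.array(A, dtype=float)))   # should match the result above
```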

Rank of Matrix
If we select any r rows and r columns from any matrix A, deleting all other rows and columns, then
the determinant formed by these r×r elements is called minor of A of order r.

Definition: A matrix is said to be of rank r when,


i) It has at least one non-zero minor of order r.
ii) Every minor of order higher than r vanishes.
Other definition: The rank is also defined as maximum number of linearly independent row vectors.

Special Case: Rank of Square Matrix


Rank = number of non-zero rows after reducing the matrix to upper triangular (row echelon) form using elementary transformations.
Note:
1. r(A.B) ≤ min {r(A), r (B)}
2. r(A+B) ≤ r(A) + r (B)
3. r(A−B) ≥ r(A)− r (B)
4. The rank of a diagonal matrix is simply the number of non-zero elements in principal diagonal.

5. A system of homogeneous equations such that the number of unknown variables exceeds the
number of equations, necessarily has non-zero solutions.
6. If A is a non-singular matrix, then all the row/column vectors are independent.
7. If A is a singular matrix, then vectors of A are linearly dependent.
8. r(A)=0 iff (if and only if) A is a null matrix.
Example: Find the rank of \begin{bmatrix} 2 & 3 & -1 & -1 \\ 1 & -1 & -2 & -4 \\ 3 & 1 & 3 & -2 \\ 6 & 3 & 0 & -7 \end{bmatrix}
Solution:
R_1 ↔ R_2: \begin{bmatrix} 1 & -1 & -2 & -4 \\ 2 & 3 & -1 & -1 \\ 3 & 1 & 3 & -2 \\ 6 & 3 & 0 & -7 \end{bmatrix}
R_2 - 2R_1, R_3 - 3R_1, R_4 - 6R_1: \begin{bmatrix} 1 & -1 & -2 & -4 \\ 0 & 5 & 3 & 7 \\ 0 & 4 & 9 & 10 \\ 0 & 9 & 12 & 17 \end{bmatrix}
R_3 - (4/5)R_2, R_4 - (9/5)R_2: \begin{bmatrix} 1 & -1 & -2 & -4 \\ 0 & 5 & 3 & 7 \\ 0 & 0 & 33/5 & 22/5 \\ 0 & 0 & 33/5 & 22/5 \end{bmatrix}
R_4 - R_3: \begin{bmatrix} 1 & -1 & -2 & -4 \\ 0 & 5 & 3 & 7 \\ 0 & 0 & 33/5 & 22/5 \\ 0 & 0 & 0 & 0 \end{bmatrix}
Hence Rank = 3 (the number of non-zero rows).
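The result can be confirmed numerically with a minimal NumPy check:

```python
import numpy as np

A = np.array([[2, 3, -1, -1],
              [1, -1, -2, -4],
              [3, 1, 3, -2],
              [6, 3, 0, -7]])
print(np.linalg.matrix_rank(A))   # 3, matching the elimination above
```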

Example: The rank of the diagonal matrix \begin{bmatrix} -1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 4 \end{bmatrix} is
(A) 1 (B) 2 (C) 3 (D) 4
Solution: Option (C) is correct as the number of non-zero elements in the diagonal matrix gives the
rank.

Example: The rank of the matrix \begin{bmatrix} 2 & -4 & 6 \\ -1 & 2 & -3 \\ 3 & -6 & 9 \end{bmatrix} is
(A) 3 (B) 2 (C) 0 (D) 1
Solution: Applying R_3 → R_3 + 3R_2 and R_1 → R_1 + 2R_2,
we get \begin{bmatrix} 0 & 0 & 0 \\ -1 & 2 & -3 \\ 0 & 0 & 0 \end{bmatrix}; then R_1 ⟷ R_2 gives \begin{bmatrix} -1 & 2 & -3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}
The number of non-zero rows is 1. Hence the rank is 1.

Example: The rank of \begin{bmatrix} µ & -1 & 0 \\ 0 & µ & -1 \\ -1 & 0 & µ \end{bmatrix} is 2. The value of µ = ?
(A) 3 (B) 2 (C) 1 (D) 0

Solution: Option (C) is correct. (Hint: for the rank to drop below 3, equate the determinant µ^3 - 1 to zero, giving µ = 1.)


Vector Space
A vector space is a non-empty set V of vectors such that for any two vectors a and b in V, all their linear
combinations αa + βb (α, β real numbers) are also elements of V.

Dimension: The maximum number of linearly independent vectors in V is called the dimension of V
(and denoted as dim V).

Basis: A linearly independent set in V consisting of a maximum possible number of vectors in V is


called a basis for V. Thus the number of vectors of a basis for V is equal to dim V.

Span: The set of all linear combinations of given vectors a^{(1)}, a^{(2)}, …, a^{(p)} having the same
number of components is called the span of these vectors. Obviously, a span is a vector space.

Solution of Linear System of Equation


For the following system of equations
a_{11} x_1 + a_{12} x_2 + ⋯ + a_{1n} x_n = k_1
a_{21} x_1 + a_{22} x_2 + ⋯ + a_{2n} x_n = k_2
⋮
a_{m1} x_1 + a_{m2} x_2 + ⋯ + a_{mn} x_n = k_m
In matrix form, it can be written as AX = B, where
A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}, X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, B = \begin{bmatrix} k_1 \\ k_2 \\ \vdots \\ k_m \end{bmatrix}
A = Coefficient Matrix
C = (A : B) = Augmented Matrix
Definition: The set of values x = {x_1, …, x_n} which satisfies all the equations simultaneously is called a solution.

Meaning of Consistency, Inconsistency of Linear Equation


If the above linear equations are consistent, they have either one solution or more than one solution.
(A) Consistent →
a) Unique Solution (the two lines intersect at a single point)
x + 2y = 4
3x + 2y = 2

b) Infinite Solutions (the two lines overlap)
x + 2y = 4
3x + 6y = 12


(B) Inconsistent → No Solution (the two lines are parallel)
x + 2y = 4
x + 2y = 8

Consistency of a System of Equations


Let AX = B be a linear system of equations, let the rank of A be r, let the rank of the augmented matrix
[A : B] be r′, and let n be the number of unknowns.
For Non-Homogenous Equations (A X = B)
i) If r ≠ r ′ the equations are inconsistent i.e., there is no solution.
ii) If r = r ′ = n, the equations are consistent and there is a unique solution.
iii) If r = r ′ < n, the equations are consistent and there are infinite number of solutions.

For Homogenous Equations (A X = 0)


i) If r = n, the equations have only the trivial zero solution (i.e., x_1 = x_2 = ⋯ = x_n = 0).
ii) If r < n, the equations have non-trivial solutions (infinitely many solutions; the number of linearly
independent solutions is n - r).
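These rank conditions translate directly into a small program. A minimal sketch (the helper function name and the test systems are illustrative, not from the text):

```python
import numpy as np

def classify(A, B):
    """Classify the system A X = B by comparing rank(A) with rank([A : B])."""
    A = np.atleast_2d(A)
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.column_stack([A, B]))
    n = A.shape[1]                     # number of unknowns
    if r != r_aug:
        return "inconsistent (no solution)"
    return "unique solution" if r == n else "infinitely many solutions"

A = np.array([[1., 2.], [3., 2.]])
print(classify(A, np.array([4., 2.])))                 # unique solution

A2 = np.array([[1., 2.], [2., 4.]])
print(classify(A2, np.array([4., 8.])))                # infinitely many solutions
print(classify(A2, np.array([4., 9.])))                # inconsistent (no solution)
```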

Cramer’s Rule
Let the following two equations be there
a11 x1 + a12 x2 = b1 --------------------------------------- (i)
a21 x1 + a22 x2 = b2 -------------------------------------- (ii)
D = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}, D_1 = \begin{vmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{vmatrix}, D_2 = \begin{vmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{vmatrix}
Solution using Cramer's rule:
x_1 = D_1/D and x_2 = D_2/D
In the above method, it is assumed that
1. No. of equation = No. of unknown
2. D≠ 0

In General, for Non-Homogenous Equations


D ≠ 0 → Unique solution
D = 0 → No unique solution (either no solution or infinitely many; check consistency of the system)

For Homogenous Equations


D ≠ 0 → Only the trivial solution (x_1 = x_2 = ⋯ = x_n = 0)
D = 0 → Non-trivial solutions (infinitely many solutions)
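Cramer's rule generalises to n equations in n unknowns by replacing the i-th column of A with the right-hand side. A minimal Python sketch (the function name is illustrative; it assumes D = det A ≠ 0):

```python
import numpy as np

def cramer(A, b):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.linalg.det(A)              # must be non-zero for a unique solution
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                  # replace the i-th column by the RHS
        x[i] = np.linalg.det(Ai) / D  # x_i = D_i / D
    return x

A = [[1, 2], [3, 2]]
b = [4, 2]
print(cramer(A, b))                                               # [-1.   2.5]
print(np.linalg.solve(np.array(A, float), np.array(b, float)))    # same answer
```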


Example: Solve the following Simultaneous Equation:


x1 + 2x2 − x3 = 1
3x1 − 2x2 + 2x3 = 2
7x1 − 2x2 + 3x3 = 5
Solution: \begin{bmatrix} 1 & 2 & -1 \\ 3 & -2 & 2 \\ 7 & -2 & 3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 5 \end{bmatrix}
AX = B
C = [A : B] = \left[ \begin{array}{ccc|c} 1 & 2 & -1 & 1 \\ 3 & -2 & 2 & 2 \\ 7 & -2 & 3 & 5 \end{array} \right]
Operate R_2 - 3R_1 and R_3 - (R_1 + 2R_2):
\left[ \begin{array}{ccc|c} 1 & 2 & -1 & 1 \\ 0 & -8 & 5 & -1 \\ 0 & 0 & 0 & 0 \end{array} \right]
x_1 + 2x_2 - x_3 = 1 ----------- (i)
-8x_2 + 5x_3 = -1 ------------------ (ii)
Assume x_3 = k ---------------- (iii)
From (ii), x_2 = (5k + 1)/8
From (i), x_1 = 1 - 2x_2 + x_3 = 1 - (5k + 1)/4 + k = (3 - k)/4
The system has infinitely many solutions (for every value of k there is a solution set).

Example: For the given simultaneous equation, written in matrix form


\begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 2 & λ \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 6 \\ 10 \\ µ \end{bmatrix}, determine the values of λ and µ for the following cases:
(A) No solution (B) Unique solution (C) Infinite solutions
Solution: AX = B
C = (A : B) = \left[ \begin{array}{ccc|c} 1 & 1 & 1 & 6 \\ 1 & 2 & 3 & 10 \\ 1 & 2 & λ & µ \end{array} \right]
{Trick: Try to bring the maximum number of zeros into the last row through elementary transformations.}
R_2 → R_2 - R_1, R_3 → R_3 - R_1:
\left[ \begin{array}{ccc|c} 1 & 1 & 1 & 6 \\ 0 & 1 & 2 & 4 \\ 0 & 1 & λ - 1 & µ - 6 \end{array} \right]
R_3 → R_3 - R_2:
\left[ \begin{array}{ccc|c} 1 & 1 & 1 & 6 \\ 0 & 1 & 2 & 4 \\ 0 & 0 & λ - 3 & µ - 10 \end{array} \right]
i) For No Solution: R(A) ≠ R(C), i.e., λ - 3 = 0 but µ - 10 ≠ 0
λ = 3, µ ≠ 10

ii) For Unique Solution: R(A) = R(C) = 3
λ - 3 ≠ 0, µ may be anything
λ ≠ 3, µ may be anything
iii) For Infinite Solutions: R(A) = R(C) = 2
λ - 3 = 0, µ - 10 = 0
λ = 3, µ = 10

Eigenvalues
If A is a square matrix, the equation obtained by setting the determinant of (A - λI) equal to zero,
|A - λI| = 0, is called the characteristic equation.
The roots of this equation are called the characteristic roots / latent roots / eigenvalues of the
matrix A.

Properties of Eigenvalues
1. The sum of the Eigen values of a matrix is equal to the sum of its principal diagonal elements.
2. The product of the Eigen values of a matrix is equal to its determinant.
3. For a real symmetric matrix, the largest eigenvalue is always greater than or equal to every
diagonal element of the matrix.
4. If 𝜆 is an Eigen value of orthogonal matrix, then 1/ 𝜆 is also its Eigen value.
5. If A is real, then its eigenvalues are either real or occur in complex conjugate pairs.
6. A matrix A and its transpose A^T have the same characteristic roots (eigenvalues).
7. The Eigen values of triangular matrix are just the diagonal elements of the matrix.
8. Zero is the Eigen value of the matrix if and only if the matrix is singular.
9. Eigenvalues of a unitary or orthogonal matrix have absolute value 1.
10. Eigenvalues of a Hermitian or real symmetric matrix are purely real.
11. Eigenvalues of a skew-Hermitian or real skew-symmetric matrix are zero or purely imaginary.
12. If λ is an eigenvalue of A, then |A|/λ is an eigenvalue of adj A (because adj A = |A| A^{-1}).
13. If λ is an Eigen value of the matrix then ,
i) Eigenvalue of A−1 is 1/λ
ii) Eigenvalue of Am is λm
iii) Eigenvalue of kA are kλ (k is scalar)
iv) Eigenvalues of A + kI are λ + k
v) Eigenvalues of (A - kI)^2 are (λ - k)^2
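A few of the properties above can be spot-checked numerically. A minimal NumPy sketch (the 2 × 2 matrix is an arbitrary example; eigenvalues are compared as sorted lists):

```python
import numpy as np

A = np.array([[5., 4.],
              [1., 2.]])
k = 3.0
lam = np.sort(np.linalg.eigvals(A))

print(np.allclose(lam.sum(), np.trace(A)))                       # sum of eigenvalues = trace
print(np.allclose(lam.prod(), np.linalg.det(A)))                 # product of eigenvalues = det
print(np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(A))),
                  np.sort(1.0 / lam)))                           # eigenvalues of A^-1 are 1/lambda
print(np.allclose(np.sort(np.linalg.eigvals(A + k * np.eye(2))),
                  lam + k))                                      # eigenvalues of A + kI are lambda + k
```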

Eigenvectors
[A - λI] X = 0, where X is a non-zero vector.
For each eigenvalue λ, solving for X gives the eigenvectors. Clearly, the zero vector X = 0 is always a
solution, but it is of no practical interest.
Note:
1. The set of Eigenvalues is called SPECTRUM of A.
2. The largest of the absolute value of Eigenvalues is called spectral radius of A.
3. Multiplying an eigenvector X by the matrix A gives the same result as multiplying it by the corresponding eigenvalue λ, i.e., AX = λX.
4. For a given Eigenvalue, there can be different Eigenvectors, but for same Eigenvector, there
can’t be different Eigen values.

Properties of Eigenvectors
1. Eigenvector X of matrix A is not unique.
2. If X_i is an eigenvector, then cX_i is also an eigenvector (c = non-zero scalar constant).
3. If λ1 , λ2 , λ3 . . . . . λn are distinct, then X1 , X 2. . . . . X n are linearly independent .
4. If two or more Eigen values are equal, it may or may not be possible to get linearly independent
Eigenvectors corresponding to equal roots.
5. Two eigenvectors X_1, X_2 (column vectors) are called orthogonal vectors if X_1^T X_2 = 0.
(Note: A matrix A is orthogonal when A^T = A^{-1}, i.e., A·A^T = A·A^{-1} = I.)
6. Eigenvectors of a symmetric matrix corresponding to different Eigenvalues are orthogonal.

Example: Find the eigenvalues and eigenvectors of A = \begin{bmatrix} 5 & 4 \\ 1 & 2 \end{bmatrix}
Solution: |A - λI| = 0
\begin{vmatrix} 5-λ & 4 \\ 1 & 2-λ \end{vmatrix} = 0
λ^2 - 7λ + 6 = 0 ⇒ λ = 6, 1
By the definition of an eigenvector, [A - λI] X = 0
Hence, \begin{bmatrix} 5-λ & 4 \\ 1 & 2-λ \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 0
For λ = 6: \begin{bmatrix} -1 & 4 \\ 1 & -4 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 0
This gives the following two equations:
-x_1 + 4x_2 = 0 ---------------(i)
x_1 - 4x_2 = 0 ---------------(ii)
However, only one equation is independent:
x_1/4 = x_2/1, giving the eigenvector (4, 1)
For λ = 1: \begin{bmatrix} 4 & 4 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 0
4x_1 + 4x_2 = 0
x_1 + x_2 = 0
x_1/1 = x_2/(-1), giving the eigenvector (1, -1)
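The same eigenpairs can be obtained numerically. A minimal NumPy check of this example (the solver may return the eigenvalues in either order and scales eigenvectors to unit length, so only the directions match (4, 1) and (1, -1)):

```python
import numpy as np

A = np.array([[5., 4.],
              [1., 2.]])
vals, vecs = np.linalg.eig(A)
print(vals)                              # eigenvalues 6 and 1 (order may vary)
for lam, v in zip(vals, vecs.T):
    # each returned column satisfies A v = lambda v up to floating-point error
    print(lam, np.allclose(A @ v, lam * v))
```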

Cayley Hamilton Theorem


 Every square matrix satisfies its own characteristic equation.

Linearly Dependent Vectors


 Vector: Any quantity having n components is called a vector of order n.
 If one vector can be written as linear combination of others, the vector is linearly dependent.

Linearly Independent Vectors


 Vectors are linearly independent if none of them can be written as a linear combination of the others.
Equivalently, suppose the vectors are x_1, x_2, x_3, x_4; they are linearly independent if the linear combination
λ_1 x_1 + λ_2 x_2 + λ_3 x_3 + λ_4 x_4 = 0
holds only when λ_1 = λ_2 = λ_3 = λ_4 = 0.


 If λ1 , λ2 , λ3 , λ4 are not “all zero” → They are linearly dependent.

Example: If A = \begin{bmatrix} 1 & 2 & 2 \\ 2 & 1 & 2 \\ 2 & 2 & 1 \end{bmatrix}, find the value of A^2 - 4A - 5I.
Solution: |A - λI| = 0 ⇒ -λ^3 + 3λ^2 + 9λ + 5 = 0
⇒ (-λ^2 + 4λ + 5)(λ + 1) = 0, i.e., λ^2 - 4λ - 5 = 0 or λ + 1 = 0
Since A ≠ -I, the factor satisfied by A is λ^2 - 4λ - 5; by the Cayley-Hamilton theorem,
Hence, A^2 - 4A - 5I = 0 (this can be confirmed directly: A^2 = \begin{bmatrix} 9 & 8 & 8 \\ 8 & 9 & 8 \\ 8 & 8 & 9 \end{bmatrix} = 4A + 5I).
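A direct numerical check of this result (a minimal NumPy sketch):

```python
import numpy as np

A = np.array([[1., 2., 2.],
              [2., 1., 2.],
              [2., 2., 1.]])
# Verify that A satisfies A^2 - 4A - 5I = 0, as claimed above.
print(np.allclose(A @ A - 4 * A - 5 * np.eye(3), 0))   # True
```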

Example: For the matrix A = \begin{bmatrix} 1 & 2 & -3 \\ 0 & 3 & 2 \\ 0 & 0 & -2 \end{bmatrix}, find the eigenvalues of 3A^3 + 5A^2 - 6A + 2I.
Solution: |A - λI| = 0
\begin{vmatrix} 1-λ & 2 & -3 \\ 0 & 3-λ & 2 \\ 0 & 0 & -2-λ \end{vmatrix} = 0
(1 - λ)(3 - λ)(-2 - λ) = 0 ⟹ λ = 1, 3, -2
Eigenvalues of A = 1, 3, -2
Eigenvalues of A^3 = 1, 27, -8
Eigenvalues of A^2 = 1, 9, 4
Eigenvalues of I = 1, 1, 1
First Eigen value of 3A3 + 5A2 − 6A + 2 I = 3 × 1 + 5 × 1 − 6 × 1 + 2 = 4
Second Eigen value of 3A3 + 5A2 − 6A + 2 I = 3 (27)+5(9)–6(3)+2=81+45–18+2 = 110
Third Eigen value of 3A3 + 5A2 − 6A + 2 I =3(−8)+5(4)–6(−2)+2= −24 + 20 + 12 + 2 = 10

Example: Find the eigenvalues of the matrix A = \begin{bmatrix} a_{11} & 0 & 0 & 0 \\ a_{21} & a_{22} & 0 & 0 \\ a_{31} & a_{32} & a_{33} & 0 \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix}
Solution: |A - λI| = \begin{vmatrix} a_{11}-λ & 0 & 0 & 0 \\ a_{21} & a_{22}-λ & 0 & 0 \\ a_{31} & a_{32} & a_{33}-λ & 0 \\ a_{41} & a_{42} & a_{43} & a_{44}-λ \end{vmatrix} = 0
Expanding, (a_{11} - λ)(a_{22} - λ)(a_{33} - λ)(a_{44} - λ) = 0
λ = a_{11}, a_{22}, a_{33}, a_{44} → which are just the diagonal elements.
Note: Recall the property of Eigenvalues, “The Eigen value of triangular matrix are just
the diagonal elements of the matrix”.
