
Math Preparatory Programme: 2015

Sessions 4 & 5: Vectors, Matrices and Linear Equations


Apratim Guha

Vectors

Definition 1 By a vector x we denote a list of numbers, say, for example, 1, 2, 3, 4, arranged either in a column $x = \begin{pmatrix}1\\2\\3\\4\end{pmatrix}$ or in a row: x = (1 2 3 4). In the first case x is a column
vector, and in the second case it is a row vector.
To save space I shall write most of my vectors as row vectors, but the rules will apply
to column vectors as well.
Some terms and properties:
1. The length of a vector is the number of terms in it. E.g. (1 2) is a vector of length 2.
2. A scalar is a vector of length 1. It is nothing but just a number, e.g. 5, 10 etc.
3. A zero vector is any vector all of whose elements are 0. The length of the vector may
vary and should be clear from the context. E.g. both (0 0 0) and (0 0 0 0) are zero
vectors.
4. Equality of Vectors: We say two vectors a and b are equal if (a) they are of the
same type (column or row) (b) of the same length and (c) elements at the same
position are equal individually.
5. Sum of Vectors: The sum of two vectors of the same length is given by the item-wise
sum:
$(a_1\ a_2\ a_3) + (b_1\ b_2\ b_3) = (a_1+b_1\ \ a_2+b_2\ \ a_3+b_3)$.
Note that:
(a) Only like vectors can be added: i.e. column vectors add to column vectors,
row vectors add to row vectors. Their lengths must also be the same.
(b) The sum is commutative: a + b = b + a.
Example 1 (1 2 3) + (3 2 1) = (3 2 1) + (1 2 3) = (4 4 4).
(c) The sum is associative: (a + b) + c = a + (b + c).
Hence we can get rid of the parentheses and just write a + b + c for the sum.
6. Scalar multiplication of vectors: When a vector x is multiplied by a scalar c, the
result is a new vector whose elements are the elements of x multiplied item-wise by
c.
Example 2 $5 \cdot (1\ 2\ 3) = (5\cdot 1\ \ 5\cdot 2\ \ 5\cdot 3) = (5\ 10\ 15)$.
In particular,
(a) $(-1) \cdot x$ is a vector whose elements are the negatives of the elements of x. We
write such a vector as $-x$.
(b) Subtraction is defined as $x - y = x + (-y)$. In particular, $x - x$ is the zero
vector of length same as x, denoted by 0.
(c) $0 \cdot x$ is also the zero vector.
7. The dot product or inner product of two vectors (of the same length) x and y, written
as $x \cdot y$, is the sum of the element-wise products.
Problem 1 If x = (1 2 3) and y = (1 3 5), then compute $x \cdot y$.
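As a quick check on Problem 1, the dot product is a one-liner in Python (a sketch added for illustration; the function name is ours, not part of the original notes):

```python
# Dot product: sum of element-wise products of two equal-length vectors.
def dot(x, y):
    assert len(x) == len(y), "vectors must have the same length"
    return sum(a * b for a, b in zip(x, y))

print(dot((1, 2, 3), (1, 3, 5)))  # 1*1 + 2*3 + 3*5 = 22
```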
8. The norm of a vector x, denoted by $\|x\|$, is defined as $\|x\| = \sqrt{x \cdot x}$.

Problem 2 If x = (1 2 3) and y = (1 3 5), then compute $\|x\|$ and $\|y\|$. Check that
(a) $2(x \cdot y) \le \|x\|^2 + \|y\|^2$;
(b) $\|x + y\|^2 + \|x - y\|^2 = 2(\|x\|^2 + \|y\|^2)$;
(c) $\|x + y\|^2 - \|x - y\|^2 = 4(x \cdot y)$.
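The three identities of Problem 2 can be verified numerically with a short Python sketch (helper names are ours):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))  # ||x|| = sqrt(x . x)

x, y = (1, 2, 3), (1, 3, 5)
s = tuple(a + b for a, b in zip(x, y))   # x + y
d = tuple(a - b for a, b in zip(x, y))   # x - y

assert 2 * dot(x, y) <= norm(x) ** 2 + norm(y) ** 2                # (a)
assert math.isclose(norm(s) ** 2 + norm(d) ** 2,
                    2 * (norm(x) ** 2 + norm(y) ** 2))             # (b)
assert math.isclose(norm(s) ** 2 - norm(d) ** 2, 4 * dot(x, y))    # (c)
print(norm(x), norm(y))  # sqrt(14) and sqrt(35)
```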

9. The results (a)-(c) in Problem 2 are valid for any two vectors x and y of equal length.
10. Distance of Two Vectors: The distance between two vectors x and y is given by
$\|x - y\|$.
Problem 3 If x = (1 0 0), y = (1 3 5) and z = (0 1 5), then compute the distances
between x and y, x and z; and y and z. Which two points are the farthest apart?
Check that their distance is less than the sum of the other two distances. This is
called the triangle inequality.
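A small Python sketch for Problem 3 (the helper name is ours):

```python
import math

def dist(x, y):
    # distance between two vectors = ||x - y||
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

x, y, z = (1, 0, 0), (1, 3, 5), (0, 1, 5)
dxy, dxz, dyz = dist(x, y), dist(x, z), dist(y, z)
print(dxy, dxz, dyz)  # sqrt(34), sqrt(27), sqrt(5)

# triangle check: the largest distance does not exceed the sum of the other two
a, b, c = sorted([dxy, dxz, dyz])
assert c <= a + b
```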

Linear Dependence of Vectors

Definition 2 Two non-zero vectors x and y are linearly dependent if there is a non-zero
constant c such that cx = y. They are linearly independent if there is no such non-zero c.
Note: The zero vector (of length n) is considered to be linearly dependent on all other
vectors (of length n).
Problem 4 Which of the following pairs are linearly independent and which are linearly
dependent?
(a) (1 0) and (0 1);
(b) (1 2) and (3 6);
(c) (1 2 3) and (4 5 6);
(d) (2 6 10) and (5 15 25);
(e) (2 4 8) and (4 8 16);
(f ) (2 4 8) and (4 16 64).
Definition 3 Consider k (k ≥ 2) non-zero vectors $x_1, x_2, \ldots, x_k$. These vectors are linearly independent if whenever we write $c_1 x_1 + c_2 x_2 + \cdots + c_k x_k = 0$, that must mean that
$c_1 = c_2 = \cdots = c_k = 0$.
$x_1, x_2, \ldots, x_k$ are linearly dependent if we can write $c_1 x_1 + c_2 x_2 + \cdots + c_k x_k = 0$ with at
least some non-zero $c_j$'s.
Note 1: We call $c_1 x_1 + c_2 x_2 + \cdots + c_k x_k$ a linear combination of $x_1, x_2, \ldots, x_k$.
Note 2: Definition 2 is just an easier way to write Definition 3 for the case of two vectors (k = 2).
Problem 5 Which of the following are linearly independent and which are linearly dependent?
(a) (1 0), (0 1) and (5 6);

(b) (1 0), (0 1) and (a b), at least one of a and b being non-zero;
(c) (1 2 3), (4 5 6) and (5 7 10);
(d) (2 4 8), (3 8 24) and (4 16 64);
(e) (1 0 0), (0 1 0) and (0 0 1);
(f ) (1 0 0), (0 1 0), (0 0 1) and (a b c), at least one of a, b and c being non-zero.
Note: It can be shown that in any collection of vectors of length n, there cannot be more
than n linearly independent vectors. If we somehow obtain n linearly independent vectors
of length n, then any other vector of length n can be written as a linear combination of
those vectors.
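A mechanical test for linear independence, sketched in Python: stack the vectors as rows and row-reduce with exact fractions; the vectors are independent exactly when the number of pivots equals the number of vectors. (The helper names are ours.)

```python
from fractions import Fraction

def rank(rows):
    """Number of pivots after Gaussian elimination (exact arithmetic)."""
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            for c in range(col, len(m[0])):
                m[i][c] -= f * m[r][c]
        r += 1
    return r

def independent(vectors):
    return rank(list(vectors)) == len(vectors)

print(independent([(1, 2), (3, 6)]))                      # False: (3 6) = 3 * (1 2)
print(independent([(1, 2, 3), (4, 5, 6)]))                # True
print(independent([(2, 4, 8), (3, 8, 24), (4, 16, 64)]))  # False
```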
Notation: The collection of all vectors of length n is denoted by $\mathbb{R}^n$.
Definition 4 A collection of n linearly independent vectors of length n is called a basis of
all vectors of length n, i.e. a basis of $\mathbb{R}^n$.
Note: Bases are not unique. For example, one can construct three sets of bases for $\mathbb{R}^2$
from the collection given in Problem 5(a) above.
These ideas will come in handy when we discuss matrices, which is up next.
Problem 6 From Problems 4 and 5, suggest (i) a basis for vectors of length 2, and (ii) a
basis for vectors of length 3.

Matrices


 

Definition 5 A matrix is a two-dimensional array of numbers: for example
$\begin{pmatrix}1&2\\3&4\end{pmatrix}$, $\begin{pmatrix}1&2&3\\e&2&2\end{pmatrix}$ and $\begin{pmatrix}0&0\\0&0\\0&0\end{pmatrix}$ are all matrices.
These are respectively a 2×2 matrix, a 2×3 matrix and a 3×2 matrix (read 2
by 2 etc.); the first figure refers to the number of rows and the second to the number of
columns. So row vectors like (x, y) and (x, y, z) are also matrices, respectively 1×2 and
1×3 matrices. Similarly a column vector of length 2 is a 2×1 matrix.
Some terms and properties:
1. If you are presented with an m×n matrix $A = (a_{ij})$, then the notation here simply
means that $a_{ij}$ is the (i, j)-th entry. That is, the entry in the i-th row and j-th column
is $a_{ij}$. Note that i can vary between 1 and m, and that j can vary between 1 and n.
So the i-th row = $(a_{i1}\ \cdots\ a_{in})$ and the j-th column = $\begin{pmatrix}a_{1j}\\ \vdots \\ a_{mj}\end{pmatrix}$.
Note 1: We sometimes write the matrix A as $A_{m\times n}$ when we need to make the
dimensions clear, but sometimes we drop the subscript when we are sure what is
going on.
Note 2: As with vectors there is a zero m×n matrix, whose every entry is 0, which
we denote by $0_{m\times n}$.
2. Equality of Matrices: We say two matrices A and B are equal if (a) they are of
the same order, i.e. have the same number of rows and columns and (b) elements at
the same position are equal individually.
3. Transpose of a Matrix: The transpose of an m×n matrix $A = (a_{ij})$ is the n×m matrix
$A^T = (a_{ji})$, i.e. the matrix obtained by interchanging the rows and the columns.
Note: The transpose of a row vector is a column vector, and the transpose of a
column vector is a row vector.





Problem 7 If $A = \begin{pmatrix}1&2&3\\0&1&4\end{pmatrix}$ and $B = \begin{pmatrix}2&3&0\\1&2&5\end{pmatrix}$, then obtain (a) $A^T$, (b) $B^T$ and
(c) $(A^T)^T$.
Solution:
Note: You should have noticed that the transpose of the transpose of a matrix is
the matrix itself.

4. Sum of Matrices: If $A = (a_{ij})$ and $B = (b_{ij})$ are two m×n matrices, then C = A + B
is also an m×n matrix, where $c_{ij} = a_{ij} + b_{ij}$ for 1 ≤ i ≤ m, 1 ≤ j ≤ n.
Note 1: Only matrices of same order can be added.
Note 2: Just like vector addition, matrix addition is also associative and commutative.
Note 3: Transposition distributes over matrix addition: $(A + B)^T = A^T + B^T$.
5. Scalar Multiplication: For any scalar k, $k \cdot A = (k\,a_{ij})$, i.e. the resulting matrix
is such that each element is multiplied by k.
Note 1: We write $(-1) \cdot A = -A$. Also note that $0 \cdot A = 0$ for any matrix A.
Note 2: $(kA)^T = kA^T$.





Problem 8 If $A = \begin{pmatrix}1&2&3\\0&1&4\end{pmatrix}$ and $B = \begin{pmatrix}2&3&0\\1&2&5\end{pmatrix}$, then obtain (a) A + B, (b)
A − B, (c) 2A − B and (d) $(B - A)^T$.
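Matrix sum, scalar multiple and transpose are easy to sketch in Python with matrices stored as lists of rows (the helper names are ours):

```python
def madd(A, B):
    # element-wise sum; A and B must be of the same order
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def smul(k, A):
    # multiply every element by the scalar k
    return [[k * a for a in row] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3], [0, 1, 4]]
B = [[2, 3, 0], [1, 2, 5]]
print(madd(A, B))                     # A + B
print(madd(smul(2, A), smul(-1, B)))  # 2A - B
assert transpose(madd(A, B)) == madd(transpose(A), transpose(B))  # (A+B)^T = A^T + B^T
```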
6. Matrix Multiplication: Consider an m×n matrix A and an n×p matrix B.
Then the product of these two matrices is an m×p matrix C = AB such that
$c_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj}$ for 1 ≤ i ≤ m and 1 ≤ j ≤ p.


Example 3 Let $A = \begin{pmatrix}1&2&3\\0&1&4\end{pmatrix}$ and $B = \begin{pmatrix}-1&2&0\\2&3&0\\5&0&1\end{pmatrix}$. Then, what is C = AB?
Solution: Firstly, we note that A is a 2×3 matrix and B is a 3×3 matrix. Hence
C is a 2×3 matrix.
Now,
$c_{11} = 1 \cdot (-1) + 2 \cdot 2 + 3 \cdot 5 = 18.$
Work out the other terms. We get
$C = \begin{pmatrix}18 & \cdot & \cdot \\ \cdot & \cdot & \cdot\end{pmatrix}$ (Fill in the blanks!)
Note 1: We can (post-)multiply a matrix A by another matrix B only if the number
of columns of A is equal to the number of rows of B.
Note 2: Non-commutativity of Matrix Multiplication: Not only is AB ≠ BA
in general, sometimes AB will be defined but BA will not be defined. E.g. in Example 3
we obtained AB, but BA is not defined at all.
Note 3: If x is a row vector of length n and y is a column vector of length n, then
xy is the sum of element-wise products. For example, if x = (1 2) and $y = \begin{pmatrix}3\\4\end{pmatrix}$,
then $xy = 1 \cdot 3 + 2 \cdot 4 = 11$.
Note 4: In particular, for a row vector x, $xx^T = x \cdot x = \|x\|^2$.
Note 5: If AB exists, then $(AB)^T = B^T A^T$.
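The row-by-column rule and Note 5 can be checked with a short Python sketch, reusing the matrices of Example 3 (the helper names are ours):

```python
def matmul(A, B):
    # c_ij = sum_k a_ik * b_kj; needs columns of A = rows of B
    assert len(A[0]) == len(B), "incompatible orders"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3], [0, 1, 4]]
B = [[-1, 2, 0], [2, 3, 0], [5, 0, 1]]
C = matmul(A, B)
print(C[0][0])  # c11 = 1*(-1) + 2*2 + 3*5 = 18, as in Example 3
assert transpose(C) == matmul(transpose(B), transpose(A))  # (AB)^T = B^T A^T
```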
Problem 9 Obtain AB and/or BA when possible, and verify that $(AB)^T = B^T A^T$.
If BA exists, what is $(BA)^T$?
(a) $A = \begin{pmatrix}3&2\\1&0\end{pmatrix}$ and $B = \begin{pmatrix}1&0\\0&1\end{pmatrix}$
(b) $A = \begin{pmatrix}1&2&3\\0&1&4\end{pmatrix}$ and $B = \begin{pmatrix}1&2\\2&3\\5&0\end{pmatrix}$
(c) $A = \begin{pmatrix}1&2\\0&1\end{pmatrix}$ and $B = \begin{pmatrix}1&2\\2&3\\5&0\end{pmatrix}$
(d) $A = \begin{pmatrix}1&2&3\\0&1&4\end{pmatrix}$ and $B = \begin{pmatrix}3&3&1\\2&0&0\end{pmatrix}$
We have already mentioned that rows and columns of matrices are vectors. Hence
we can try to find the number of linearly independent rows or columns of a matrix.
These quantities actually have a name.
7. Row Rank and Column Rank of a Matrix: The row rank of a matrix is the
number of linearly independent rows in the matrix. Similarly, the column rank of a
matrix is the number of linearly independent columns in the matrix.
Note: We say that an m×n matrix is of full row rank if all its rows are linearly
independent, i.e. row rank = m.
Similarly, we say that an m×n matrix is of full column rank if all its columns are
linearly independent, i.e. column rank = n.


Problem 10 Find the row rank and the column rank of the matrix $A = \begin{pmatrix}1&2&3\\0&1&4\end{pmatrix}$.
Is it of (a) full row rank or (b) full column rank?
8. Rank of a matrix: For any matrix, row rank and column rank are equal, and that
common value is known as the rank of that matrix.
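Since row rank and column rank agree, row-reducing A and $A^T$ must give the same count. A Python sketch with exact fractions (the helper names are ours):

```python
from fractions import Fraction

def rank(rows):
    """Number of pivots after Gaussian elimination (exact arithmetic)."""
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            for c in range(col, len(m[0])):
                m[i][c] -= f * m[r][c]
        r += 1
    return r

A = [[1, 2, 3], [0, 1, 4]]           # the matrix of Problem 10
At = [list(col) for col in zip(*A)]  # its transpose
print(rank(A), rank(At))             # row rank = column rank = 2
```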
Problem 11 Compute the rank of the following matrices and check if they are of full
row rank or full column rank:
(a) $\begin{pmatrix}3&3&1\\2&0&0\end{pmatrix}$;
(b) $\begin{pmatrix}1&2\\2&3\\5&0\end{pmatrix}$;
(c) $\begin{pmatrix}1&2&0\\2&4&0\\5&0&1\end{pmatrix}$.

Square Matrices

Definition: An m×n matrix is a square matrix if m = n. We call an n×n matrix a
square matrix of order n. The diagonal of a square matrix of order n consists of all the
entries $a_{ii}$, i = 1, 2, …, n.
E.g. $\begin{pmatrix}1&2&0\\2&4&0\\5&0&1\end{pmatrix}$ is a square matrix of order 3. The diagonal elements are 1, 4 and 1.
1. Symmetric Matrix: A square matrix A is said to be symmetric if $A^T = A$, i.e.
$a_{ij} = a_{ji}$ for all i and j.
Example 4 The matrix $\begin{pmatrix}1&2&5\\2&4&0\\5&0&1\end{pmatrix}$ is symmetric, but the matrix $\begin{pmatrix}1&2&5\\-2&4&0\\5&0&1\end{pmatrix}$
is not.
Note: For any m×n matrix A, $AA^T$ is a symmetric matrix of order m and $A^T A$ is
a symmetric matrix of order n.
2. Diagonal Matrix: A square matrix of order n is called a diagonal matrix of order n
if all the off-diagonal elements are zero. E.g. the matrix $\begin{pmatrix}1&0&0\\0&4&0\\0&0&1\end{pmatrix}$ is a diagonal
matrix of order 3.
3. Identity Matrix: An identity matrix of order n is a diagonal matrix of order n
where all the diagonal elements are equal to 1. We denote the identity matrix of
order n by In .
An identity matrix is named so because when a matrix is pre-multiplied or post-multiplied by a compatible identity matrix, we get back the same matrix.

Example 5 (a) $\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}1&2&5\\2&4&0\\5&0&1\end{pmatrix} = \begin{pmatrix}1&2&5\\2&4&0\\5&0&1\end{pmatrix} = \begin{pmatrix}1&2&5\\2&4&0\\5&0&1\end{pmatrix}\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}$;
(b) $\begin{pmatrix}1&0\\0&1\end{pmatrix}\begin{pmatrix}1&2&5\\2&4&0\end{pmatrix} = \begin{pmatrix}1&2&5\\2&4&0\end{pmatrix} = \begin{pmatrix}1&2&5\\2&4&0\end{pmatrix}\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}$.
4. Inverse of a Square Matrix: The inverse of a square matrix A of order n, when
it exists, is a matrix B of order n such that $AB = I_n = BA$.
Notation: The inverse of a matrix A is written as $A^{-1}$.




Example 6 The matrix $\begin{pmatrix}2&2\\1&4\end{pmatrix}$ has the inverse $\begin{pmatrix}2/3&-1/3\\-1/6&1/3\end{pmatrix}$;
the matrix $\begin{pmatrix}1&2&3\\2&3&4\\0&0&1\end{pmatrix}$ has the inverse $\begin{pmatrix}-3&2&1\\2&-1&-2\\0&0&1\end{pmatrix}$.
Note 1: The inverse of a square matrix does not always exist. The matrix must
be of full rank, i.e. its rank must equal its order, for the inverse to exist. In other
words, the inverse of a matrix exists only when all its columns (or rows) are linearly
independent. Such a matrix is called a non-singular matrix. If the matrix is not of full
rank, then the inverse does not exist, and the matrix is called a singular matrix.

Example 7 The matrix $\begin{pmatrix}2&1&0\\2&4&0\\2&5&0\end{pmatrix}$ has rank 2, and hence it does not have an inverse.
Note 2: The inverse of the identity matrix is itself.
Note 3: We are not ready to discuss the computation of the inverse of a matrix yet.
We shall know a method after we discuss determinants.

3.1 Determinant of a Square Matrix

For every square matrix A, a quantity called the determinant of the matrix can be
computed. It is denoted by det A or simply |A|.

The general formula for computing the determinant of a matrix is complicated and
is beyond the scope of our class. We are only going to discuss the formulas for 2×2 and
3×3 matrices. For more general formulas see http://en.wikipedia.org/wiki/Determinant.


5. Determinant of a 2×2 Matrix: Let us write the matrix as $A = \begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}$.
Then its determinant is given by $|A| = \begin{vmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}$.

6. Determinant of a 3×3 Matrix: Let us write the matrix as $A = \begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}$.
Then its determinant is given by
$$|A| = \begin{vmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{vmatrix} = a_{11}\begin{vmatrix}a_{22}&a_{23}\\a_{32}&a_{33}\end{vmatrix} - a_{12}\begin{vmatrix}a_{21}&a_{23}\\a_{31}&a_{33}\end{vmatrix} + a_{13}\begin{vmatrix}a_{21}&a_{22}\\a_{31}&a_{32}\end{vmatrix}.$$
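The two formulas translate directly into Python (a sketch; the function names are ours):

```python
def det2(a11, a12, a21, a22):
    # |A| = a11*a22 - a12*a21
    return a11 * a22 - a12 * a21

def det3(A):
    # cofactor expansion along the first row
    (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = A
    return (a11 * det2(a22, a23, a32, a33)
            - a12 * det2(a21, a23, a31, a33)
            + a13 * det2(a21, a22, a31, a32))

print(det2(1, 2, 3, 4))                         # 1*4 - 2*3 = -2
print(det3([[3, 0, 0], [2, 1, 0], [1, 2, 1]]))  # triangular, so 3*1*1 = 3
```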
Problem 12 Compute the determinants of the following matrices:
(a) $\begin{pmatrix}1&0\\3&1\end{pmatrix}$;
(b) $\begin{pmatrix}1&2\\3&4\end{pmatrix}$;
(c) $\begin{pmatrix}3&2&0\\2&1&0\\1&2&1\end{pmatrix}$;
(d) $\begin{pmatrix}3&0&0\\2&1&0\\1&2&1\end{pmatrix}$;
(e) $\begin{pmatrix}1&0&0\\2&4&6\\1&2&3\end{pmatrix}$.
7. Some Properties of Determinants:
(a) $|A^T| = |A|$.
(b) $|AB| = |A|\,|B| = |B|\,|A| = |BA|$.
(c) If A is a square matrix of order n and k is a constant, then $|kA| = k^n |A|$.
(d) If A is a singular matrix, |A| = 0.
(e) If A is a non-singular matrix, then $|A^{-1}| = 1/|A|$.
(f) In general, $|A + B| \ne |A| + |B|$.
(g) $|I_n| = 1$.
8. Co-factors: For a square matrix A, the co-factor of $a_{ij}$, denoted by $A_{ij}$, is defined
as the determinant of the matrix obtained from A by deleting the i-th row and the j-th column,
multiplied by 1 if (i + j) is even, and by (−1) if (i + j) is odd.
For example, for a 2×2 matrix $A = \begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}$, the cofactor $A_{11}$ of $a_{11}$ is $a_{22}$, the
cofactor $A_{12}$ of $a_{12}$ is $-a_{21}$, and so on.
So, we can write $|A| = a_{11}A_{11} + a_{12}A_{12}$.
Similarly, for a 3×3 matrix $A = \begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}$, the cofactor $A_{11}$ of $a_{11}$ is $\begin{vmatrix}a_{22}&a_{23}\\a_{32}&a_{33}\end{vmatrix}$,
the cofactor $A_{12}$ of $a_{12}$ is $-\begin{vmatrix}a_{21}&a_{23}\\a_{31}&a_{33}\end{vmatrix}$, and so on.
So, we can write $|A| = a_{11}A_{11} + a_{12}A_{12} + a_{13}A_{13}$. This formula can be generalized to
a matrix of any order.
Problem 13 Compute the determinants of the 3×3 matrices in Problem 12 using
co-factors.
9. Obtaining the Inverse of a Square Matrix: The inverse of a non-singular square matrix A is
given by $A^{-1} = \frac{A^+}{|A|}$, where $A^+ = (A_{ij})^T$ is the transpose of the matrix of the
co-factors of A (also called the adjugate of A).
Problem 14 Verify that the inverses stated in Example 6 are correct.
Problem 15 Where possible, obtain the inverses of the matrices given in Problem 12.
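The cofactor formula for the inverse can be sketched in Python with exact fractions (our own function names; the code works for any order, shown on a small 2×2 example):

```python
from fractions import Fraction

def minor(A, i, j):
    # delete the i-th row and the j-th column
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    # cofactor expansion along the first row
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def inverse(A):
    d = det(A)
    assert d != 0, "singular matrix: no inverse"
    n = len(A)
    cof = [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)] for i in range(n)]
    # A^{-1} = (cofactor matrix)^T / |A|
    return [[Fraction(cof[j][i], d) for j in range(n)] for i in range(n)]

inv = inverse([[2, 2], [1, 4]])
print(inv)  # rows (2/3, -1/3) and (-1/6, 1/3)
```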

3.2 Eigenvalues and Eigenvectors of Square Matrices

Definition 6 A non-zero vector x is an eigenvector (or characteristic vector) of a square
matrix A if there exists a scalar λ such that $Ax = \lambda x$. Then λ is an eigenvalue (or
characteristic value) of A.
Note: The zero vector can not be an eigenvector even though A0 = 0. But λ = 0 can
be an eigenvalue.
 


Example 8 Show that $x = \begin{pmatrix}2\\-1\end{pmatrix}$ is an eigenvector of the matrix $A = \begin{pmatrix}2&4\\3&6\end{pmatrix}$. Find a
corresponding eigenvalue.
Solution: We see that $Ax = \begin{pmatrix}2&4\\3&6\end{pmatrix}\begin{pmatrix}2\\-1\end{pmatrix} = \begin{pmatrix}0\\0\end{pmatrix}$.
Now, for λ = 0, $\lambda x = \begin{pmatrix}0\\0\end{pmatrix}$.
Hence x is an eigenvector of A with a corresponding eigenvalue 0.
Geometric Interpretation of eigenvector and eigenvalue: An n×n matrix A
multiplied by an n×1 vector x results in another n×1 vector y = Ax. Thus A can be
considered as a transformation matrix.
In general, a matrix acts on a vector by changing both its magnitude and its direction.
However, a matrix may act on certain vectors by changing only their magnitude, and leaving
their direction unchanged (or possibly reversing it). These vectors are the eigenvectors of
the matrix.
A matrix acts on an eigenvector by multiplying its magnitude by a factor, which is
positive if its direction is unchanged and negative if its direction is reversed. This factor is
the eigenvalue associated with that eigenvector.
Solving for eigenvalues: Let x be an eigenvector of the matrix A. Then there must
exist an eigenvalue λ such that $Ax = \lambda x$ or, equivalently,
$Ax - \lambda x = 0$, or
$(A - \lambda I)x = 0$.
If we define a new matrix $B = A - \lambda I$, then
$Bx = 0.$
If B has an inverse then $x = B^{-1}0 = 0$. But an eigenvector cannot be zero. Thus, it follows
that x will be an eigenvector of A if and only if B does not have an inverse, or equivalently
|B| = 0, or
$$|A - \lambda I| = 0.$$
This is called the characteristic equation of A. Its roots determine the eigenvalues of A.


Example 9 Find the eigenvalues of $A = \begin{pmatrix}2&-12\\1&-5\end{pmatrix}$.
Solution:
$$|\lambda I - A| = \begin{vmatrix}\lambda - 2 & 12\\ -1 & \lambda + 5\end{vmatrix} = (\lambda - 2)(\lambda + 5) + 12 = \lambda^2 + 3\lambda + 2 = (\lambda + 1)(\lambda + 2).$$
Hence the matrix has two eigenvalues: −1 and −2.
Note 1: A matrix of order n has exactly n eigenvalues, counting multiplicity (some of them possibly complex).
Note 2: The roots of the characteristic equation can be repeated. That is, $\lambda_1 = \lambda_2 =
\cdots = \lambda_k$ is possible, where k ≤ n, the order of the square matrix. If that happens, the
eigenvalue is said to be of multiplicity k.
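For a 2×2 matrix the characteristic equation is the quadratic $\lambda^2 - (\text{trace})\lambda + \det = 0$, so the eigenvalues can be sketched directly in Python (real case only; the function name is ours):

```python
import math

def eigenvalues_2x2(A):
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c   # |A - lambda*I| = lambda^2 - tr*lambda + det
    disc = tr * tr - 4 * det
    assert disc >= 0, "complex eigenvalues not handled in this sketch"
    s = math.sqrt(disc)
    return (tr + s) / 2, (tr - s) / 2

print(eigenvalues_2x2([[2, -12], [1, -5]]))  # (-1.0, -2.0), as in Example 9
```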

Problem 16 Check that the matrix $A = \begin{pmatrix}2&1&0\\0&2&0\\0&0&2\end{pmatrix}$ has only one distinct eigenvalue, of
multiplicity 3.
Eigenvectors: To each distinct eigenvalue of a matrix A there will correspond at least
one eigenvector, which can be found by solving the appropriate set of equations. If λ is an
eigenvalue, then a corresponding eigenvector x is a solution of $(A - \lambda I)x = 0$.


Example 10 For $A = \begin{pmatrix}2&-12\\1&-5\end{pmatrix}$ from Example 9, consider the eigenvalue λ = −1. Notice
that $(A - \lambda I) = (A + I) = \begin{pmatrix}3&-12\\1&-4\end{pmatrix}$.
So we need to solve $\begin{pmatrix}3&-12\\1&-4\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix} = 0$,
or $3x_1 - 12x_2 = 0$; $x_1 - 4x_2 = 0$.
Notice that this system of equations has infinitely many solutions (we'll explain that
again later) of the form $\begin{pmatrix}x_1\\x_2\end{pmatrix} = \begin{pmatrix}4t\\t\end{pmatrix}$, where t ≠ 0. So one eigenvector corresponding to
λ = −1 is $\begin{pmatrix}4\\1\end{pmatrix}$.
Similarly, an eigenvector corresponding to λ = −2 is $\begin{pmatrix}3\\1\end{pmatrix}$.
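For a 2×2 matrix a non-zero solution of $(A - \lambda I)x = 0$ can be read off directly: if a row of $A - \lambda I$ is (p q), then (−q, p) is orthogonal to it, and the other row vanishes on it too because the matrix is singular. A Python sketch (our own helper name):

```python
def eigenvector_2x2(A, lam):
    """A non-zero solution of (A - lam*I)x = 0; lam must be an eigenvalue of A."""
    rows = [(A[0][0] - lam, A[0][1]), (A[1][0], A[1][1] - lam)]
    for p, q in rows:
        if (p, q) != (0, 0):
            x = (-q, p)  # orthogonal to the row (p q): p*(-q) + q*p = 0
            # both rows must vanish on x, since A - lam*I is singular
            assert all(r * x[0] + s * x[1] == 0 for r, s in rows)
            return x
    return (1, 0)        # A - lam*I = 0: every non-zero vector works

A = [[2, -12], [1, -5]]
print(eigenvector_2x2(A, -1))  # (12, 3), proportional to (4, 1)
print(eigenvector_2x2(A, -2))  # (12, 4), proportional to (3, 1)
```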

Problem 17 Obtain an eigenvector of the matrix $A = \begin{pmatrix}2&1&0\\0&2&0\\0&0&2\end{pmatrix}$.
Some Properties:
1. The sum of the eigenvalues of a square matrix equals the sum of the elements on the
main diagonal. This sum is called the trace of the matrix.
2. A square matrix is singular if and only if it has a zero eigenvalue.
3. The eigenvalues of an upper (or lower) triangular matrix are the elements on the
main diagonal. Same is true for a diagonal matrix.
4. If λ is an eigenvalue of A and A is invertible, then 1/λ is an eigenvalue of the matrix $A^{-1}$.
5. If λ is an eigenvalue of A then kλ is an eigenvalue of kA, where k is any arbitrary
non-zero number.
6. If λ is an eigenvalue of A then $\lambda^k$ is an eigenvalue of $A^k$ for any positive integer k.
7. If λ is an eigenvalue of A then λ is also an eigenvalue of $A^T$.

8. The product of the eigenvalues (counting multiplicity) of a matrix equals the determinant of the matrix.
Problem 18 Obtain the eigenvalues and corresponding eigenvectors of the matrices from
Problem 12. Verify, in each case, that the product of the eigenvalues is indeed the determinant.

Solving Systems of Linear Equations

Suppose we are given a system of two equations to solve:
$$a_{11}x + a_{12}y = b_1, \qquad a_{21}x + a_{22}y = b_2. \qquad (1)$$
Notice that we can write (1) as Ax = b, where
$$A = \begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix}, \quad x = \begin{pmatrix}x\\y\end{pmatrix} \quad \text{and} \quad b = \begin{pmatrix}b_1\\b_2\end{pmatrix}.$$
Then the solution of (1) is given by
$$x = A^{-1}b,$$
provided A is non-singular.
If A is singular, then the system either has no solutions (i.e. is inconsistent) or may
have infinitely many solutions.
The system has infinitely many solutions if A is singular and b is a linear combination
of the columns of A.
If A is singular but b is not a linear combination of the columns of A, then the system
of equations does not have any solution (i.e. is inconsistent).
This is a general result that applies to any system of equations.
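For the 2×2 case the recipe $x = A^{-1}b$ can be sketched explicitly in Python, using the 2×2 inverse formula (the function name is ours):

```python
def solve_2x2(A, b):
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    assert det != 0, "A is singular: no unique solution"
    # x = A^{-1} b with A^{-1} = (1/det) [[a22, -a12], [-a21, a11]]
    return ((a22 * b[0] - a12 * b[1]) / det,
            (-a21 * b[0] + a11 * b[1]) / det)

print(solve_2x2([[3, 2], [1, 2]], (7, 3)))  # x = 2, y = 1/2
```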
Problem 19 Where possible, solve the systems of equations given below. If they cannot be
solved, determine whether they are inconsistent or not.
(a) 3x + 2y = 7
x + 2y = 3
(b) 3x + 2y = 7
6x + 4y = 14
(c) x + y + z = 7
y + z = 10
z = 2
(d) 3x − 2y + z = 7
4x + 5y − 3z = 10
11x + 8y − 5z = 25
(e) 3x + 4y + 11z = 0
2x + 5y + 8z = 0
x − 3y − 5z = 0
(f) 3x + y + 2z = 3
2x − 3y − z = 3
x + 2y + z = 4

4.1 Using Cramer's Rule to Solve Systems of Equations

Consider a system of n equations Ax = b as before. Assume that A is non-singular. Then
the solution of the system can be obtained by
$$x_i = \frac{|A_i|}{|A|}, \quad i = 1, 2, \ldots, n,$$
where $A_i$ is the matrix formed by replacing the i-th column of A by the column vector b.
Problem 20 Where possible, solve the systems of equations given in Problem 19 using
Cramer's rule.
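Cramer's rule translates directly into Python with exact fractions (a sketch; the helper names are ours):

```python
from fractions import Fraction

def det(A):
    if len(A) == 1:
        return A[0][0]
    # cofactor expansion along the first row
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]]) for j in range(len(A)))

def cramer(A, b):
    d = det(A)
    assert d != 0, "Cramer's rule needs a non-singular A"
    sol = []
    for i in range(len(A)):
        # A_i: replace the i-th column of A by b
        Ai = [row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(A)]
        sol.append(Fraction(det(Ai), d))  # x_i = |A_i| / |A|
    return sol

# Problem 19(c): x + y + z = 7, y + z = 10, z = 2
print(cramer([[1, 1, 1], [0, 1, 1], [0, 0, 1]], [7, 10, 2]))  # x = -3, y = 8, z = 2
```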
