MATH2601
Comments.
The elements of V are usually referred to as vectors and the field
elements as scalars. The properties are called the vector space
axioms.
The first four axioms show that (V, +) is an abelian group. Hence all
the results we know about groups apply to the addition of vectors.
Note carefully that the field contains a zero element 0, which is not
the same as the vector 0.
Question. Why did I put quotation marks around the word "associativity" in axiom 5?
It is not necessary that the set X be a vector space. (Why not?) Exercise. What are we really talking about if X = { 1, 2, . . . , n }?
What if X = { 1, . . . , m } × { 1, . . . , n }?
Note that, strictly speaking, a polynomial is not a function. (However, a polynomial function is a function. Confused yet?)
Q(√2) = { x + y√2 | x, y ∈ Q } ,
7. In the same way as we emphasised in discussing groups, the operations in a vector space need not be conventional addition and multiplication: as long as the axioms are satisfied, it's a vector space. For example, take V = R+, the set of positive real numbers, and define addition and scalar multiplication by
v ⊕ w = vw   and   λ ⊙ v = v^λ .
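If you like to check such axioms by machine, here is a quick numerical sanity check (random sampling, not a proof) of the two distributive axioms for this example; the names vadd and smul are just labels chosen here for the exotic operations.

```python
# Sanity-check the "exotic" vector space V = (0, infinity) over R, where
# "addition" is v (+) w = v*w and "scalar multiplication" is lam (.) v = v**lam.
import math
import random

def vadd(v, w):        # vector "addition" on V
    return v * w

def smul(lam, v):      # "scalar multiplication" on V
    return v ** lam

for _ in range(1000):
    v, w = random.uniform(0.1, 10.0), random.uniform(0.1, 10.0)
    lam, mu = random.uniform(-3.0, 3.0), random.uniform(-3.0, 3.0)
    # lam (.) (v (+) w) = (lam (.) v) (+) (lam (.) w):  (v*w)**lam = v**lam * w**lam
    assert math.isclose(smul(lam, vadd(v, w)),
                        vadd(smul(lam, v), smul(lam, w)), rel_tol=1e-9)
    # (lam + mu) (.) v = (lam (.) v) (+) (mu (.) v):  v**(lam+mu) = v**lam * v**mu
    assert math.isclose(smul(lam + mu, v),
                        vadd(smul(lam, v), smul(mu, v)), rel_tol=1e-9)
# Note that the "zero vector" of V is the number 1, since v (+) 1 = v.
```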
Lemma. Let V be a vector space over the field F, let v, w ∈ V and let λ be a scalar. Then
1. 0v = 0;
2. λ0 = 0;
3. (−1)v = −v;
4. if λv = λw and λ ≠ 0, then v = w;
5. if λv = 0, then either λ = 0 or v = 0.
Proof of the third result. By axiom 6, axiom 8, a field calculation and
part 1 of this lemma,
v + (−1)v = 1v + (−1)v = (1 + (−1))v = 0v = 0 .
By uniqueness of inverses in the group (V, +) we have (−1)v = −v.
Definition. Let V be a vector space and W a subset of V . If W is a
vector space with the same scalar field and the same operations as V ,
then W is said to be a subspace of V .
Lemma. Subspace lemma. Let V be a vector space over the field F,
and let W be a subset of V . Then W is a subspace of V if and only if
the following conditions hold.
1. W is not empty.
2. Closure under addition. For all v, w ∈ W we have v + w ∈ W .
3. Closure under scalar multiplication. For all v ∈ W and all λ ∈ F we have λv ∈ W .
Proof. Exercise. In particular, make sure you can explain clearly why
W satisfies axioms 3 and 4.
Comments.
An equivalent alternative for condition 1 is that W contain the zero
vector.
An equivalent alternative for the second and third conditions: if
v, w ∈ W and λ ∈ F, then λv + w ∈ W .
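As a quick illustration of the combined condition, here is a numerical check (again a sketch, not a proof) for the subspace W = { x ∈ R3 | x1 + x2 + x3 = 0 }:

```python
# Check that lam*v + w stays in W for random v, w in W,
# where W = {x in R^3 : x1 + x2 + x3 = 0}.
import numpy as np

rng = np.random.default_rng(1)

def random_W():                   # a random element of W
    x = rng.normal(size=3)
    return x - x.mean()           # subtracting the mean forces sum(x) = 0

for _ in range(100):
    v, w, lam = random_W(), random_W(), rng.normal()
    assert abs((lam * v + w).sum()) < 1e-9   # lam*v + w is again in W
```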
Examples.
1. The span of two nonzero vectors in R3 is a plane through the origin,
unless the vectors are scalar multiples of each other, in which case
it is a line through the origin.
2. For any n ∈ N we have span{ 1, t, t^2 , . . . , t^n } = Pn .
3. Any field can be considered as a vector space over itself. Consider the vector space Q of rational numbers over the field Q, and the set
S = { 1, 1/10, 1/100, 1/1000, . . . } ⊆ Q .
We have
π = 3(1) + 1(1/10) + 4(1/100) + 1(1/1000) + 5(1/10000) + ⋯ ;
but π is not an element of Q. How can we reconcile this with the preceding lemma?
Definition. A subset S of a vector space V is linearly independent if for all vectors v1 , v2 , . . . , vn in S (with n ≥ 1) the equation
λ1 v1 + λ2 v2 + ⋯ + λn vn = 0
has a unique solution for the scalars λ1 , λ2 , . . . , λn in F. A set which is not linearly independent is said to be linearly dependent.
Note that in the definition of linear independence, once again we are
considering specifically finite linear combinations.
Example. It is easy to see that the polynomials
p1 = 1 + 2t + 3t^2 ,   p2 = 4 + 5t + 6t^2 ,   p3 = 14 + 25t + 36t^2
are linearly dependent: indeed, p3 = 10p1 + p2 . Now consider instead
p1 = 1 + 2t + 3t^2 ,   p2 = 4 + 5t + 6t^2 ,   p3 = 1 + 2t^2 .
To decide whether these are linearly independent we must solve the equation
λ1 p1 + λ2 p2 + λ3 p3 = 0 ,
where the right hand side is the zero polynomial. Substituting the given polynomials,
λ1 (1 + 2t + 3t^2 ) + λ2 (4 + 5t + 6t^2 ) + λ3 (1 + 2t^2 ) = 0 + 0t + 0t^2 ;
equating coefficients of 1, t and t^2 gives
λ1 + 4λ2 + λ3 = 0 ,
2λ1 + 5λ2 = 0 ,
3λ1 + 6λ2 + 2λ3 = 0 .
Row reducing the augmented matrix (the coefficients of each pj forming a column),
( 1 4 1 | 0 )        ( 1  4  1 | 0 )
( 2 5 0 | 0 )   →    ( 0 −3 −2 | 0 )    (∗)
( 3 6 2 | 0 )        ( 0  0  3 | 0 )
As the left hand side of the echelon form has no non-leading columns, the system has a unique solution, namely λ1 = λ2 = λ3 = 0. Therefore p1 , p2 , p3 are linearly independent.
Comment. You are probably used to putting the polynomial coefficients directly into a matrix and starting at (∗) for this kind of problem. That's fine, as long as you understand the background and know where the matrix comes from. In particular, make sure you understand why the coefficients of each polynomial must always become a column, and not a row, of the matrix.
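For comparison, here is the same computation done in sympy; as emphasised above, the coefficients of each polynomial form a column of the matrix.

```python
# Independence of p1 = 1+2t+3t^2, p2 = 4+5t+6t^2, p3 = 1+2t^2 via row reduction.
import sympy as sp

M = sp.Matrix([[1, 4, 1],    # constant coefficients of p1, p2, p3
               [2, 5, 0],    # coefficients of t
               [3, 6, 2]])   # coefficients of t^2
rref, pivots = M.rref()
print(rref, pivots)          # three pivot columns: no non-leading columns
print(M.nullspace())         # [] : only the trivial solution, so independent
```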
Basis, dimension and coordinates.
Definition. A basis for a vector space V is a linearly independent subset
of V which spans V .
Examples. The idea is that we are looking for a minimal set of basic
vectors from which we can form, by means of linear combinations, all
vectors in V . (The word "basic" here is not a joke; it is the reason
for the term "basis"!) The following examples are intended to give you
some intuitive feel for how you can find a basis of a given vector space;
you should be able also to give a formal proof that each basis is actually
correct.
1. To obtain all vectors in Rn in terms of addition and (real) scalar
multiplication, we need to be able to specify independently all the
unknowns in the vector
v = (a1 , a2 , . . . , an ) .
A natural choice is B = { e1 , e2 , . . . , en }, where ek has a 1 in the kth place and 0s elsewhere; then v = a1 e1 + a2 e2 + ⋯ + an en , and it seems clear that fewer than this many vectors will not suffice.
Therefore B seems to be a basis for Rn .
2. Exactly the same basis works for Cn as a vector space over C. But
what if we want Cn as a vector space over R?
3. Consider
V = { (x1 , x2 , . . . , xn ) ∈ Rn | x1 + x2 + ⋯ + xn = 0 } ,
a subspace of Rn . In this case the vector v = (a1 , a2 , . . . , an ) in V is completely fixed once we choose n − 1 of the values a1 , a2 , . . . , an , say for example the first n − 1. We can write
v = a1 (1, 0, . . .) + a2 (0, 1, . . .) + ⋯ ;
remembering that the vectors we pick must be elements of V , a natural choice is
v = a1 (1, 0, . . . , 0, −1) + a2 (0, 1, . . . , 0, −1) + ⋯ + an−1 (0, 0, . . . , 1, −1) .
The n − 1 vectors on the right hand side form a basis for V .
4. Any 3 × 3 real symmetric matrix can be written as a linear combination
( a b c )       ( 1 0 0 )       ( 0 1 0 )       ( 0 0 1 )
( b d e )  =  a ( 0 0 0 )  +  b ( 1 0 0 )  +  c ( 0 0 0 )
( c e f )       ( 0 0 0 )       ( 0 0 0 )       ( 1 0 0 )
                ( 0 0 0 )       ( 0 0 0 )       ( 0 0 0 )
           +  d ( 0 1 0 )  +  e ( 0 0 1 )  +  f ( 0 0 0 ) .
                ( 0 0 0 )       ( 0 1 0 )       ( 0 0 1 )
Therefore the six matrices appearing on the right hand side form a basis
B = { ( 1 0 0 ; 0 0 0 ; 0 0 0 ) , ( 0 1 0 ; 1 0 0 ; 0 0 0 ) , ( 0 0 1 ; 0 0 0 ; 1 0 0 ) ,
      ( 0 0 0 ; 0 1 0 ; 0 0 0 ) , ( 0 0 0 ; 0 0 1 ; 0 1 0 ) , ( 0 0 0 ; 0 0 0 ; 0 0 1 ) }
for the vector space of 3 × 3 real symmetric matrices.
5. Find a basis for { X ∈ M2,2 | X ( 1 ; 2 ) = ( 0 ; 0 ) }.
6. Find a basis for V = { p ∈ P3 | p(5) = 0 }.
7. Find bases for Q(√2), and for Q(∛2), over Q.
8. It may be much harder to find bases for more complicated vector
spaces; indeed, for a space such as
C[0, 1] = { continuous functions f : [0, 1] → R }
it is more or less impossible to write down any specific basis. The
assertion that every vector space has a basis is equivalent to the
Axiom of Choice.
How can we find a basis for a vector space if our intuition does not help?
Essentially, there are two options: start "big" (begin with a spanning set, then remove vectors one by one until we have a set which still spans the space but is also independent), or start "small" (begin with an independent set, then add vectors one by one until we have a set which is still independent but also spans the space). Both of these procedures
will run into difficulties if infinitely many vectors are needed to span the
space, so we exclude this possibility in the following theorem.
Theorem. Let V be a vector space over F, and suppose that V has a
finite spanning set.
1. If S is a finite spanning set for V , then S contains a basis for V .
2. If T is a linearly independent subset of V , then T can be extended
to a basis of V : that is, there is a basis of V which contains T .
3. Any two bases of V have the same number of elements.
Proof: exercise.
Lemma. Exchange lemma. Let V be a vector space with a finite spanning set S0 , and let T be a linearly independent subset of V . Then T is finite, and there is a spanning set S for V such that T ⊆ S and |S| = |S0 |.
Proof. Suppose that |S0 | = n. We shall first prove that the lemma is true when T = { u1 , u2 , . . . , um } is a finite independent set with m ≤ n; we shall then show that T cannot have more than n elements (and in particular, T cannot be infinite).
The proof will be by induction on m. The case m = 0 is clear, since we
can choose S = S0 .
For the inductive step, suppose that the result is true for some specific
nonnegative integer m < n, and consider a linearly independent set
T = { u1 , u2 , . . . , um+1 }. Since { u1 , u2 , . . . , um } is also independent,
V has a spanning set
S1 = { u1 , u2 , . . . , um , vm+1 , . . . , vn } ,
Since S1 spans V , the vector um+1 is a linear combination of elements of S1 , and it follows that
S2 = { u1 , u2 , . . . , um , um+1 , vm+1 , . . . , vn }
is a linearly dependent spanning set for V . Working through the chain of subsets
{ u1 } , { u1 , u2 } , . . . ,
{ u1 , u2 , . . . , um , um+1 } ,
{ u1 , u2 , . . . , um , um+1 , vm+1 } , . . . ,
{ u1 , u2 , . . . , um , um+1 , vm+1 , . . . , vm+k } , . . . ,
{ u1 , u2 , . . . , um , um+1 , vm+1 , . . . , vn } ,
take v to be the first vector in the list which is a linear combination of the vectors before it. Since T is independent, v cannot be one of the uk , so v = vm+k for some k; and removing it does not change the span:
span(S2 − { v }) = span(S2 ) = V .
Thus S2 − { v } is a spanning set for V which contains { u1 , . . . , um+1 } and has n elements, and the induction continues.
Lemma. Let V be a vector space of (finite) dimension n. In V , an independent set containing n vectors is a basis; and a spanning set containing
n vectors is a basis.
Proof of the first claim. Let B be a basis for V and let T be an
independent set containing n vectors. By definition, B is a spanning
set, and by assumption it is finite; so by the Exchange Lemma, there is
a spanning set S such that T ⊆ S and |S| = |B| = n. Since T and S
have the same number of elements, T = S: thus T is independent, it is
a spanning set and it is a basis.
Let B = { b1 , b2 , . . . , bn } be a basis for a (finite-dimensional) vector space V , and let v ∈ V . Since B spans V , there exist scalars
x1 , x2 , . . . , xn such that
v = x1 b1 + x2 b2 + ⋯ + xn bn ;
since B is independent, these scalars are uniquely determined by v and
B. This prompts the following definition.
Definition. Let B = { b1 , b2 , . . . , bn } be an ordered basis for a vector space V over a field F, and let v ∈ V . The coordinate vector of v with respect to B is the vector (x1 , x2 , . . . , xn ) ∈ Fn such that
v = x1 b1 + x2 b2 + ⋯ + xn bn .
The coordinate vector of v with respect to B will often be denoted [v]B .
Comments and examples.
1. If B and v are given, finding [v]B is just a matter of solving linear equations. For instance, find the coordinates of (9, −1, 5) with respect to the ordered basis
B = { (1, 2, 1), (2, 5, 3), (5, 7, 3) } .
As usual, we solve by row reducing an augmented matrix, with the basis vectors entering as columns:
( 1 2 5 |  9 )        ( 1 2  5 |   9 )
( 2 5 7 | −1 )   →    ( 0 1 −3 | −19 )    (∗)
( 1 3 3 |  5 )        ( 0 0  1 |  15 )
and back substitution gives [v]B = (−118, 26, 15).
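The same computation can be done in sympy; as always the basis vectors enter as columns (data as above):

```python
# Coordinates of v = (9, -1, 5) with respect to B = {(1,2,1), (2,5,3), (5,7,3)}.
import sympy as sp

B_cols = sp.Matrix([[1, 2, 5],
                    [2, 5, 7],
                    [1, 3, 3]])     # basis vectors as columns
v = sp.Matrix([9, -1, 5])
coords = B_cols.solve(v)            # unique solution since B is a basis
print(coords.T)                     # [v]_B, here (-118, 26, 15)
assert B_cols * coords == v         # check the linear combination
```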
Examples.
1. In R3 , the sum of two different lines through the origin will be a
plane through the origin. Moreover, the lines will intersect only at
the origin, so it is a direct sum:
span{ v } ⊕ span{ w } = span{ v, w } .
If we have a line and a plane through the origin in R3 , where the
line is not contained in the plane, then any vector in R3 can be
found in a unique way by adding a vector from the line and a vector
from the plane: symbolically,
L ⊕ P = R3 .
Two different planes through the origin in R3 must intersect in a
line, so we do not have a direct sum. It is geometrically clear that
we can get any vector in R3 by adding a vector from each plane, so
P1 + P2 = R3 .
2. For any positive integer n, the vector space Mn,n of n × n matrices is the direct sum of its subspaces { symmetric n × n matrices } and { skew-symmetric n × n matrices }; a machine check of this decomposition appears after these examples.
3. In M2,2 (R), describe the sum and the intersection of the subspaces
W1 = { ( a b ; 0 0 ) | a, b ∈ R }   and   W2 = { ( a 0 ; b 0 ) | a, b ∈ R } .
4. Let n be a nonnegative integer and let k, m be nonnegative integers not exceeding n. Consider the vector space Pn , and write
Lk = { a0 + a1 t + ⋯ + an t^n | ak+1 = ak+2 = ⋯ = an = 0 } ,
Um = { a0 + a1 t + ⋯ + an t^n | a0 = a1 = ⋯ = am−1 = 0 } .
For instance, if n = 7 and m = 2 then Um = span{ t^2 , t^3 , t^4 , t^5 , t^6 , t^7 } . Note also why a sum of spans is the span of the union of the spanning sets: if
w = (λ1 a1 + ⋯ + μ1 b1 + ⋯) + (ν1 a1 + ⋯ + ρ1 c1 + ⋯) ,
then
w = (λ1 + ν1 )a1 + ⋯ + μ1 b1 + ⋯ + ρ1 c1 + ⋯ .
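Here is the decomposition behind example 2, checked numerically: every square matrix X splits as X = (X + X^T)/2 + (X − X^T)/2, with the two parts symmetric and skew-symmetric respectively (a sketch, using a random sample matrix).

```python
# Every square matrix splits uniquely into symmetric plus skew-symmetric parts.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 3))

S = (X + X.T) / 2                 # symmetric part
K = (X - X.T) / 2                 # skew-symmetric part
assert np.allclose(S, S.T)        # S is symmetric
assert np.allclose(K, -K.T)       # K is skew-symmetric
assert np.allclose(S + K, X)      # their sum recovers X
```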
For a linear transformation T : V → W we define the kernel and image of T by
ker(T ) = { v ∈ V | T (v) = 0 }   and   im(T ) = { T (v) | v ∈ V } .
Examples.
1. For any matrix A ∈ Mm,n (F), the function
T : Fn → Fm   given by   T (x) = Ax
is a linear transformation.
2. On suitable spaces of real functions we may define linear maps by formulas such as
T (f )(x) = f ′(x)   or   T (f )(x) = ∫ f (x) cos x dx   or   T (f )(x) = f ″(x) .
In each case we have to take some care in choosing the domain and codomain of T . For example, not all functions f are differentiable; and even for those which are, f ′ need not be differentiable.
3. Define T : P2 → R by T (p) = ∫_{−1}^{1} p(t) dt. Then
ker T = { p ∈ P2 | ∫_{−1}^{1} p(t) dt = 0 }
      = { a + bt + ct^2 | 2a + (2/3)c = 0 }
      = { a + bt − 3at^2 | a, b ∈ R }
      = span{ t, 1 − 3t^2 } .
The fact that ker T is the span of something (anything!) shows that ker T is a subspace of V : we shall show later that this is always true. It is easy to see (how?) that the two polynomials in the spanning set are independent and therefore form a basis for ker T . To find the image of T we note that any real number x is in the image, because x = T (p) for p = (1/2)x + 0t + 0t^2 ; therefore im(T ) = R.
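A sympy sketch of the kernel computation above (symbol names follow the example):

```python
# ker T for T(p) = integral of p over [-1, 1], with p = a + b*t + c*t^2 in P2.
import sympy as sp

t, a, b, c = sp.symbols('t a b c')
p = a + b*t + c*t**2
Tp = sp.integrate(p, (t, -1, 1))
print(sp.expand(Tp))               # 2*a + 2*c/3
print(sp.solve(sp.Eq(Tp, 0), c))   # c = -3*a, so ker T = { a + b*t - 3*a*t^2 }
```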
For a further example, fix an ordered basis B = { b1 , . . . , bn } of V and consider the coordinate map v ↦ [v]B . Write
v = v1 b1 + ⋯ + vn bn   and   w = w1 b1 + ⋯ + wn bn ,
so that
v + w = (v1 + w1 )b1 + ⋯ + (vn + wn )bn .
Hence
[v + w]B = (v1 + w1 , . . . , vn + wn )
         = (v1 , . . . , vn ) + (w1 , . . . , wn )
         = [v]B + [w]B ,
and similarly [λv]B = λ[v]B : the coordinate map is linear.
Lemma. Let T : V → W be a linear transformation. Then
1. T (0) = 0;
2. T (−v) = −T (v) for all v ∈ V ;
3. ker T is a subspace of V ;
4. if U is a subspace of V , then the set T (U ), defined by
T (U ) = { T (u) | u ∈ U } ,
is a subspace of W ; in particular, im(T ) is a subspace of W .
Thus T (U ) is closed under scalar multiplication. Hence T (U ) is a subspace of W . Now if U has a finite spanning set { u1 , u2 , . . . , un }, then every vector w in T (U ) can be written
w = T (u) = T (λ1 u1 + λ2 u2 + ⋯ + λn un )
          = λ1 T (u1 ) + λ2 T (u2 ) + ⋯ + λn T (un ) ,
so { T (u1 ), T (u2 ), . . . , T (un ) } is a finite spanning set for T (U ).
Theorem. Rank-Nullity Theorem. Let V be a finite-dimensional vector space and T : V → W a linear transformation. Then
rank(T ) + nullity(T ) = dim V .
Proof. Let { u1 , . . . , un } be a basis for ker(T ) and { w1 , . . . , wr } a basis for im(T ); choose vectors vk in V with T (vk ) = wk . We claim that B = { u1 , . . . , un , v1 , . . . , vr } is a basis for V . First suppose that
λ1 u1 + ⋯ + λn un + μ1 v1 + ⋯ + μr vr = 0 .   (∗)
Applying T to both sides gives
μ1 w1 + ⋯ + μr wr = 0 ,
because each T (uk ) is zero; since the wk are independent, each μk is zero. Substituting back into (∗) and recalling that the uk are also independent shows that each λk is also zero. Hence B is a linearly independent set.
Now let v ∈ V . Then T (v) is in im(T ), so there are scalars αk such that T (v) = α1 w1 + ⋯ + αr wr ; consider the vector
v − (α1 v1 + ⋯ + αr vr ) .
We have
T (v − (α1 v1 + ⋯ + αr vr )) = T (v) − (α1 w1 + ⋯ + αr wr ) = 0 ;
so the above vector is in the kernel of T ; so
v − (α1 v1 + ⋯ + αr vr ) = β1 u1 + ⋯ + βn un
for some scalars βk . Therefore
v = β1 u1 + ⋯ + βn un + α1 v1 + ⋯ + αr vr ∈ span(B) ;
this shows that B is a spanning set and hence also a basis for V . Finally, B contains r + n vectors; so
dim V = r + n = rank(T ) + nullity(T )
as required.
* It is probably clear. But try to prove it carefully!
Examples.
1. Another look at example 3 on page 21. Having found a basis for ker T as in the previous working, we can say that
rank T = dim P2 − nullity T = 3 − 2 = 1 ;
so im T is a 1-dimensional subspace of R; so im T = R.
2. Consider the mapping T : R3 R3 which sends a vector to its
projection onto a certain plane P through the origin. It is geometrically clear that the image of T is P and the kernel of T is the line
through the origin perpendicular to P . So we have rank T = 2 and
nullity T = 1, in accordance with the Rank-Nullity Theorem.
Let A be an m × n matrix over the field F, and consider the linear transformation T : Fn → Fm given by T (x) = Ax. The kernel, image,
nullity and rank of A are by definition the same as those of T . Observe
that if A has columns c1 , . . . , cn then
im(A) = { Ax | x ∈ Fn }
       = { x1 c1 + ⋯ + xn cn | x1 , . . . , xn ∈ F }
       = span{ c1 , . . . , cn }
       = span{ columns of A } .
This space is called the column space of A; we have just proved that
CS(A) = im(A). Similarly, the row space of A, meaning the span of
the rows of A, satisfies
RS(A) = CS(A^T ) = im(A^T ) .
It is an immediate corollary of the Rank-Nullity Theorem that
rank(A) + nullity(A) = n ,
where n is the number of columns of A.
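This corollary is easy to test by machine; a small sympy sketch with an arbitrary 2 × 3 example:

```python
# rank(A) + nullity(A) = n, where n is the number of columns of A.
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [4, 3, 2]])
rank = A.rank()
nullity = len(A.nullspace())        # number of basis vectors of ker(A)
assert rank + nullity == A.cols     # here 2 + 1 == 3
print(rank, nullity)
```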
Matrices of linear transformations. There is a very close and important relationship between linear transformations and matrix multiplication. For example, consider the linear mapping T : R3 → R2 defined by
T (x1 , x2 , x3 ) = (7x1 + x2 , x2 + 4x3 ) .
Writing vectors as columns, this is exactly
T (x) = ( 7 1 0 ; 0 1 4 ) x ,
so T is given by multiplication by a 2 × 3 matrix.
Similarly, consider the map T on M2,2 (R) given by T (X) = 3X − X^T . Then
T ( a b ; c d ) = 3 ( a b ; c d ) − ( a c ; b d ) = ( 2a  3b − c ; 3c − b  2d ) ,
or in terms of coordinates with respect to the standard ordered basis of M2,2 (R),
( a ; b ; c ; d ) ↦ ( 2a ; 3b − c ; 3c − b ; 2d ) ,
which is multiplication by the matrix
( 2  0  0  0 )
( 0  3 −1  0 )
( 0 −1  3  0 ) .
( 0  0  0  2 )
There is no particular need to do the preceding problems using standard bases (though in many cases, but not all, to do so will simplify the working). For example, let's consider again the differentiation map from P3 to P2 . We'll continue to use the standard ordered basis S = { 1, t, t^2 } for P2 , but shall use a different ordered basis
B = { 3 + t, 5t^2 − 2t^3 , 4 − 7t + t^3 , 1 + t^2 }
for P3 . (Exercise. Check that this truly is a basis!) Writing the differentiation mapping in terms of these bases we have
T ( a(3 + t) + b(5t^2 − 2t^3 ) + c(4 − 7t + t^3 ) + d(1 + t^2 ) )
= T ( (3a + 4c + d) + (a − 7c)t + (5b + d)t^2 + (−2b + c)t^3 )
= (a − 7c) + (10b + 2d)t + (−6b + 3c)t^2 ;
so, in terms of coordinate vectors,
( a ; b ; c ; d ) ↦ ( 1 0 −7 0 ; 0 10 0 2 ; 0 −6 3 0 ) ( a ; b ; c ; d ) ,
and the 3 × 4 matrix displayed is the matrix of T with respect to B and S.
Suppose that instead of S we use the ordered basis
C = { 1 − 4t + 3t^2 , 2t − t^2 , t^2 }
for P2 ; then we need to write the right hand side in terms of the basis C. Setting
(a − 7c) + (10b + 2d)t + (−6b + 3c)t^2 = μ1 (1 − 4t + 3t^2 ) + μ2 (2t − t^2 ) + μ3 (t^2 )
and solving gives
μ1 = a − 7c ,   μ2 = 2a + 5b − 14c + d ,   μ3 = −a − b + 10c + d .
Now we can proceed as above to write the definition of T in terms of
coordinate vectors (this time, with respect to B in the domain and C in
the codomain) and mimic T by matrix multiplication. We have
We have
[T (p)]C = ( a − 7c ; 2a + 5b − 14c + d ; −a − b + 10c + d )
         = (  1  0  −7 0 )
           (  2  5 −14 1 ) ( a ; b ; c ; d ) ,
           ( −1 −1  10 1 )
and therefore the matrix of T with respect to the bases B in the domain and C in the codomain is
(  1  0  −7 0 )
(  2  5 −14 1 ) .
( −1 −1  10 1 )
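The coordinates μ1, μ2, μ3 (and hence this matrix) can be verified mechanically; a sympy sketch, with symbol names chosen here for convenience:

```python
# Differentiate a generic element of P3 written in the basis B, then
# re-expand the result in the basis C of P2.
import sympy as sp

t, a, b, c, d = sp.symbols('t a b c d')
B = [3 + t, 5*t**2 - 2*t**3, 4 - 7*t + t**3, 1 + t**2]
C = [1 - 4*t + 3*t**2, 2*t - t**2, t**2]

p = a*B[0] + b*B[1] + c*B[2] + d*B[3]
dp = sp.expand(sp.diff(p, t))

m1, m2, m3 = sp.symbols('m1 m2 m3')
residual = dp - (m1*C[0] + m2*C[1] + m3*C[2])
eqs = sp.Poly(residual, t).all_coeffs()        # all coefficients must vanish
print(sp.solve(eqs, [m1, m2, m3]))
# {m1: a - 7*c, m2: 2*a + 5*b - 14*c + d, m3: -a - b + 10*c + d}
```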
To see where the columns of A come from, note that [b1 ]B = (1; 0; . . . ; 0), so
[T (b1 )]C = A[b1 ]B = A ( 1 ; 0 ; . . . ; 0 ) = the first column of A .
In other words, to find the first column of A we take the first basis vector
for V , calculate T of this vector, and write down its coordinate vector
with respect to the basis of W . The other columns of A are then found
by a similar method.
Theorem. Matrix of a linear transformation. Let V and W be vector
spaces with bases B = { b1 , . . . , bn } and C = { c1 , . . . , cm } respectively,
and let T be a linear transformation from V to W . Let A be the m × n
matrix whose jth column is the coordinate vector with respect to C of
T (bj ). Then A is the matrix of T with respect to bases B and C.
Proof. For any vector v = v1 b1 + ⋯ + vn bn in V we have
A[v]B = A ( v1 ; . . . ; vn ) = v1 [T (b1 )]C + ⋯ + vn [T (bn )]C ,
since a matrix times a column vector is the corresponding linear combination of its columns.
But recalling (example 6, page 21) that the function which maps vectors to coordinates is linear, we have
A[v]B = [v1 T (b1 ) + ⋯ + vn T (bn )]C = [T (v)]C ,
so A is indeed the matrix of T with respect to B and C.
Examples.
1. Reworking the differentiation map from P3 to P2 : take the basis
B = { 3 + t, 5t^2 − 2t^3 , 4 − 7t + t^3 , 1 + t^2 }
for P3 and the standard basis S = { 1, t, t^2 } for P2 . Then
T (3 + t) = d/dt (3 + t) = 1 ,
T (5t^2 − 2t^3 ) = d/dt (5t^2 − 2t^3 ) = 10t − 6t^2 ,
T (4 − 7t + t^3 ) = d/dt (4 − 7t + t^3 ) = −7 + 3t^2 ,
T (1 + t^2 ) = d/dt (1 + t^2 ) = 2t ,
and the coordinates of the right hand sides with respect to S are
(  1 )   (  0 )   ( −7 )       ( 0 )
(  0 ) , ( 10 ) , (  0 )  and  ( 2 ) .
(  0 )   ( −6 )   (  3 )       ( 0 )
Therefore
A = ( 1  0 −7 0 )
    ( 0 10  0 2 ) ,
    ( 0 −6  3 0 )
and this, as we found on page 27, is the matrix of T with respect
to bases B for P3 and S for P2 .
2. A mapping given by the formula T (x) = x for all x is called an identity mapping as it leaves every vector unchanged. Consider the identity mapping on R3 ; we shall find its matrix with respect to the basis B = { (1, 4, −1), (0, 2, 5), (3, −1, 1) } in the domain, and the standard basis in the codomain. Using the columns method,
T ( 1 ; 4 ; −1 ) = ( 1 ; 4 ; −1 ) ,   T ( 0 ; 2 ; 5 ) = ( 0 ; 2 ; 5 ) ,   T ( 3 ; −1 ; 1 ) = ( 3 ; −1 ; 1 ) ,
and the coordinate vectors of the right hand sides with respect to the standard basis are just these same columns; so the matrix of T is
(  1 0  3 )
(  4 2 −1 ) .
( −1 5  1 )
This illustrates the next result, which will lead us to a third and
sometimes even more convenient method for finding the matrix of
a linear transformation.
Theorem. Let T be the identity map on a vector space V , and let B =
{ b1 , b2 , . . . , bn } be a basis for V . If A is the matrix of T with respect
to the basis B in the domain and the standard basis in the codomain,
then the columns of A are b1 , b2 , . . . , bn , written as coordinate vectors
with respect to S.
To apply this result we need to consider the composition of linear
mappings. Let T1 : V1 V2 and T2 : V2 V3 be linear transformations;
then the composition T3 , a function from V1 to V3 defined by
T3 (v) = T2 (T1 (v))
is also linear. (Exercise. Prove it!) Now suppose that we have bases
B1 , B2 and B3 for V1 , V2 and V3 ; let the matrices of T1 and T2 with respect to these bases be A1 and A2 respectively. Then the matrix of the composition T3 with respect to B1 and B3 is the product A2 A1 .
We can record the relationship in a diagram:
V w.r.t. S  ---A--->  W w.r.t. S
    ↑                     ↑
    P                     Q
V w.r.t. B  ---M--->  W w.r.t. C
Here the vertical arrows are identity maps written with respect to the indicated bases, with matrices P and Q, and M is the matrix of T with respect to B and C.
Taking T : R3 → R2 to be the linear map with standard matrix A below, and choosing the ordered bases
B = { (1, 0, 1), (1, 1, 3), (1, 1, 0) }   and   C = { (1, 2), (3, 1) }
for R3 and R2 , we can write down the matrices which correspond to the left, top and right sides of the diagram:
P = ( 1 1 1 ; 0 1 1 ; 1 3 0 ) ,   A = ( 1 2 3 ; 4 3 2 ) ,   Q = ( 1 3 ; 2 1 ) .
Following the diagram around, QM = AP , and hence
M = Q⁻¹ A P = −(1/5) ( 1 −3 ; −2 1 ) ( 1 2 3 ; 4 3 2 ) ( 1 1 1 ; 0 1 1 ; 1 3 0 )
  = (1/5) ( 11 7 3 ; −2 1 4 ) ( 1 1 1 ; 0 1 1 ; 1 3 0 )
  = (1/5) ( 14 27 18 ; 2 11 −1 ) .
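The matrix arithmetic is easily checked; a sympy sketch of M = Q⁻¹AP with the data above:

```python
# M = Q^(-1) A P : the matrix of T with respect to B and C.
import sympy as sp

P = sp.Matrix([[1, 1, 1], [0, 1, 1], [1, 3, 0]])  # basis B as columns
A = sp.Matrix([[1, 2, 3], [4, 3, 2]])             # standard matrix of T
Q = sp.Matrix([[1, 3], [2, 1]])                   # basis C as columns

M = Q.inv() * A * P
print(M)    # Matrix([[14/5, 27/5, 18/5], [2/5, 11/5, -1/5]])
```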
2. The vector spaces Fn and Mn,1 (F) and M1,n (F) are all isomorphic: this is why we can treat vectors of n components as if they were n × 1 matrices, and why it doesn't matter (out of context) whether we regard them as row vectors or column vectors.
3. Similarly, C (as a vector space over R) is isomorphic to R2 and
to the set of geometric vectors (arrows) in 2 dimensions: this is
why we can interpret addition of complex numbers as addition of
vectors.
4. Let V be a finite-dimensional vector space over F; then V is isomorphic to Fn , where n = dim V . In fact, we already know that
for any basis B of V , the map S : V Fn given by S(v) = [v]B is
linear, and it follows from remarks on page 13 that it is bijective.
We denote by L(V, W ) the set of all linear transformations from a
vector space V over a field F to a space W over the same field. You
may check that L(V, W ) is a vector space under the usual operations
of addition and scalar multiplication for functions. If dim V = n
and dim W = m, then L(V, W ) is isomorphic to Mm,n (F). In fact,
if we choose bases B and C for V and W , and if SC : W → Fm is the
vectors-to-coordinates map for W with respect to the basis C,
as in the previous paragraph, then isomorphisms in both directions
are given, in one direction, by T ↦ A, the matrix of T with respect to B and C, and, in the other, by A ↦ T , where T is the function which maps v to SC⁻¹ (A[v]B ).
Note that this result is certainly not true in general for nonlinear functions: consider, for example, the functions from R to R given by
f (x) = x^2   and   g(x) = x + 1 .
Theorem. Suppose that dim V = dim W = n, let B = { b1 , . . . , bn } and C be ordered bases for V and W respectively, and let A be the matrix of the linear transformation T : V → W with respect to B and C, so that
[T (v)]C = A[v]B
for all v ∈ V . Then T is invertible if and only if A is invertible.
Proof. First, suppose that A is invertible. We define a function S from W to V : for any w ∈ W , let
S(w) = x1 b1 + ⋯ + xn bn   where x = A⁻¹ [w]C .   (∗)
Then for any v ∈ V we have, by (∗),
S(T (v)) = x1 b1 + ⋯ + xn bn   where x = A⁻¹ [T (v)]C ;
hence
[S(T (v))]B = x = A⁻¹ A[v]B = [v]B ,
and so S(T (v)) = v. A similar calculation shows that T (S(w)) = w for all w ∈ W . Thus T is invertible (and S is its inverse).
Conversely, suppose that T is invertible; we know that T⁻¹ is a linear map from W to V and therefore has a matrix A′ with respect to C and B. For any x ∈ Fn , take v = x1 b1 + ⋯ + xn bn ; then x = [v]B and
A′ A x = A′ [T (v)]C = [T⁻¹ (T (v))]B = x ,
so A′ A = I and A is invertible.
Example. The same ideas apply to calculus operations on polynomials. For the differentiation map D we have, for instance, D(t^2 ) = 2t and D(t^3 ) = 3t^2 ; writing such images as coordinate vectors with respect to a chosen basis C of the codomain, column by column, gives the matrix of D with respect to the chosen bases B and C. The integration map
(S p)(t) = ∫0^t p(s) ds
may be treated in the same way.
Suppose that A2 = P A1 P⁻¹ , let B1 = { b1 , . . . , bn } be a basis for ker(A1 ), and let B2 = { P b1 , . . . , P bn }. If w ∈ ker(A2 ), then
P A1 P⁻¹ w = 0 ,
so
A1 P⁻¹ w = 0 ,
that is, P⁻¹ w ∈ ker(A1 ). Hence
P⁻¹ w = λ1 b1 + ⋯ + λn bn
for some scalars λk , and so
w = λ1 (P b1 ) + ⋯ + λn (P bn ) ∈ span(B2 ) .
Therefore B2 spans ker(A2 ). Exactly as before, B2 is a linearly independent set; hence B2 is a basis for ker(A2 ), and nullity(A1 ) = nullity(A2 ).
Comments.
The matrices
A1 = ( 1 2 3 ; 0 4 5 ; 0 0 6 )   and   A2 = ( 1 2 3 ; 0 4 5 ; 0 0 0 )
have different nullities (and ranks), and therefore are not similar.
Note that when A1 , A2 are similar, it is not in general true that ker(A1 ) = ker(A2 ): the dimensions are the same, but the spaces themselves are usually different. In other words, although the nullity is a similarity invariant, the kernel is not. In fact, it is not hard to see from the preceding proof that if A1 = P⁻¹ A2 P , then
ker(A2 ) = { P v | v ∈ ker(A1 ) } .
Corresponding remarks apply to the images of similar matrices.
Other examples of similarity invariants are the trace, determinant
and eigenvalues. These will be left as exercises or treated later in
the course. Note, however, that we cannot use these invariants to
prove that two matrices are similar. You may check that
A1 = ( 3 1 ; 0 3 )   and   A2 = ( 3 0 ; 0 3 )
have the same trace, determinant and eigenvalues; but they are not similar, because the only matrix similar to A2 = 3I is P⁻¹ (3I)P = 3I itself.
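A sympy sketch of this check; here P is a generic symbolic 2 × 2 matrix, assumed invertible:

```python
# Same trace, determinant and eigenvalues -- yet not similar.
import sympy as sp

A1 = sp.Matrix([[3, 1], [0, 3]])
A2 = sp.Matrix([[3, 0], [0, 3]])
assert A1.trace() == A2.trace() and A1.det() == A2.det()
assert A1.eigenvals() == A2.eigenvals()          # {3: 2} in both cases

p, q, r, s = sp.symbols('p q r s')
P = sp.Matrix([[p, q], [r, s]])                  # generic invertible matrix
print(sp.simplify(P.inv() * A2 * P))             # always 3*I, never A1
```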