The Wronskian of $f_1(x), f_2(x), \ldots, f_n(x)$ is defined by
$$W(x) = \det \begin{pmatrix} f_1(x) & f_2(x) & \cdots & f_n(x) \\ f_1'(x) & f_2'(x) & \cdots & f_n'(x) \\ \vdots & \vdots & & \vdots \\ f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{pmatrix}.$$
Theorem 1.9. (Wronskian test for linear independence) Let $f_1(x), f_2(x), \ldots, f_n(x)$ be real-valued functions which have $(n-1)$ continuous derivatives on all of $\mathbb{R}$. If the Wronskian of these functions is not identically zero on $\mathbb{R}$, then the functions form a linearly independent set in $C^{(n-1)}(\mathbb{R})$.
Example. Show that $f_1(x) = \sin^2 2x$, $f_2(x) = \cos^2 2x$, $f_3(x) = \cos 4x$ are linearly dependent in $C^2(\mathbb{R})$.
Solution: One approach would be to examine the Wronskian of our functions. A simple computation shows that the Wronskian is identically 0, hence our functions are linearly dependent. Alternatively, since $\cos 4x = \cos^2 2x - \sin^2 2x$, it follows that $f_1(x) - f_2(x) + f_3(x) = 0$, and our functions are linearly dependent.
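For readers who want to verify the Wronskian computation, here is a small symbolic sketch (assuming SymPy is available; `wronskian` is its built-in helper):

```python
# Symbolic sanity check (a sketch, assuming SymPy is installed) that the
# Wronskian of f1 = sin^2(2x), f2 = cos^2(2x), f3 = cos(4x) is identically 0.
from sympy import symbols, sin, cos, wronskian, simplify

x = symbols('x')
f1, f2, f3 = sin(2*x)**2, cos(2*x)**2, cos(4*x)

W = wronskian([f1, f2, f3], x)  # det of the 3x3 matrix of derivatives
print(simplify(W))              # 0, consistent with linear dependence
```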
2. Linear Transformations
2.1. Definition.
Definition 2.1. Let $V, W$ be real vector spaces. A transformation $T: V \to W$ is a linear transformation if, for any pair $\alpha, \beta \in \mathbb{R}$ and $u, v \in V$, we have $T(\alpha u + \beta v) = \alpha T(u) + \beta T(v)$.
Illustration. Let $V = C^1(\mathbb{R})$ denote the continuously differentiable real-valued functions defined on $\mathbb{R}$, and $W = C^0(\mathbb{R})$ denote the continuous real-valued functions on $\mathbb{R}$. The derivative operator $\frac{d}{dx}: V \to W$, defined by $\frac{d}{dx}(f) = \frac{df}{dx} \in W$ for $f \in V$, is linear, since $\frac{d}{dx}(\alpha f + \beta g) = \alpha \frac{df}{dx} + \beta \frac{dg}{dx}$.
Example. Find the matrix representation $A$ of the linear transformation $T: \mathbb{R}^2 \to \mathbb{R}^2$, where $T$ rotates each vector $x \in \mathbb{R}^2$ with basepoint at the origin clockwise by an angle $\theta$.
Solution: We must find the images $T(e_1)$ and $T(e_2)$ of the standard basis under our transformation. It is easy to check that $T(e_1) = T((1, 0)^T) = (\cos\theta, -\sin\theta)^T$, while $T(e_2) = T((0, 1)^T) = (\sin\theta, \cos\theta)^T$. Hence our matrix is
$$A = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}.$$
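A quick numerical sketch of this rotation matrix acting on the standard basis (the angle is a sample value chosen only for the demonstration):

```python
# Sketch: the clockwise rotation matrix acting on the standard basis.
import numpy as np

theta = np.pi / 6  # sample angle
A = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(A @ e1)  # (cos t, -sin t): e1 rotated clockwise by theta
print(A @ e2)  # (sin t,  cos t): e2 rotated clockwise by theta
```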
2.2. Isomorphism.
Definition 2.2. A linear transformation $T: V \to W$ is called an isomorphism if it is one-to-one and onto, and we say a vector space $V$ is isomorphic to $W$ if there is an isomorphism between $V$ and $W$.
Theorem 2.3. Every real $n$-dimensional vector space is isomorphic to $\mathbb{R}^n$.
Example. If $V$ is an $n$-dimensional vector space and the transformation $T: V \to \mathbb{R}^n$ is an isomorphism, show there exists a unique inner product $\langle \cdot, \cdot \rangle$ on $V$ such that $T(u) \cdot T(v) = \langle u, v \rangle$, where $T(u) \cdot T(v)$ denotes the Euclidean dot product on $\mathbb{R}^n$.
Solution: We show that $\langle u, v \rangle := T(u) \cdot T(v)$ defines an inner product.
$\langle u, v \rangle = T(u) \cdot T(v) = T(v) \cdot T(u) = \langle v, u \rangle$.
$\langle u + v, w \rangle = T(u + v) \cdot T(w) = (T(u) + T(v)) \cdot T(w) = T(u) \cdot T(w) + T(v) \cdot T(w) = \langle u, w \rangle + \langle v, w \rangle$.
$\langle ku, v \rangle = T(ku) \cdot T(v) = k\, T(u) \cdot T(v) = k \langle u, v \rangle$.
Since $T$ is an isomorphism, $\langle v, v \rangle = \|T(v)\|^2 = 0$ if and only if $v = 0$.
So $\langle u, v \rangle$ satisfies all the properties of an inner product. Uniqueness is immediate: the required identity $\langle u, v \rangle = T(u) \cdot T(v)$ prescribes the value of $\langle u, v \rangle$ for every pair $u, v \in V$.
2.3. Kernel and range, one-to-one and onto. Let $T: V \to W$ be a linear transformation. Then:
Definition 2.4. The kernel of $T$ is the set $\ker(T) := \{x \in V \mid T(x) = 0\}$.
Definition 2.5. The range of $T$ is the set $\{y \in W \mid \exists\, x \in V \text{ such that } y = T(x)\}$.
Definition 2.6. $T$ is onto if its range is all of $W$, and one-to-one if $T$ maps distinct vectors in $V$ to distinct vectors in $W$. We say $T$ is an injection if it is one-to-one, and a surjection if it is onto.
Example. Let $T: V \to W$ be a linear transformation. Show that $T$ is one-to-one if and only if $\ker(T) = \{0\}$.
Solution: Suppose first $T$ is one-to-one. Since $T$ is linear, $T(0) = 0$. Since $T$ is one-to-one, $0$ is the only vector $x$ for which $T(x) = 0$, so $\ker(T) = \{0\}$.
Next suppose $\ker(T) = \{0\}$, and choose $x_1, x_2 \in V$ such that $x_1 \neq x_2$. Then $x_1 - x_2$ is not in the kernel of $T$, so that $T(x_1) - T(x_2) = T(x_1 - x_2) \neq 0$, and $T$ is one-to-one.
3. Matrix Algebra
Theorem 3.1. Let $T: \mathbb{R}^n \to \mathbb{R}^m$ be a linear transformation, and let $\{e_1, \ldots, e_n\}$ denote a basis for $\mathbb{R}^n$. Then given any $x \in \mathbb{R}^n$, we can express $T(x)$ as a matrix transformation $T(x) = Ax$, where $A$ is the $m \times n$ matrix whose $i$-th column is $T(e_i)$.
Let us fix some notation. We denote the entry in the $i$-th row and $j$-th column of $A$ by the lowercase $a_{ij}$.
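The column-by-column recipe of Theorem 3.1 translates directly into code. In this sketch the map `T` is a hypothetical example, not one from the text:

```python
# Sketch of Theorem 3.1: assemble the matrix of a linear map T: R^n -> R^m
# column by column from the images T(e_i) of the standard basis vectors.
import numpy as np

def matrix_of(T, n):
    """Return the matrix whose i-th column is T(e_i)."""
    I = np.eye(n)
    return np.column_stack([T(I[:, i]) for i in range(n)])

# Hypothetical example map: T(x1, x2) = (x1 + 2*x2, 3*x1, -x2)
T = lambda x: np.array([x[0] + 2 * x[1], 3 * x[0], -x[1]])
print(matrix_of(T, 2))  # [[1, 2], [3, 0], [0, -1]]
```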
$$P_{B \to S} = \big(\, [v_1]_S \;\big|\; [v_2]_S \;\big|\; [v_3]_S \,\big) = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 5 & 3 \\ 1 & 0 & 8 \end{pmatrix}.$$
We can immediately find $P_{S \to B}$ by noting that it must be the inverse of $P_{B \to S}$ (why?).
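A quick numerical check of this observation, using the matrix $P_{B \to S}$ above:

```python
# Numerical check (sketch): P_{S->B} is the inverse of P_{B->S} above.
import numpy as np

P_BS = np.array([[1.0, 2.0, 3.0],
                 [2.0, 5.0, 3.0],
                 [1.0, 0.0, 8.0]])
P_SB = np.linalg.inv(P_BS)
print(np.round(P_BS @ P_SB, 10))  # the 3x3 identity matrix
```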
Example. Determine whether the matrix $A = \begin{pmatrix} -1 & 4 & -2 \\ -3 & 4 & 0 \\ -3 & 1 & 3 \end{pmatrix}$ is diagonalizable. If so, find a matrix $P$ that diagonalizes $A$.
Solution: You can check that the characteristic polynomial of $A$ is $p(\lambda) = (\lambda - 1)(\lambda - 2)(\lambda - 3)$, so that $A$ has three distinct eigenvalues. Since eigenvectors corresponding to distinct eigenvalues are linearly independent (check!), $A$ has 3 linearly independent eigenvectors and we know $A$ is diagonalizable.
To determine $P$, we must find eigenvectors corresponding to the eigenvalues $\lambda = 1, 2, 3$. The reader can check that these eigenvectors are $v_1 = (1, 1, 1)^T$, $v_2 = (2, 3, 3)^T$, and $v_3 = (1, 3, 4)^T$. Hence one choice of the matrix $P$ is given by
$$P = \begin{pmatrix} 1 & 2 & 1 \\ 1 & 3 & 3 \\ 1 & 3 & 4 \end{pmatrix}.$$
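As a sanity check, one can confirm numerically that $P^{-1}AP$ is diagonal; a minimal NumPy sketch, using the matrices found above:

```python
# Sketch: numerical confirmation that P^{-1} A P = diag(1, 2, 3).
import numpy as np

A = np.array([[-1.0, 4.0, -2.0],
              [-3.0, 4.0,  0.0],
              [-3.0, 1.0,  3.0]])
P = np.array([[1.0, 2.0, 1.0],
              [1.0, 3.0, 3.0],
              [1.0, 3.0, 4.0]])

print(np.round(np.linalg.inv(P) @ A @ P, 10))  # diag(1, 2, 3)
```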
3.7. Orthogonal diagonalizability.
Definition 3.16. A square matrix $A$ is orthogonally diagonalizable if there exists an orthogonal matrix $P$ for which $P^T A P$ is a diagonal matrix.
Theorem 3.17. A matrix is orthogonally diagonalizable if and only if it is symmetric.
Example. Prove that if A is a symmetric matrix, then eigenvectors from different
eigenspaces are orthogonal.
Solution. Let $v_1$ and $v_2$ be eigenvectors corresponding to distinct eigenvalues $\lambda_1, \lambda_2$. Consider $\lambda_1 v_1 \cdot v_2 = (\lambda_1 v_1)^T v_2 = (A v_1)^T v_2 = v_1^T A^T v_2$. Since $A$ is symmetric, $v_1^T A^T v_2 = v_1^T A v_2 = v_1^T \lambda_2 v_2 = \lambda_2\, v_1 \cdot v_2$. This implies $(\lambda_1 - \lambda_2)\, v_1 \cdot v_2 = 0$, which in turn tells us $v_1 \cdot v_2 = 0$.
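A small numerical illustration of this fact, on a sample symmetric matrix of our own choosing:

```python
# Sketch: eigenvectors of a symmetric matrix from distinct eigenspaces
# are orthogonal. The matrix here is a sample chosen for the demonstration.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, vecs = np.linalg.eig(A)
print(vals)                     # distinct eigenvalues 1 and 3 (in some order)
print(vecs[:, 0] @ vecs[:, 1])  # ~0: the eigenvectors are orthogonal
```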
3.8. Quadratic forms.
Definition 3.18. Let $A$ be a real $n \times n$ matrix, and $x \in \mathbb{R}^n$. Then the real-valued function $x^T A x$ is called a quadratic form. For example, if $A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$, then the quadratic form associated with $A$ is $a_{11} x_1^2 + a_{22} x_2^2 + (a_{12} + a_{21}) x_1 x_2$.
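A short sketch comparing $x^T A x$ with the expanded expression; the entries of $A$ and $x$ are sample values chosen for the demonstration:

```python
# Sketch: evaluate the quadratic form x^T A x and compare it with the
# expanded 2x2 expression.
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, 4.0]])
x = np.array([5.0, -1.0])

print(x @ A @ x)  # 34.0
print(A[0, 0] * x[0]**2 + A[1, 1] * x[1]**2
      + (A[0, 1] + A[1, 0]) * x[0] * x[1])  # 34.0 again
```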
Example. Given $A = \begin{pmatrix} 2 & 0 & 36 \\ 0 & 3 & 0 \\ 36 & 0 & 23 \end{pmatrix}$, compute $\exp(tA)$.
Solution. We leave as an exercise to show that the eigenvalues are $\lambda = 3, -25, 50$, with corresponding eigenvectors $v_1 = (0, 1, 0)^T$, $v_2 = (4/5, 0, -3/5)^T$, and $v_3 = (3/5, 0, 4/5)^T$. It follows that the matrix $P$ that diagonalizes $A$ is
$$P = \begin{pmatrix} 0 & 4/5 & 3/5 \\ 1 & 0 & 0 \\ 0 & -3/5 & 4/5 \end{pmatrix},$$
and $A = P \,\mathrm{diag}(3, -25, 50)\, P^T$.
From our theorem, it follows that $\exp(tA) = P \exp(t\,\mathrm{diag}(3, -25, 50))\, P^T$, and $\exp(t\,\mathrm{diag}(3, -25, 50)) = \mathrm{diag}(\exp(3t), \exp(-25t), \exp(50t))$. It is easy to verify that
$$\exp(tA) = \begin{pmatrix} \frac{16}{25} e^{-25t} + \frac{9}{25} e^{50t} & 0 & -\frac{12}{25} e^{-25t} + \frac{12}{25} e^{50t} \\ 0 & e^{3t} & 0 \\ -\frac{12}{25} e^{-25t} + \frac{12}{25} e^{50t} & 0 & \frac{9}{25} e^{-25t} + \frac{16}{25} e^{50t} \end{pmatrix}.$$
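One can confirm this decomposition numerically; the sketch below compares SciPy's matrix exponential against $P \exp(tD) P^T$ (the value of $t$ is an arbitrary sample):

```python
# Sketch: compare SciPy's expm with P exp(tD) P^T for the example above.
import numpy as np
from scipy.linalg import expm

A = np.array([[ 2.0, 0.0, 36.0],
              [ 0.0, 3.0,  0.0],
              [36.0, 0.0, 23.0]])
P = np.array([[0.0,  4/5, 3/5],
              [1.0,  0.0, 0.0],
              [0.0, -3/5, 4/5]])
d = np.array([3.0, -25.0, 50.0])  # eigenvalues

t = 0.1
print(np.allclose(expm(t * A), P @ np.diag(np.exp(t * d)) @ P.T))  # True
```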
Let $A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$ denote an arbitrary $2 \times 2$ matrix.
Recall that the determinant of $A$ is defined by $\det(A) := a_{11} a_{22} - a_{12} a_{21}$. More generally, let $A$ be a square $n \times n$ matrix, and denote the entry in the $i$-th row and $j$-th column by $a_{ij}$. Then $\det(A)$ is the sum of the signed elementary products $\pm\, a_{1 j_1} a_{2 j_2} \cdots a_{n j_n}$ over all permutations $\{j_1, j_2, \ldots, j_n\}$ of $\{1, 2, \ldots, n\}$, where the sign is $+$ if the permutation is even, and $-$ if the permutation is odd.
Example. Solve $\begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$ using Cramer's rule.
Solution:
$$x_1 = \frac{\det\begin{pmatrix} 0 & 0 \\ 1 & 1 \end{pmatrix}}{\det\begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}} = \frac{0}{1} = 0, \qquad x_2 = \frac{\det\begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}}{\det\begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}} = 1.$$
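Cramer's rule is also easy to code directly; a minimal sketch, applied to the system above (practical solvers avoid determinants, but this mirrors the formula):

```python
# A minimal Cramer's-rule solver (sketch).
import numpy as np

def cramer(A, b):
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b  # replace the i-th column by the right-hand side
        x[i] = np.linalg.det(Ai) / det_A
    return x

A = np.array([[1.0, 0.0],
              [2.0, 1.0]])
b = np.array([0.0, 1.0])
print(cramer(A, b))  # [0. 1.]
```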
3.13. Formula for $A^{-1}$.
Definition 3.25. If $A$ is a square matrix, then the minor of entry $a_{ij}$ is denoted by $M_{ij}$, and is defined to be the determinant of the submatrix that remains when the $i$-th row and $j$-th column are deleted. The number $C_{ij} = (-1)^{i+j} M_{ij}$ is called the cofactor of entry $a_{ij}$.
Theorem 3.27. If $A$ is invertible, then its inverse is given by $A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A) = \frac{1}{\det(A)} C^T$.
Example. Find the inverse of $A = \begin{pmatrix} 2 & 0 & 3 \\ 0 & 3 & 2 \\ 2 & 0 & 4 \end{pmatrix}$ using Theorem 3.27.
Solution: First we compute the determinant, expanding along the first row:
$$\det(A) = 2 \det\begin{pmatrix} 3 & 2 \\ 0 & 4 \end{pmatrix} - 0 \det\begin{pmatrix} 0 & 2 \\ 2 & 4 \end{pmatrix} + 3 \det\begin{pmatrix} 0 & 3 \\ 2 & 0 \end{pmatrix} = 2(12) + 3(-6) = 6.$$
We can similarly obtain the remaining $C_{ij}$, which determine the adjoint. Finally we find that the inverse is:
$$A^{-1} = \frac{1}{6} \begin{pmatrix} 12 & 0 & -9 \\ 4 & 2 & -4 \\ -6 & 0 & 6 \end{pmatrix}.$$
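The cofactor formula of Theorem 3.27 can likewise be coded directly; a sketch, checked against the example (for large matrices this is far slower than standard elimination):

```python
# Sketch: invert A via the cofactor (adjugate) formula of Theorem 3.27.
import numpy as np

def adjugate_inverse(A):
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / np.linalg.det(A)  # adj(A) = C^T

A = np.array([[2.0, 0.0, 3.0],
              [0.0, 3.0, 2.0],
              [2.0, 0.0, 4.0]])
print(adjugate_inverse(A))  # (1/6) [[12, 0, -9], [4, 2, -4], [-6, 0, 6]]
```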
The edge vectors of the parallelogram are $\vec{P_1 P_2} = (3, 2)^T$ and $\vec{P_1 P_4} = (3, 1)^T$. Placing these vectors as the columns of the matrix $A = \begin{pmatrix} 3 & 3 \\ 2 & 1 \end{pmatrix}$, by our theorem we know that the area of our parallelogram is given by $|\det(A)| = 3$.
3.15. Cross product.
Definition 3.29. Let $u = (u_1, u_2, u_3)^T$, $v = (v_1, v_2, v_3)^T$. The cross product of $u$ with $v$, denoted $u \times v$, is the vector
$$u \times v := \left( \det\begin{pmatrix} u_2 & u_3 \\ v_2 & v_3 \end{pmatrix},\; -\det\begin{pmatrix} u_1 & u_3 \\ v_1 & v_3 \end{pmatrix},\; \det\begin{pmatrix} u_1 & u_2 \\ v_1 & v_2 \end{pmatrix} \right)^T.$$
Example. For $u = (1, 0, 2)^T$, $v = (-3, 1, 0)^T$, compute $u \times v$.
Solution: By the definition,
$$u \times v = \big( 0 \cdot 0 - 2 \cdot 1,\; -(1 \cdot 0 - 2 \cdot (-3)),\; 1 \cdot 1 - 0 \cdot (-3) \big)^T = (-2, -6, 1)^T.$$
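NumPy's built-in cross product implements the same formula, giving a quick check of the computation:

```python
# Quick check of the example above against NumPy's np.cross.
import numpy as np

u = np.array([1.0, 0.0, 2.0])
v = np.array([-3.0, 1.0, 0.0])
print(np.cross(u, v))  # [-2. -6.  1.]
```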
Example. Find all eigenvalues of the matrix $A = \begin{pmatrix} 4 & 0 & 1 \\ -2 & 1 & 0 \\ -2 & 0 & 1 \end{pmatrix}$, and the corresponding eigenvectors.
Solution: We note that $\lambda$ is an eigenvalue provided the equation $(A - \lambda \mathrm{Id})x = 0$ has a solution for some non-zero $x$, where $\mathrm{Id}$ denotes the identity matrix. This is only possible if $\lambda$ solves the characteristic equation $\det(A - \lambda \mathrm{Id}) = 0$.
For the matrix $A$ in our example, the characteristic equation reads (check!) $\lambda^3 - 6\lambda^2 + 11\lambda - 6 = (\lambda - 1)(\lambda - 2)(\lambda - 3) = 0$, which has solutions $\lambda = 1, 2, 3$.
Next, to determine the eigenvectors corresponding to $\lambda = 1$, we must solve the system $(A - \mathrm{Id})x = 0$ for non-zero $x$. In other words, we solve
$$\begin{pmatrix} 3 & 0 & 1 \\ -2 & 0 & 0 \\ -2 & 0 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$
Using your favourite solution method, you can easily determine that one eigenvector is $(x_1, x_2, x_3)^T = (0, 1, 0)^T$. Similarly, we find an eigenvector corresponding to $\lambda = 2$ is $(1, -2, -2)^T$, and for $\lambda = 3$ the eigenvector is $(1, -1, -1)^T$. Finally, it is important to note that any non-zero scalar multiple of any of these eigenvectors is also an eigenvector, so we have actually determined a subspace of eigenvectors corresponding to each eigenvalue (referred to as the eigenspace of $\lambda$).
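A quick numerical check of the eigenvalues and eigenvectors found above:

```python
# Sketch: numerical check of the eigenvalue example with NumPy.
import numpy as np

A = np.array([[ 4.0, 0.0, 1.0],
              [-2.0, 1.0, 0.0],
              [-2.0, 0.0, 1.0]])
vals, vecs = np.linalg.eig(A)
print(np.round(vals, 10))  # 1, 2, 3 (in some order)
# Each column of vecs is a unit eigenvector: a scalar multiple of the
# corresponding hand-computed eigenvector, spanning the same eigenspace.
```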
Definition 4.3. If $n$ is a positive integer, then a complex $n$-tuple is a sequence of $n$ complex numbers $(v_1, \ldots, v_n)$. The set of all complex $n$-tuples is called complex $n$-space and is denoted by $\mathbb{C}^n$.
Definition 4.4. If $u = (u_1, u_2, \ldots, u_n)$ and $v = (v_1, v_2, \ldots, v_n)$ are vectors in $\mathbb{C}^n$, then the complex Euclidean dot (inner) product of $u$ and $v$ is defined $u \cdot v := u_1 \overline{v_1} + u_2 \overline{v_2} + \cdots + u_n \overline{v_n}$. The Euclidean norm is $\|v\| := \sqrt{v \cdot v}$.
Definition 4.5. A complex matrix $A$ is a matrix whose entries are complex numbers. Further, we define the complex conjugate of a matrix $A$, denoted $\overline{A}$, to be the matrix whose entries are the complex conjugates of the entries of $A$. That is, if $A$ has entries $a_{ij}$, then $\overline{A}$ has entries $\overline{a_{ij}}$.
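One caveat worth illustrating in code: NumPy's `np.vdot` conjugates its first argument, while Definition 4.4 conjugates the second, so the arguments must be ordered accordingly. A small sketch:

```python
# Sketch: the complex Euclidean inner product of Definition 4.4 in NumPy.
# np.vdot conjugates its FIRST argument, so we pass v first.
import numpy as np

u = np.array([1 + 2j, 3j])
v = np.array([2 - 1j, 1 + 1j])
print(np.vdot(v, u))   # u . v = u1*conj(v1) + u2*conj(v2)
print(u @ np.conj(v))  # the same value, written out explicitly
```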
Example. Given $A = \begin{pmatrix} 4 & -5 \\ 1 & 0 \end{pmatrix}$, determine the eigenvalues and find bases for the corresponding eigenspaces.
Solution: It is left as an exercise to check that the characteristic equation is $\lambda^2 - 4\lambda + 5 = 0$, so the eigenvalues are $\lambda = 2 \pm i$.
Let us determine the eigenspace corresponding to $\lambda = 2 + i$. We must solve
$$\begin{pmatrix} 2 - i & -5 \\ 1 & -2 - i \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
Since we know this system must have a non-zero solution, it follows that one of the rows in the reduced matrix must be a row of zeros. Hence we need only solve $(2 - i)x - 5y = 0$; taking $y = 1$ gives $x = \frac{5}{2 - i} = 2 + i$, so the eigenvector $(x, y) = (2 + i, 1)$ spans the eigenspace of $\lambda = 2 + i$.
It is a good exercise for the reader to check that for a complex eigenvalue $\lambda$ with corresponding eigenvector $x$, it is always true that $\overline{\lambda}$ is another eigenvalue with corresponding eigenvector $\overline{x}$. Hence $(2 - i, 1)$ is a basis for the eigenspace corresponding to $\lambda = 2 - i$.
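A short numerical check that the eigenvalues, and the eigenvectors with them, occur in conjugate pairs:

```python
# Sketch: complex eigenvalues of a real matrix come in conjugate pairs.
import numpy as np

A = np.array([[4.0, -5.0],
              [1.0,  0.0]])
vals, vecs = np.linalg.eig(A)
print(vals)  # 2+1j and 2-1j (in some order)
# The eigenvector columns are conjugates of one another up to scaling;
# each is proportional to (2 +/- i, 1).
```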
4.3. Generalized Eigenspaces.
Definition 4.7. Let $A$ be a complex $n \times n$ matrix, with distinct eigenvalues $\{\lambda_1, \lambda_2, \ldots, \lambda_k\}$. The generalized eigenspace $V_i$ pertaining to $\lambda_i$ is defined by $V_i = \{x \in \mathbb{C}^n \mid (A - \lambda_i I)^n x = 0\}$. In particular, all eigenvectors corresponding to $\lambda_i$ are in $V_i$.
Theorem 4.8. Let $A$ be a complex $n \times n$ matrix with distinct eigenvalues $\{\lambda_1, \lambda_2, \ldots, \lambda_k\}$ and corresponding generalized eigenspaces $V_i$, $i = 1, \ldots, k$. Then:
(1) $V_i$ is invariant under $A$, in the sense that $A V_i \subseteq V_i$ for $i = 1, \ldots, k$.
(2) The spaces $V_i$ are mutually linearly independent.
(3) $\dim V_i = m(\lambda_i)$, where $m(\lambda_i)$ is the multiplicity of the eigenvalue $\lambda_i$.
(4) $A$ is similar to a block diagonal matrix with $k$ blocks $A_1, \ldots, A_k$.
4.4. Jordan Normal Form.
Definition 4.9. Let $\lambda \in \mathbb{C}$. A Jordan block $J_k(\lambda)$ is a $k \times k$ upper-triangular matrix of the form
$$J_k(\lambda) = \begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda & 1 \\ 0 & 0 & \cdots & 0 & \lambda \end{pmatrix}.$$
Definition 4.10. A Jordan matrix is any matrix of the form
$$J = \begin{pmatrix} J_{n_1}(\lambda_1) & & 0 \\ & \ddots & \\ 0 & & J_{n_k}(\lambda_k) \end{pmatrix},$$
where each $J_{n_i}(\lambda_i)$ is a Jordan block, and $n_1 + n_2 + \cdots + n_k = n$.
Theorem 4.11. (Jordan decomposition) Every complex $n \times n$ matrix $A$ can be written as
$$A = S \begin{pmatrix} J_{n_1}(\lambda_1) & & 0 \\ & \ddots & \\ 0 & & J_{n_k}(\lambda_k) \end{pmatrix} S^{-1} = S J S^{-1},$$
where each $J_{n_i}(\lambda_i)$ is a Jordan block, and $n_1 + n_2 + \cdots + n_k = n$. The eigenvalues $\lambda_i$ are not necessarily distinct, though if $A$ is real with real eigenvalues, then $S$ can be taken to be real.
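For concreteness, SymPy can compute such a decomposition symbolically; the matrix below is a sample chosen for illustration, not one from the text:

```python
# Sketch: SymPy computes a Jordan decomposition A = S J S^{-1} symbolically.
from sympy import Matrix

A = Matrix([[ 5,  4,  2,  1],
            [ 0,  1, -1, -1],
            [-1, -1,  3,  0],
            [ 1,  1, -1,  2]])
S, J = A.jordan_form()  # returns (S, J) with A = S * J * S**-1
print(J)  # for this sample: blocks for 1 and 2, and a 2x2 block for 4
```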
5. Inner product spaces
5.1. Inner product.
Definition 5.1. An inner product on a real vector space $V$ is a function that associates a unique real number $\langle u, v \rangle$ to each pair of vectors $u, v \in V$, in such a way that the following properties hold for all $u, v, w \in V$ and scalars $k$:
(1) $\langle u, v \rangle = \langle v, u \rangle$ (symmetry);
(2) $\langle u + v, w \rangle = \langle u, w \rangle + \langle v, w \rangle$ (additivity);
(3) $\langle ku, v \rangle = k \langle u, v \rangle$ (homogeneity);
(4) $\langle v, v \rangle \geq 0$, with $\langle v, v \rangle = 0$ if and only if $v = 0$ (positivity).
A real vector space equipped with an inner product is called a real inner product
space.
Illustration. The most familiar example of an inner product space is $\mathbb{R}^n$, equipped with the Euclidean dot product as inner product. That is, for $v, w \in \mathbb{R}^n$, we define the dot product $v \cdot w := \sum_{i=1}^n v_i w_i$.
Example. Let $V = C([0, 2\pi])$, the continuous real-valued functions defined on the closed interval $[0, 2\pi]$. We make $V$ into an inner product space by defining an inner product $\langle f, g \rangle := \int_0^{2\pi} f(x) g(x)\, dx$, for any two functions $f, g \in V$.
Suppose $p$ and $q$ are distinct non-zero integers. Show that $f(x) = \sin qx$ and $g(x) = \cos px$ are orthogonal with respect to this inner product.
Solution: Using the identity $\cos px \sin qx = \frac{1}{2}[\sin (p+q)x - \sin (p-q)x]$, we see that
$$\langle f, g \rangle = \int_0^{2\pi} \cos px \sin qx\, dx = \frac{1}{2} \int_0^{2\pi} [\sin (p+q)x - \sin (p-q)x]\, dx = 0 - 0 = 0,$$
as required.
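The integral is easy to confirm symbolically; in this sketch $p$ and $q$ are sample values satisfying the hypotheses:

```python
# Symbolic check of the orthogonality integral (sketch).
from sympy import symbols, sin, cos, integrate, pi

x = symbols('x')
p, q = 3, 2  # sample distinct non-zero integers
print(integrate(cos(p*x) * sin(q*x), (x, 0, 2*pi)))  # 0
```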
5.2. Norms, Cauchy-Schwarz inequality.
Definition 5.2. If $V$ is an inner product space, then we define the norm of $v \in V$ by $\|v\| = \sqrt{\langle v, v \rangle}$, and the distance between $u$ and $v$ by $d(u, v) = \|u - v\|$.
Theorem 5.3. (Pythagoras) If $u, v \in V$ are orthogonal with respect to the inner product, then $\|u + v\|^2 = \|u\|^2 + \|v\|^2$.
Theorem 5.4. (Cauchy-Schwarz Inequality) If $u, v$ are vectors in an inner product space $V$, then $|\langle u, v \rangle| \leq \|u\| \, \|v\|$.
Theorem 5.5. (Triangle Inequality) If $u, v$ are vectors in an inner product space, then $\|u + v\| \leq \|u\| + \|v\|$.
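A quick numerical spot-check of the Cauchy-Schwarz inequality for the Euclidean inner product, on randomly drawn vectors:

```python
# Numerical spot-check of Cauchy-Schwarz (sketch).
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.standard_normal(5), rng.standard_normal(5)
print(abs(u @ v) <= np.linalg.norm(u) * np.linalg.norm(v))  # True
```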