Analysis of a Matrix I
Chun-Yueh Chiang1
Center for General Education, National Formosa University, Huwei 632, Taiwan.
Matthew M. Lin2,
Department of Mathematics, National Chung Cheng University, Chia-Yi 621, Taiwan.
Abstract
The eigenvalue shift technique is among the best-known and most fundamental tools in matrix computations. Applications include the search for eigeninformation, the acceleration of numerical algorithms, and the study of Google's PageRank. The shift strategy arises from a concept investigated by Brauer [1] for changing the value of one eigenvalue of a matrix to a desired one, while keeping the remaining eigenvalues and the original eigenvectors unchanged. The idea of shifting several distinct eigenvalues is an easy generalization of Brauer's idea. However, shifting an eigenvalue of multiplicity greater than one is a challenging issue and worthy of investigation. In this work, we propose a new way of updating an eigenvalue with multiple multiplicities and thoroughly analyze the corresponding Jordan canonical form after the update procedure.
Keywords: eigenvalue shift technique, Jordan canonical form, rank-k updates.
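Brauer's rank-one shift described in the abstract can be illustrated numerically. The sketch below uses toy data of our own choosing (the matrix A, the vector q, and the target shift are not from the paper): if A v = λ v, then A + v q^* has eigenvalue λ + q^* v in place of λ, the other eigenvalues untouched, and v still an eigenvector.

```python
import numpy as np

# Brauer's rank-one shift: if A v = lam * v, then A + v q^T has eigenvalues
# {lam + q^T v} together with the remaining eigenvalues of A, and v is still
# an eigenvector of the shifted matrix.  All numbers here are our own choices.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 4.0]])
v = np.array([1.0, 0.0, 0.0])        # eigenvector of A for lam = 2
q = np.array([5.0, 0.0, 0.0])        # q^T v = 5, so 2 is shifted to 2 + 5 = 7
A_shifted = A + np.outer(v, q)

eigs = np.sort(np.linalg.eigvals(A_shifted).real)
print(eigs)                          # [3. 4. 7.]
print(np.allclose(A_shifted @ v, 7.0 * v))   # True: v is kept
```

The remaining eigenvalues 3 and 4 are untouched, which is exactly the property the shift strategy exploits.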
1. Introduction
Corresponding author.
Email addresses: chiang@nfu.edu.tw (Chun-Yueh Chiang), mlin@math.ccu.edu.tw (Matthew M. Lin)
1 The first author was supported by the National Science Council of Taiwan under grant
NSC100-2115-M-150-001.
2 The second author was supported by the National Science Council of Taiwan under grant
NSC99-2115-M-194-010-MY2.
with V_1, V_3 ∈ C^{n×k} and V_2, V_4 ∈ C^{n×(n−k)}. A natural idea for updating the eigenvalue appearing in the top left corner of the matrix J, while keeping V and V^{-1} unchanged, is to add a rank-k matrix ΔA = V_1 ΔJ_1 V_3^* to the original matrix A, so that

A + ΔA = V [ J_k(λ) + ΔJ_1   0 ; 0   J_2 ] V^{-1}.
For a given n × n square matrix A, let σ(A) be the set of all eigenvalues of A. We say that two vectors u and v in C^n are orthogonal if u^* v = 0. In our study, we are interested in the orthogonality of the eigenvectors of a given matrix. This feature is known as the principle of biorthogonality and is discussed in [15, Theorem 1.4.7] as follows:
Theorem 2.1 gives the principle of biorthogonality for any two distinct eigenvalues. Next, we want to extend this feature to generalized left and right eigenvectors.
Theorem 2.2 (Generalized Biorthogonality Property). Let A ∈ C^{n×n} and let λ_1, λ_2 ∈ σ(A). Suppose {u_i}_{i=1}^{p} and {v_i}_{i=1}^{q} are the generalized left and right eigenvectors corresponding to the Jordan blocks J_p(λ_1) and J_q(λ_2), respectively. Then

(a) If λ_1 ≠ λ_2,
u_i^* v_j = 0,  1 ≤ i ≤ p,  1 ≤ j ≤ q.  (1a)

(b) If λ_1 = λ_2, then u_i^* v_j = 0 whenever 2 ≤ i + j ≤ max{p, q}, and u_i^* v_j depends only on i + j whenever max{p, q} + 1 ≤ i + j ≤ p + q.  (1b)

Proof. Write U := [u_1 ... u_p] and V := [v_1 ... v_q]. By the definition of generalized left and right eigenvectors,
U^* A = J_p(λ_1)^T U^*,  A V = V J_q(λ_2).
That is,
U^* A V = (U^* V) J_q(λ_2) = J_p(λ_1)^T (U^* V).  (2)
Define components x_{i,j} = u_i^* v_j, x_{i,0} = 0, and x_{0,j} = 0, for i = 1, ..., p and j = 1, ..., q. Then (2) implies that

(λ_1 − λ_2) x_{i,j} = x_{i−1,j} − x_{i,j−1},  1 ≤ i ≤ p,  1 ≤ j ≤ q.  (3)

It follows from (3) and the assumptions on x_{i,j} that if λ_1 ≠ λ_2, then x_{i,j} = 0 for 1 ≤ i ≤ p and 1 ≤ j ≤ q. This proves (a).
By (3), we see that if λ_1 = λ_2, then x_{i−1,j} = x_{i,j−1} for 1 ≤ i ≤ p and 1 ≤ j ≤ q. It follows that x_{i,j} = 0 for 2 ≤ i + j ≤ max{p, q}, and that the elements x_{i,j} coincide along each anti-diagonal i + j = s for max{p, q} + 1 ≤ s ≤ p + q. This completes the proof of (b).
Note that the values x_{i,j} defined in the proof of Theorem 2.2 imply that, if λ_1 = λ_2, the matrix X = [x_{i,j}]_{p×q} is indeed a lower triangular Hankel matrix. Also, by (1a), we have the following useful results.
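The biorthogonality statements above are easy to check numerically. The following sketch, with randomly generated data of our own choosing, builds a matrix with Jordan blocks J_2(1) and J_3(5), reads the right chains off the columns of P and the left chains off the rows of P^{-1} (bottom-up within each block), and observes both part (a) and the Hankel structure of X.

```python
import numpy as np

rng = np.random.default_rng(0)

def jordan(lam, p):
    # p x p upper bidiagonal Jordan block J_p(lam)
    return lam * np.eye(p) + np.diag(np.ones(p - 1), 1)

# A has Jordan blocks J_2(1) and J_3(5); the columns of P are the right chains.
J = np.block([[jordan(1.0, 2), np.zeros((2, 3))],
              [np.zeros((3, 2)), jordan(5.0, 3)]])
P = rng.standard_normal((5, 5))
A = P @ J @ np.linalg.inv(P)
Pinv = np.linalg.inv(P)

V1 = P[:, :2]              # right chain for J_2(1)
# Left chains come from rows of P^{-1}, taken bottom-up within each block.
U1 = Pinv[0:2][::-1].T     # left chain for J_2(1)
U2 = Pinv[2:5][::-1].T     # left chain for J_3(5)

# (a) chains of distinct eigenvalues are biorthogonal: U2^* V1 = 0
print(np.abs(U2.T @ V1).max())     # ~ 0 up to roundoff
# (b) same eigenvalue: X = [u_i^* v_j] is lower triangular Hankel
X = U1.T @ V1
print(np.round(X, 10))             # anti-diagonal [[0 1], [1 0]] here
```

For a full chain pair built from P and P^{-1} as above, X is exactly the anti-diagonal matrix, the simplest lower triangular Hankel matrix.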
Corollary 2.1. Let A ∈ C^{n×n} and let λ ∈ σ(A). Suppose {u_i}_{i=1}^{p} and {v_i}_{i=1}^{q} are the generalized left and right eigenvectors corresponding to the Jordan blocks J_p(λ) and J_q(λ), respectively. Then, if p and q are even,

u_i^* v_j = 0,  1 ≤ i ≤ p/2,  1 ≤ j ≤ q/2;  (4)

if p and q are odd, then

u_i^* v_j = 0,  u_{(p+1)/2}^* v_j = 0,  u_i^* v_{(q+1)/2} = 0,  1 ≤ i ≤ (p−1)/2,  1 ≤ j ≤ (q−1)/2.  (5)
Note that Corollary 2.1 provides only sufficient conditions for two generalized left and right eigenvectors to be orthogonal. It is possible for two generalized eigenvectors to be orthogonal even if they do not satisfy the constraints given in (4) and (5). This phenomenon can be observed in the following two examples. We first write down all possible forms of the generalized eigenvectors of a Jordan matrix with one and with two Jordan blocks, respectively. We then exhibit two extreme cases for each Jordan matrix under discussion: one with the smallest number of orthogonal generalized left and right eigenvector pairs, and the other with the largest number.
Example 2.1. Let A = J_{2k}(λ). Then all of the generalized left and right eigenvectors {u_i}_{i=1}^{2k} ⊂ C^{2k} and {v_i}_{i=1}^{2k} ⊂ C^{2k}, respectively, can be written as

u_i = Σ_{j=1}^{i} a_{i−j+1} e_{2k−j+1},  v_i = Σ_{j=1}^{i} c_{i−j+1} e_j,  1 ≤ i ≤ 2k,

with a_1 ≠ 0 and c_1 ≠ 0. Regardless of the choice of the coefficients,

u_i^* v_j = 0,  1 ≤ i, j ≤ k,  and  u_i^* v_j ≠ 0,  i + j = 2k + 1,

while for a suitable choice of the remaining coefficients one also has

u_i^* v_j = 0  whenever i + j ≠ 2k + 1;  in particular, u_i^* v_j = 0 for k + 1 ≤ i, j ≤ 2k.
Example 2.2. Let A = J_k(λ) ⊕ J_k(λ). Then all of the generalized left eigenvectors of the first Jordan block J_k(λ) (the upper left corner) and the generalized right eigenvectors of the second Jordan block J_k(λ) (the lower right corner) can be written as

u_i = Σ_{j=1}^{i} ( a_{i−j+1} e_{k−j+1} + b_{i−j+1} e_{2k−j+1} ),  v_i = Σ_{j=1}^{i} ( c_{i−j+1} e_j + d_{i−j+1} e_{k+j} ),  1 ≤ i ≤ k.

Here it always holds that

u_i^* v_j = 0,  i + j < k + 1,

whereas, depending on the coefficients, the two extreme cases are

u_i^* v_j ≠ 0 for i + j ≥ k + 1,  or  u_i^* v_j = 0 for all 1 ≤ i, j ≤ k.
Given a matrix A ∈ C^{n×n}, let us close this section with a study of how the resolvent operator (A − μI_n)^{-1} acts on its generalized left and right eigenvectors. Evidently, the orthogonality between two generalized left and right eigenvectors is affected by this mapping. This influence will play a crucial role in the eigenvalue shift technique proposed later on.
Theorem 2.3. Let A be a matrix in C^{n×n} and let λ_0 ∈ σ(A). Suppose {u_i}_{i=1}^{p} and {v_i}_{i=1}^{p} are the generalized left and right eigenvectors corresponding to J_p(λ_0). Let μ be a complex number with μ ∉ σ(A). Then

(a) For 1 ≤ i ≤ p,

(A − μI_n)^{-1} v_i = Σ_{j=1}^{i} (−1)^{i−j} v_j / (λ_0 − μ)^{i−j+1},

u_i^* (A − μI_n)^{-1} = Σ_{j=1}^{i} (−1)^{i−j} u_j^* / (λ_0 − μ)^{i−j+1}.

In particular, for i = 2,

(A − μI_n)^{-1} v_2 = ( v_2 − (A − μI_n)^{-1} v_1 ) / (λ_0 − μ) = Σ_{j=1}^{2} (−1)^{2−j} v_j / (λ_0 − μ)^{2−j+1}.
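The resolvent formula in Theorem 2.3(a) is easy to verify for a single Jordan block, where the right chain is simply v_i = e_i. A small check with sample values λ_0 = 2, μ = 5 (our own choice):

```python
import numpy as np

# Verify Theorem 2.3(a) for A = J_3(lam0): the right chain is v_i = e_i and
# (A - mu I)^{-1} v_i = sum_{j=1}^{i} (-1)^{i-j} v_j / (lam0 - mu)^{i-j+1}.
lam0, mu, p = 2.0, 5.0, 3
A = lam0 * np.eye(p) + np.diag(np.ones(p - 1), 1)
I = np.eye(p)

for i in range(1, p + 1):
    v_i = I[:, i - 1]
    lhs = np.linalg.solve(A - mu * I, v_i)       # (A - mu I)^{-1} v_i
    rhs = sum((-1.0) ** (i - j) * I[:, j - 1] / (lam0 - mu) ** (i - j + 1)
              for j in range(1, i + 1))
    assert np.allclose(lhs, rhs)
print("resolvent formula verified for i = 1..3")
```

The same loop with the rows of the identity verifies the companion formula for the left chain.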
Now we have enough tools to analyze the eigenvalue shift problems. We first establish a result for eigenvalues with even algebraic multiplicity and then generalize it to eigenvalues with odd algebraic multiplicity. We show how to shift a desired eigenvalue of a given matrix without changing the remaining ones.
Theorem 3.1. Let A ∈ C^{n×n}, let λ_0 ∈ σ(A) with algebraic multiplicity 2k and geometric multiplicity 1, and let {u_i}_{i=1}^{2k} and {v_i}_{i=1}^{2k} be the left and right Jordan chains for λ_0. Define two matrices

U := [u_1  u_2  ...  u_k],  V := [v_1  v_2  ...  v_k].  (6)

If the matrices R ∈ C^{n×k} and L ∈ C^{n×k} are, respectively, the right and left inverses of V and U satisfying

U^* L = R^* V = I_k,  (7)

then the shifted matrix

Â := A + (λ_1 − λ_0) R_1 R_2^*,  (8)

where R_1 := [V  L] and R_2 := [R  U], has the following properties:
To prove (b), we see by Theorem 2.2 that Â v_1 = λ_1 v_1 and Â v_{i+1} = λ_1 v_{i+1} + v_i, for i = 1, ..., k − 1. The same approach can be carried out to obtain u_1^* Â = λ_1 u_1^* and u_{i+1}^* Â = λ_1 u_{i+1}^* + u_i^*, for i = 1, ..., k − 1.
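As a sanity check of Theorem 3.1, the following sketch carries out the rank-k update for the smallest even case A = J_4(λ_0), with the simplest admissible inverses R and L (our own choices; any matrices satisfying (7) would do). For this A the right chain is v_i = e_i and the left chain is u_i = e_{5−i}.

```python
import numpy as np

lam0, lam1, k = 0.0, 3.0, 2
n = 2 * k
A = lam0 * np.eye(n) + np.diag(np.ones(n - 1), 1)   # A = J_4(lam0)
I = np.eye(n)

V = I[:, :k]                   # [v_1 v_2] = [e_1 e_2]
U = I[:, [3, 2]]               # [u_1 u_2] = [e_4 e_3]
R = V.copy()                   # R^* V = I_k
L = U.copy()                   # U^* L = I_k

R1 = np.hstack([V, L])
R2 = np.hstack([R, U])
A_hat = A + (lam1 - lam0) * R1 @ R2.conj().T        # formula (8)

eigs = np.sort(np.linalg.eigvals(A_hat).real)
print(eigs)                                          # [3. 3. 3. 3.]
# v_1, v_2 remain a Jordan chain of A_hat at lam1:
print(np.allclose(A_hat @ V[:, 0], lam1 * V[:, 0]))           # True
print(np.allclose(A_hat @ V[:, 1], lam1 * V[:, 1] + V[:, 0])) # True
```

All 2k copies of λ_0 move to λ_1 while the first half of the right chain is preserved, exactly as properties (a) and (b) assert.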
Next, it is natural to ask about the eigenstructure of the updated matrix Â defined in Theorem 3.1. To this end, we use the notation given in Theorem 3.1. Assume without loss of generality that A ∈ C^{2k×2k} and that P = [v_1 ... v_{2k}] is a nonsingular matrix so that

P^{-1} A P = J_{2k}(λ_0).

By Theorem 2.2, we know that the matrix product [U  U_1]^* P ∈ C^{2k×2k} is a lower triangular Hankel matrix, where U_1 := [u_{k+1} ... u_{2k}]. This suggests a way of constructing the matrices U and V in (6), that is,

U = P^{-*} [0 ; Q]  and  V = P [I_k ; 0],

for some k × k matrix Q. Accordingly, matrices R and L satisfying (7) can be written as R = P^{-*} [I_k ; S_1^*] and L = P [S_2 ; Q^{-*}] for some S_1, S_2 ∈ C^{k×k}. A direct computation then yields

Â = A + (λ_1 − λ_0)(V R^* + L U^*)
  = P ( J_{2k}(λ_0) + (λ_1 − λ_0)( [I_k  S_1 ; 0  0] + [0  S_2 Q^* ; 0  I_k] ) ) P^{-1}
  = P [ J_k(λ_1)   e_k e_1^* + (λ_1 − λ_0)(S_1 + S_2 Q^*) ; 0   J_k(λ_1) ] P^{-1}.

That is,

Â ∼ T := [ J_k(λ_1)  C ; 0  J_k(λ_1) ],  (10)

for some matrix C ∈ C^{k×k}. Based on (10), the next two results discuss the possible eigenspace and eigenstructure types of the matrix T (i.e., Â). To facilitate the discussion below, we use {e_i}_{1≤i≤k} to denote the standard basis of R^k from now on.
Lemma 3.1. Let C be a matrix in C^{k×k}. If T is the matrix given by

T = [ J_k(λ)  C ; 0  J_k(λ) ],  (11)

then an eigenvector v of T satisfies

T v = λ v,

or, equivalently, writing v = [v^(1) ; v^(2)] with v^(1), v^(2) ∈ C^k and N_k := J_k(λ) − λ I_k,

N_k v^(1) + C v^(2) = 0  and  N_k v^(2) = 0.

In this sense, we classify the possible eigenstructure types, using the notation defined in Lemma 3.1, as follows:
Case 1. c_{k,1} = 0, C e_1 = 0. Then T ∼ J_k(λ) ⊕ J_k(λ), with the two cycles of generalized eigenvectors taken as

γ_1 = { [e_1 ; 0], ..., [e_k ; 0] },

γ_2 = { [0 ; e_1], [N_k C e_2 ; e_2], ..., [ Σ_{i=2}^{k} N_k^{k−i+1} C e_i ; e_k ] }.
Case 2. c_{k,1} = 0, C e_1 ≠ 0. Then T ∼ J_k(λ) ⊕ J_k(λ), with the two cycles of generalized eigenvectors taken as

γ_1 = { [e_1 ; 0], ..., [e_k ; 0] },

γ_2 = { [N_k C e_1 ; e_1], ..., [ Σ_{i=1}^{k} N_k^{k−i+1} C e_i ; e_k ] }.
Theorem 3.2. Let C be a matrix in C^{k×k}. Then the Jordan canonical form of the matrix Â, defined by equation (8), is either J_k(λ_1) ⊕ J_k(λ_1) or J_{2k}(λ_1).
Note that Theorem 3.2 also implies that the geometric multiplicity of Â is at most two. On the other hand, an analogous approach can seemingly be used to derive a shift technique and to characterize the corresponding eigenstructure for an eigenvalue with odd algebraic multiplicity. In fact, the analysis of the odd case is considerably more complicated, owing to the orthogonality property caused by the middle eigenvector (see Theorem 2.2), and a different approach is needed for our discussion.
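The dichotomy of Theorem 3.2 can be observed numerically: for A = J_4(λ_0), different admissible right inverses R in (7) produce shifted matrices Â whose geometric multiplicity at λ_1 is either one (Jordan form J_4) or two (Jordan form J_2 ⊕ J_2), and never more. The particular perturbation X below is our own choice, tuned to reach the second outcome.

```python
import numpy as np

lam0, lam1, k = 0.0, 3.0, 2
n = 2 * k
A = lam0 * np.eye(n) + np.diag(np.ones(n - 1), 1)   # A = J_4(lam0)
I = np.eye(n)
V = I[:, :k]                    # right chain [v_1 v_2] = [e_1 e_2]
U = I[:, [3, 2]]                # left chain  [u_1 u_2] = [e_4 e_3]
L = U.copy()                    # U^* L = I_k

def shifted(X):
    # every right inverse R with R^* V = I_k has the block form [I_k ; X^*]
    R = np.vstack([np.eye(k), X.conj().T])
    R1, R2 = np.hstack([V, L]), np.hstack([R, U])
    return A + (lam1 - lam0) * R1 @ R2.conj().T

def geo_mult(M):
    # geometric multiplicity of lam1 as an eigenvalue of M
    return n - np.linalg.matrix_rank(M - lam1 * np.eye(n))

print(geo_mult(shifted(np.zeros((k, k)))))            # 1 -> J_4(lam1)
X = np.array([[0.0, 0.0], [-1.0 / 3.0, 0.0]])
print(geo_mult(shifted(X)))                           # 2 -> J_2 + J_2
```

With X = 0 the update gives exactly J_4(λ_1); the chosen X cancels the coupling entry c_{k,1} and yields J_2(λ_1) ⊕ J_2(λ_1), matching the two possibilities of Theorem 3.2.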
Let us now consider an eigenvalue with odd algebraic multiplicity. From Theorem 2.2(b), we know that if p = q and p, q are odd integers, then the inner product u_{(p+1)/2}^* v_{(p+1)/2} is a constant, but it is not immediately clear whether this product is zero or not. Since [u_1 ... u_p] and [v_1 ... v_p] are two nonsingular matrices, it follows that the inner product u_{(p+1)/2}^* v_{(p+1)/2} ≠ 0. The following result shows how to change an eigenvalue with odd algebraic multiplicity.
Theorem 4.1. Let A ∈ C^{n×n}, let λ_0 be an eigenvalue of A with algebraic multiplicity 2k + 1 and geometric multiplicity 1, and let {u_i}_{i=1}^{2k+1} and {v_i}_{i=1}^{2k+1} be the left and right Jordan chains for λ_0. Define two matrices

U := [u_1  u_2  ...  u_k],  V := [v_1  v_2  ...  v_k]  (13)

and a vector r = u_{k+1} / (v_{k+1}^* u_{k+1}). If the matrices R ∈ C^{n×k} and L ∈ C^{n×k} are, respectively, the right and left inverses of V and U satisfying

R^* V = U^* L = I_k,  (14)

then the shifted matrix

Â := A + (λ_1 − λ_0) R_1 R_2^*  (15)

where R_1 := [V  v_{k+1}  L] and R_2 := [R  r  U], has the following properties:
Proof. Since

R_2^* (A − μ I_n)^{-1} R_1 =
[ R^*(A − μ I_n)^{-1} V   R^*(A − μ I_n)^{-1} v_{k+1}   R^*(A − μ I_n)^{-1} L ;
  0   1/(λ_0 − μ)   r^*(A − μ I_n)^{-1} L ;
  0   0   U^*(A − μ I_n)^{-1} L ],

it follows from Theorem 2.2 and Theorem 2.3 that the characteristic polynomial of Â satisfies

det(Â − μ I_n) = det(A − μ I_n) det( I_n + (λ_1 − λ_0) R_1 R_2^* (A − μ I_n)^{-1} )
             = det(A − μ I_n) det( I_{2k+1} + (λ_1 − λ_0) R_2^* (A − μ I_n)^{-1} R_1 )
             = det(A − μ I_n) ( (λ_1 − μ)/(λ_0 − μ) )^{2k+1},

which proves (a). In an analogous way, we can obtain u_1^* Â = λ_1 u_1^* and u_{i+1}^* Â = λ_1 u_{i+1}^* + u_i^*, for i = 1, ..., k − 1, and (b) follows.
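For the smallest odd case, Theorem 4.1 can be traced by hand and by machine. Below, A = J_3(λ_0) (so 2k + 1 = 3 and k = 1); the minimal choices of R and L are our own, and any matrices satisfying (14) would serve. The shift sends all three copies of λ_0 to λ_1, in agreement with the characteristic polynomial identity in the proof.

```python
import numpy as np

lam0, lam1 = 0.0, 3.0
n = 3                                   # 2k + 1 with k = 1
A = lam0 * np.eye(n) + np.diag(np.ones(n - 1), 1)   # A = J_3(lam0)
e = np.eye(n)

V = e[:, [0]]                           # V = [v_1] = [e_1]
U = e[:, [2]]                           # U = [u_1] = [e_3]
vk1, uk1 = e[:, 1], e[:, 1]             # v_{k+1} = u_{k+1} = e_2
r = uk1 / (vk1 @ uk1)                   # r = u_2 / (v_2^* u_2)
R, L = V.copy(), U.copy()               # R^* V = U^* L = I_1

R1 = np.column_stack([V, vk1, L])
R2 = np.column_stack([R, r, U])
A_hat = A + (lam1 - lam0) * R1 @ R2.conj().T        # formula (15)

eigs = np.sort(np.linalg.eigvals(A_hat).real)
print(eigs)                             # [3. 3. 3.]
print(np.allclose(A_hat @ e[:, 0], lam1 * e[:, 0]))  # True: v_1 is kept
```

With these minimal choices R_1 = R_2 = I_3, so Â = J_3(λ_1) exactly; nontrivial inverses change the off-diagonal structure but not the spectrum.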
Theorem 4.1 provides us with a way to update an eigenvalue of a given Jordan block of odd size. Consequently, we are interested in analyzing the possible eigenstructure types of the shifted matrix Â. We start this discussion in a way analogous to Section 3, using the notation defined in Theorem 4.1. Assume for simplicity that the given matrix A is of size (2k + 1) × (2k + 1) and that P = [v_1 ... v_{2k+1}] is a nonsingular matrix such that

P^{-1} A P = J_{2k+1}(λ_0).

It follows that V = P [I_k ; 0]. Set U = P^{-*} [0 ; Q] for some lower triangular Hankel matrix Q ∈ C^{k×k}, and let v_{k+1} = P e_{k+1}, where e_{k+1} is the unit vector in R^{2k+1} with a 1 in position k + 1 and 0s elsewhere. Corresponding to formula (14) and the vector r given in Theorem 4.1, we define

R = P^{-*} [I_k ; S_1],  L = P [S_2 ; Q],  r = P^{-*} [0 ; w/w_1],

where s_1 is the first column of S_1 and s_2 is the last row of S_2. This implies that, after the shift procedure, the matrix Â is similar to an upper triangular matrix

S := [ J_k(λ_1)  a  C ; 0  λ_1  b^* ; 0  0  J_k(λ_1) ],  (16)

for some matrix C ∈ C^{k×k} and vectors a, b ∈ C^k. This similarity allows us to discuss the eigeninformation of the shifted matrix Â; that is, the eigeninformation of Â can be studied indirectly by considering the upper triangular matrix S of (16).
For the eigenvector equation

S v = λ_1 v,

or, equivalently, the block linear system obtained from the partition of S in (16), S has the eigenspace

E = span{ [e_1 ; 0 ; 0], [N_k C e_1 ; 0 ; e_1] },  if b_1 = 0, a_k ≠ 0, c_{k,1} = 0;

E = span{ [e_1 ; 0 ; 0], [N_k( −(c_{k,1}/a_k) a + C e_1 ) ; −c_{k,1}/a_k ; e_1] },  if b_1 = 0, a_k ≠ 0, c_{k,1} ≠ 0;

E = span{ [e_1 ; 0 ; 0], [N_k a ; 1 ; 0] },  if b_1 = 0, a_k = 0, c_{k,1} ≠ 0, or b_1 ≠ 0, a_k = 0;

E = span{ [e_1 ; 0 ; 0], [N_k(a + C e_1) ; 1 ; e_1], [N_k C e_1 ; 0 ; e_1] },  if b_1 = 0, a_k = 0, c_{k,1} = 0;

E = span{ [e_1 ; 0 ; 0] },  if b_1 ≠ 0, a_k ≠ 0.  (17)

We then obtain all possible types of eigenspaces by discussing the cases given in (17) step by step.
eigenvectors in an analogous way as we did in Section 3. We therefore seek to determine the Jordan canonical forms by means of a matrix transformation chosen so that the structure of S is simplified while its eigeninformation is left unchanged.
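As a computational aside (not part of the paper's derivation), the Jordan structure that the transformation below is designed to reveal can also be read off numerically from the ranks of powers of S − λI, i.e., from the Weyr characteristic. The helper below is a hypothetical utility of our own; rank decisions rely on a tolerance, so it is only reliable for exact or well-separated data.

```python
import numpy as np

def jordan_block_sizes(S, lam, tol=1e-10):
    # Jordan block sizes of S at eigenvalue lam, from ranks of (S - lam I)^m.
    n = S.shape[0]
    N = S - lam * np.eye(n)
    ranks, P = [n], np.eye(n)
    while True:                      # ranks[m] = rank(N^m), until it settles
        P = P @ N
        ranks.append(np.linalg.matrix_rank(P, tol=tol))
        if ranks[-1] == ranks[-2]:
            break
    # ge[m-1] = number of blocks of size >= m (Weyr characteristic differences)
    ge = [ranks[m - 1] - ranks[m] for m in range(1, len(ranks))]
    sizes = []
    for m, c in enumerate(ge, start=1):
        sizes += [m] * (c - (ge[m] if m < len(ge) else 0))
    return sorted(sizes, reverse=True)

def jordan(lam, p):
    return lam * np.eye(p) + np.diag(np.ones(p - 1), 1)

S = np.block([[jordan(1.0, 3), np.zeros((3, 2))],
              [np.zeros((2, 3)), jordan(1.0, 2)]])
print(jordan_block_sizes(S, 1.0))    # [3, 2]
```

This gives a quick, transformation-free way to check which of the case analyses below applies to a concrete S.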
With this in mind, we start by choosing an invertible matrix

Y = [ I_k  y  W ; 0  1  z^* ; 0  0  I_k ] ∈ C^{(2k+1)×(2k+1)},

where

C̃ = [c̃_{i,j}]_{k×k} = W J_k(λ) − J_k(λ) W + C + y b^* − a z^* + (J_k(λ) − λ I_k) y z^*.

Specify the vectors a = [a_i]_{k×1} and b = [b_i]_{k×1} by their components, and let the matrix D = [d_{i,j}]_{k×k} = C + y b^* − a z^* + (J_k(λ) − λ I_k) y z^*.
Notice, first, that if we choose

y = [y_1  a_1  a_2  ...  a_{k−1}]^T  and  z = [b_2  b_3  ...  b_k  z_k]^T,
where W = [w_{i,j}]_{k×k}. We then eliminate the elements in C̃ by choosing

w_{i+1,1} = d_{i,1},  1 ≤ i ≤ k − 1,
w_{i+1,j} = w_{i,j−1} + d_{i,j},  1 ≤ i ≤ k − 1,  2 ≤ j ≤ k.

From formulae (18) and (19), it follows that by choosing appropriate vectors y, z and a matrix W, we can simplify the matrix S so that

S ∼ S̃ := [ J_k(λ)  a  C̃ ; 0  λ  b^* ; 0  0  J_k(λ) ],

where

f_1 = e_{i+1} ∈ C^k,  f_j = e_{i+j} + (1/b_1) Σ_{s=1}^{j−1} α_s e_{j−s} ∈ C^k,  2 ≤ j ≤ k − i,

α_j = −(e_k^* C̃ f_j)/a_k ∈ C,  1 ≤ j ≤ k − i.
Case 1. b_1 ≠ 0, a_k ≠ 0. Then S ∼ J_{2k+1} with the cycle of generalized eigenvectors taken as

γ = { [b_1 a_k e_1 ; 0 ; 0], ..., [b_1 a_k e_k ; 0 ; 0], [0 ; b_1 ; 0], [0 ; 0 ; e_1], ..., [0 ; 0 ; e_k] }.
Case 2. b_1 = 0, a_k ≠ 0.

2a. If c_{k,1} = ... = c_{k,k−1} = 0 and c_{k,k} ≠ 0, then S ∼ J_{k+1} ⊕ J_k with the cycles of generalized eigenvectors taken as

γ_1 = { [a_k e_1 ; 0 ; 0], ..., [a_k e_k ; 0 ; 0], [0 ; 1 ; 0] },

γ_2 = { [0 ; 0 ; e_1], ..., [0 ; 0 ; e_{k−1}], [0 ; −c_{k,k}/a_k ; e_k] }.
2b. If c_{k,1} = ... = c_{k,i} = 0 and c_{k,i+1} ≠ 0, for 0 ≤ i ≤ k − 2, then S ∼ J_{2k−i} ⊕ J_{i+1}, with one cycle of generalized eigenvectors taken as γ_1 = { [m_1 ; n_1], ..., [m_{2k−i} ; n_{2k−i}] } and a second cycle γ_2 of length i + 1, where

m_j = { c_{k,i+1} e_j,  1 ≤ j ≤ k ;  0_{k×1},  k + 1 ≤ j ≤ 2k − i },

n_j = { 0_{(k+1)×1},  1 ≤ j < k − i + 1 ;
        [0 ; Σ_{s=0}^{j−k+i−1} α_s e_{j−k+i−s}],  k − i + 1 ≤ j < 2k − 2i − 1 ;
        [0 ; Σ_{s=0}^{k−i−2} α_s e_{j−k+i−s}],  2k − 2i − 1 ≤ j ≤ 2k − i − 1 ;
        [ −(1/a_k) Σ_{s=0}^{k−i−2} α_s c_{k,k−s} ; Σ_{s=0}^{k−i−2} α_s e_{k−s} ],  j = 2k − i },

α_0 = 1,  α_j = −(1/c_{k,i+1}) Σ_{s=0}^{j−1} α_s c_{k,i+2+s} ∈ C,  1 ≤ j ≤ k − i − 1.
2c. If c_{k,1} = ... = c_{k,k} = 0, then S ∼ J_{k+1} ⊕ J_k with the cycles of generalized eigenvectors taken as

γ_1 = { [a_k e_1 ; 0 ; 0], ..., [a_k e_k ; 0 ; 0], [0 ; 1 ; 0] },

γ_2 = { [0 ; 0 ; e_1], ..., [0 ; 0 ; e_k] }.
Case 3. b_1 ≠ 0, a_k = 0.

3a. If c_{k,1} = ... = c_{k,i} = 0 and c_{k,i+1} ≠ 0, for 0 ≤ i ≤ k − 1, then S ∼ J_{2k−i} ⊕ J_{i+1}, with the cycles of generalized eigenvectors taken as

γ_1 = { [m_1 ; n_1], ..., [m_{2k−i} ; n_{2k−i}] },

γ_2 = { [0 ; b_1 ; 0], [0 ; 0 ; e_1], ..., [0 ; 0 ; e_i] },

where

m_j = { c_{k,i+1} e_j,  1 ≤ j ≤ k ;  0_{k×1},  k + 1 ≤ j ≤ 2k − i },

n_j = { 0_{(k+1)×1},  1 ≤ j < k − i ;
        [b_1 ; 0],  j = k − i ;
        [b_1 α_{j−k+i} ; Σ_{s=0}^{j−k+i−1} α_s e_{j−k+i−s}],  k − i + 1 ≤ j < 2k − 2i ;
        [0 ; Σ_{s=0}^{k−i−1} α_s e_{j−k+i−s}],  2k − 2i ≤ j ≤ 2k − i },

α_0 = 1,  α_j = −(1/c_{k,i+1}) Σ_{s=0}^{j−1} α_s c_{k,i+2+s} ∈ C,  1 ≤ j ≤ k − i − 1.

Here, if k = i + 1, we should ignore the first case and replace the third case defined in n_j by the fourth one directly.
3b. If c_{k,1} = ... = c_{k,k} = 0, then S ∼ J_{k+1} ⊕ J_k with the cycles of generalized eigenvectors taken as

γ_1 = { [0 ; b_1 ; 0], [0 ; 0 ; e_1], ..., [0 ; 0 ; e_k] },

γ_2 = { [e_1 ; 0 ; 0], ..., [e_k ; 0 ; 0] }.
Case 4. b_1 = 0, a_k = 0.

4a. If c_{k,1} ≠ 0, then S ∼ J_{2k} ⊕ J_1 with the cycles of generalized eigenvectors taken as

γ_1 = { [c_{k,1} e_1 ; 0 ; 0], ..., [c_{k,1} e_k ; 0 ; 0], [0 ; 0 ; β_1 e_1], ..., [0 ; 0 ; Σ_{s=1}^{k} β_s e_{k−s+1}] },

γ_2 = { [0 ; 1 ; 0] },

where

β_1 = 1,  β_j = −(1/c_{k,1}) Σ_{s=1}^{j−1} β_s c_{k,j−s+1} ∈ C,  for j = 2, ..., k.

4b. If c_{k,1} = ... = c_{k,i} = 0 and c_{k,i+1} ≠ 0, for 1 ≤ i ≤ k − 1, the corresponding cycles are built with the coefficients

α_0 = 1,  α_j = −(1/c_{k,i+1}) Σ_{s=0}^{j−1} α_s c_{k,i+2+s} ∈ C,  for j = 1, ..., k − i − 1.
Here, if k = i + 1, we should ignore the second case defined in n_j and use the third one directly.
The methods developed in this paper shift an eigenvalue with algebraic multiplicity greater than one, in such a way that the first half of the corresponding generalized eigenvectors is kept unchanged. From the point of view of applications, the approach has the advantage that one can apply this shift technique to speed up or stabilize a given numerical algorithm.
It is true that there are many different kinds of eigenvalue shift problems arising in a wide range of applications in science and engineering. In most cases, we are required to shift some (but not all) eigenvalues of a given Jordan block, to change multiple eigenvalues simultaneously, or to replace complex conjugate eigenvalues of a real matrix. Such questions are under investigation and will be reported elsewhere.
Acknowledgement
The authors wish to thank Professor Wen-Wei Lin (National Chiao Tung
University) for many interesting and valuable suggestions on the manuscript.
This research work is partially supported by the National Science Council and
the National Center for Theoretical Sciences in Taiwan.
References
[1] A. Brauer, Limits for the characteristic roots of a matrix. IV. Applications to stochastic matrices, Duke Math. J. 19 (1952) 75–91.
[2] G. H. Golub, C. F. Van Loan, Matrix computations, 3rd Edition, Johns
Hopkins Studies in the Mathematical Sciences, Johns Hopkins University
Press, Baltimore, MD, 1996.
doi:10.1016/j.cam.2007.05.015.
URL http://dx.doi.org/10.1016/j.cam.2007.05.015
[14] C.-H. Guo, Efficient methods for solving a nonsymmetric algebraic Riccati equation arising in stochastic fluid models, J. Comput. Appl. Math. 192 (2) (2006) 353–373. doi:10.1016/j.cam.2005.05.012.
URL http://dx.doi.org/10.1016/j.cam.2005.05.012