
The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix

Chun-Yueh Chiang1
Center for General Education, National Formosa University, Huwei 632, Taiwan.

Matthew M. Lin2,
Department of Mathematics, National Chung Cheng University, Chia-Yi 621, Taiwan.

Abstract
The eigenvalue shift technique is a well-known and fundamental tool for matrix computations. Applications include the search for eigeninformation, the acceleration of numerical algorithms, and the study of Google's PageRank. The shift strategy arises from the concept investigated by Brauer [1] for changing the value of one eigenvalue of a matrix to a desired one, while keeping the remaining eigenvalues and the original eigenvectors unchanged. The idea of shifting distinct eigenvalues generalizes easily from Brauer's idea. However, shifting an eigenvalue of multiplicity greater than one is a challenging issue and worthy of investigation. In this work, we propose a new way of updating an eigenvalue of multiplicity greater than one and thoroughly analyze the corresponding Jordan canonical form after the update procedure.
Keywords: eigenvalue shift technique, Jordan canonical form, rank-k updates.

1. Introduction

The eigenvalue shift technique is a well-established method in science and engineering. It arises from the study of eigenvalue computations. As is well known, the power method is the most common and easiest algorithm for finding the dominant eigenvalue of a given matrix A ∈ C^{n×n}, but it is impracticable for locating one specific eigenvalue of A. In order to compute a specific eigenvalue, we need to apply the power method to the inverse of the shifted matrix (A − λI_n), with an appropriately chosen shift value λ ∈ C. Here, I_n is the n × n identity matrix. This is the well-known shifted inverse power method [2]. Early applications of eigenvalue shift techniques focused on the stability analysis of dynamical systems [3] and on the computation of roots of linear or nonlinear equations by preconditioning approaches [4]. More recently, eigenvalue shift approaches have been widely applied to the study of inverse eigenvalue problems for reconstructing a structured model from prescribed spectral data [5, 6, 7, 8], to the structure-preserving doubling algorithm for solving nonsymmetric algebraic Riccati equations [9, 10], and to Google's PageRank problem [11, 12]. Note that the common goal of the shift techniques in the applications stated above is to replace some unwanted eigenvalues so that the stability or acceleration of the prescribed algorithms can be obtained. Hence, a natural idea, and the main contribution of our work, is to propose a method for updating an eigenvalue of a given matrix and to provide the precise Jordan structure of the updated matrix.

∗Corresponding author
Email addresses: chiang@nfu.edu.tw (Chun-Yueh Chiang), mlin@math.ccu.edu.tw (Matthew M. Lin)
1The first author was supported by the National Science Council of Taiwan under grant NSC100-2115-M-150-001.
2The second author was supported by the National Science Council of Taiwan under grant NSC99-2115-M-194-010-MY2.

Preprint submitted to Elsevier May 30, 2012
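The shifted inverse power method mentioned above can be sketched in a few lines. The following illustration (hypothetical data, using NumPy; not from the paper) recovers the eigenvalue of A closest to the chosen shift:

```python
import numpy as np

def shifted_inverse_power(A, mu, iters=200):
    """Power iteration applied to (A - mu*I)^{-1}: the iterates converge to
    an eigenvector for the eigenvalue of A closest to the shift mu."""
    n = A.shape[0]
    x = np.ones(n)
    M = A - mu * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(M, x)      # one inverse-iteration step
        x /= np.linalg.norm(x)
    # the Rayleigh quotient recovers the eigenvalue of A itself
    return x @ A @ x, x

A = np.diag([1.0, 3.0, 10.0])               # toy example, spectrum {1, 3, 10}
lam, _ = shifted_inverse_power(A, mu=2.9)   # converges to the eigenvalue 3
```

With the shift 2.9, the iteration locks onto the eigenvalue 3 rather than the dominant eigenvalue 10, which is precisely the point of shifting.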
For a matrix A ∈ C^{n×n} and a number μ, if (λ, v) is an eigenpair of A, then the matrices A + μI_n and μA have the same eigenvector v, corresponding to the eigenvalues λ + μ and μλ, respectively. However, these two processes translate or scale all eigenvalues of A. Our aim here is to change one particular eigenvalue; the approach is enlightened by the following important result given by Brauer [1].

Theorem 1.1 (Brauer). Let A be a matrix with Av = λ_0 v for some nonzero vector v. If r is a vector such that r^* v = 1, then for any scalar λ_1, the eigenvalues of the matrix

    Â = A + (λ_1 − λ_0) v r^*

consist of those of A, except that one eigenvalue λ_0 of A is replaced by λ_1. Moreover, the eigenvector v is unchanged, that is, Âv = λ_1 v.
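Theorem 1.1 is easy to verify numerically. The sketch below (an illustrative 3 × 3 example, not from the paper) replaces the eigenvalue λ_0 = 2 by λ_1 = 9 while leaving the rest of the spectrum and the eigenvector v intact:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 5.0, 1.0],
              [0.0, 0.0, -1.0]])       # upper triangular: spectrum {2, 5, -1}
lam0, lam1 = 2.0, 9.0
v = np.array([1.0, 0.0, 0.0])          # A v = 2 v
r = np.array([1.0, 0.0, 0.0])          # any r with r^* v = 1 works
Ahat = A + (lam1 - lam0) * np.outer(v, r.conj())   # Brauer's rank-one shift
```

For this data, Âv = 9v and the spectrum of Â becomes {9, 5, −1}.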
To demonstrate our motivation, consider an n × n complex matrix A. Let A = V J V^{-1} be a Jordan matrix decomposition of A, where J is the Jordan normal form of A and V is a matrix consisting of generalized eigenvectors of A. Denote the matrix J by

    J = [ J_k(λ)   0
          0        J_2 ],

where J_k(λ) is the k × k Jordan block corresponding to the eigenvalue λ, given by

    J_k(λ) = ⎡ λ  1         ⎤
             ⎢    λ  ⋱      ⎥
             ⎢       ⋱   1  ⎥  ∈ C^{k×k},
             ⎣           λ  ⎦

and J_2 is an (n − k) × (n − k) matrix composed of a finite direct sum of Jordan blocks. Then, partition the matrices V and V^{-1} conformally as

    V = [ V_1  V_2 ],   (V^{-1})^* = [ V_3  V_4 ],
with V_1, V_3 ∈ C^{n×k} and V_2, V_4 ∈ C^{n×(n−k)}. A natural idea for updating the eigenvalue λ appearing in the top left corner of the matrix J, while keeping V and V^{-1} unchanged, is to add a rank-k matrix ΔA = V_1 ΔJ_1 V_3^* to the original matrix A so that

    A + ΔA = V [ J_k(λ) + ΔJ_1   0
                 0               J_2 ] V^{-1}

has the desired eigenvalue. Here, ΔJ_1 is a particular k × k matrix which makes J_k(λ) + ΔJ_1 a new Jordan matrix. Observe that the updated matrix A + ΔA preserves the structures of the matrices V and V^{-1}, but changes the unwanted eigenvalue via the new Jordan matrix J_k(λ) + ΔJ_1. In other words, if we know the Jordan matrix decomposition a priori, we can update the eigenvalue of A without difficulty. Note that V_3 and V_1 are matrices composed of the generalized left and right eigenvectors, respectively, and V_3^* V_1 = I_k. In practice, given arbitrary generalized left and right eigenvectors corresponding to J_k(λ), the condition V_3^* V_1 = I_k does not hold in general. Hence, the eigenvalue shift approach stated above cannot be applied directly.
In this paper, we investigate an eigenvalue shift technique for updating an eigenvalue of the matrix A, provided that partial generalized left and right eigenvectors are given. We show that after the shift the first half of the generalized eigenvectors are kept the same. Indeed, the invariance of the first half of the generalized eigenvectors is essential for finding the stabilizing solution of an algebraic Riccati equation via the so-called Schur method or invariant subspace method [13, 14]. To advance our research we organize this paper as follows. In Section 2, several useful features of the generalized left and right eigenvectors are discussed. In particular, we investigate the principle of generalized biorthogonality of generalized eigenvectors. This principle is then applied to the study of the eigenvalue shift technique in Sections 3 and 4. Finally, conclusions and some open problems are given in Section 5.

2. The Principle of Generalized Biorthogonality

For a given n × n matrix A, let σ(A) be the set of all eigenvalues of A. We say that two vectors u and v in C^n are orthogonal if u^* v = 0. In our study, we are seeking the orthogonality of eigenvectors of a given matrix. This feature is known as the principle of biorthogonality and is discussed in [15, Theorem 1.4.7] as follows:

Theorem 2.1. If A ∈ C^{n×n} and if λ_1, λ_2 ∈ σ(A), with λ_1 ≠ λ_2, then any left eigenvector of A corresponding to λ_1 is orthogonal to any right eigenvector of A corresponding to λ_2.

Theorem 2.1 gives the principle of biorthogonality with respect to two distinct eigenvalues. Next, we want to extend this feature to generalized left and right eigenvectors.

Theorem 2.2 (Generalized Biorthogonality Property). Let A ∈ C^{n×n} and let λ_1, λ_2 ∈ σ(A). Suppose {u_i}_{i=1}^{p} and {v_i}_{i=1}^{q} are generalized left and right eigenvectors corresponding to the Jordan blocks J_p(λ_1) and J_q(λ_2), respectively. Then

(a) If λ_1 ≠ λ_2,

    u_i^* v_j = 0,   1 ≤ i ≤ p, 1 ≤ j ≤ q.

(b) If λ_1 = λ_2,

    u_i^* v_j = u_{i−1}^* v_{j+1},   2 ≤ i ≤ p, 1 ≤ j ≤ q − 1,   (1a)

    u_i^* v_j = 0,   2 ≤ i + j ≤ max{p, q}.   (1b)

Proof. Let U and V be two matrices defined by

    U = [ u_1 … u_p ] ∈ C^{n×p},   V = [ v_1 … v_q ] ∈ C^{n×q}.

By the definitions of {u_i}_{i=1}^{p} and {v_i}_{i=1}^{q}, we have

    U^* A = J_p(λ_1)^T U^*,   A V = V J_q(λ_2).

That is,

    U^* A V = (U^* V) J_q(λ_2) = J_p(λ_1)^T (U^* V).   (2)

Define the components x_{i,j} = u_i^* v_j for i = 1, …, p and j = 1, …, q, together with the boundary values x_{i,0} = 0 and x_{0,j} = 0. Then (2) implies that

    (λ_1 − λ_2) x_{i,j} = x_{i,j−1} − x_{i−1,j},   1 ≤ i ≤ p, 1 ≤ j ≤ q.   (3)

It follows from (3) and the boundary values of x_{i,j} that if λ_1 ≠ λ_2, then x_{i,j} = 0 for 1 ≤ i ≤ p and 1 ≤ j ≤ q. This proves (a).
By (3), we see that if λ_1 = λ_2, then x_{i−1,j} = x_{i,j−1} for 1 ≤ i ≤ p and 1 ≤ j ≤ q. It follows that x_{i,j} = 0 for 2 ≤ i + j ≤ max{p, q}, and that the entries x_{i,j} coincide along the anti-diagonals i + j = s for max{p, q} + 1 ≤ s ≤ p + q. This completes the proof of (b). □

Note that the values x_{i,j} defined in the proof of Theorem 2.2 imply that if λ_1 = λ_2, the matrix X = [x_{i,j}]_{p×q} is in fact a lower triangular Hankel matrix. Also, by (1a), we have the following useful results.

Corollary 2.1. Let A ∈ C^{n×n} and let λ ∈ σ(A). Suppose {u_i}_{i=1}^{p} and {v_i}_{i=1}^{q} are generalized left and right eigenvectors corresponding to the Jordan blocks J_p(λ) and J_q(λ), respectively. If p and q are even, then

    u_i^* v_j = 0,   1 ≤ i ≤ p/2, 1 ≤ j ≤ q/2;   (4)

if p and q are odd, then

    u_i^* v_j = 0,   u_{(p+1)/2}^* v_j = 0,   u_i^* v_{(q+1)/2} = 0,   1 ≤ i ≤ (p−1)/2, 1 ≤ j ≤ (q−1)/2.   (5)

Note that Corollary 2.1 provides only necessary conditions for two generalized left and right eigenvectors to be orthogonal. It is possible for two generalized eigenvectors to be orthogonal even if they are not covered by the constraints given in (4) and (5). This phenomenon can be observed in the following two examples. We first describe all possible types of generalized eigenvectors of a Jordan matrix with one and with two Jordan blocks, respectively. We then present two extreme cases for each Jordan matrix under discussion: one with the smallest number of orthogonal generalized left and right eigenvector pairs, and the other with the largest number.

Example 2.1. Let A = J_{2k}(λ). Then all of the generalized left and right eigenvectors {u_i}_{i=1}^{2k} ⊂ C^{2k} and {v_i}_{i=1}^{2k} ⊂ C^{2k}, respectively, can be written as

    u_i = Σ_{j=1}^{i} a_{i−j+1} e_{2k−j+1},   v_i = Σ_{j=1}^{i} b_{i−j+1} e_j,   1 ≤ i ≤ 2k,

where the a_i and b_i are arbitrary complex numbers with a_1 b_1 ≠ 0. Moreover, if a_i = b_i = 1 for all i, then

    u_i^* v_j = 0,   1 ≤ i, j ≤ k,
    u_i^* v_j ≠ 0,   k + 1 ≤ i, j ≤ 2k,

and if a_1 = b_1 = 1 and a_i = b_i = 0 for all i > 1, then

    u_i^* v_j ≠ 0,   i + j = 2k + 1,
    u_i^* v_j = 0,   i + j ≠ 2k + 1.
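The first extreme case of Example 2.1 can be verified directly. The following sketch builds the chains with a_i = b_i = 1 for k = 3 and tabulates all inner products (an illustration only):

```python
import numpy as np

k = 3
n = 2 * k
e = np.eye(n)
# chains of Example 2.1 with a_i = b_i = 1 for all i
u = [e[n - 1 - np.arange(i + 1)].sum(axis=0) for i in range(n)]  # u_1..u_2k
v = [e[np.arange(i + 1)].sum(axis=0) for i in range(n)]          # v_1..v_2k
G = np.array([[u[i] @ v[j] for j in range(n)] for i in range(n)])
```

Here G[i, j] counts the overlap of the supports of the two vectors and vanishes precisely when (i+1) + (j+1) ≤ 2k + 1, so the upper-left k × k block of G is zero while the lower-right k × k block has no zero entry.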

The second example demonstrates the orthogonality properties of the generalized eigenvectors between two Jordan blocks.

Example 2.2. Let A = J_k(λ) ⊕ J_k(λ). Then all of the generalized left eigenvectors of the first Jordan block J_k(λ) (the upper left corner) and the generalized right eigenvectors of the second Jordan block J_k(λ) (the lower right corner) can be written as

    u_i = Σ_{j=1}^{i} (a_{i−j+1} e_{k−j+1} + b_{i−j+1} e_{2k−j+1}),
    v_i = Σ_{j=1}^{i} (c_{i−j+1} e_j + d_{i−j+1} e_{k+j}),   1 ≤ i ≤ k,

respectively, where the a_i, b_i, c_i, and d_i are arbitrary complex numbers with (|a_1|^2 + |b_1|^2)(|c_1|^2 + |d_1|^2) > 0. Moreover, if a_i = b_i = c_i = d_i = 1 for all i, then for all 1 ≤ i, j ≤ k

    u_i^* v_j = 0,   i + j < k + 1,
    u_i^* v_j ≠ 0,   i + j ≥ k + 1,

and if a_i = d_i = 1 and b_i = c_i = 0 for all i, then

    u_i^* v_j = 0,   1 ≤ i, j ≤ k.

Given a matrix A ∈ C^{n×n}, let us close this section with a study of the action of the resolvent operator (A − μI_n)^{-1} on the generalized left and right eigenvectors of A. Evidently, the orthogonality between two generalized left and right eigenvectors is influenced by this mapping. This influence will play a crucial role in proposing an eigenvalue shift technique later on.
Theorem 2.3. Let A be a matrix in C^{n×n} and let λ_0 ∈ σ(A). Suppose {u_i}_{i=1}^{p} and {v_i}_{i=1}^{p} are generalized left and right eigenvectors corresponding to J_p(λ_0). Let μ be a complex number with μ ∉ σ(A). Then

(a) For 1 ≤ i ≤ p,

    (A − μI_n)^{-1} v_i = Σ_{j=1}^{i} (−1)^{i−j} v_j / (λ_0 − μ)^{i−j+1},

    u_i^* (A − μI_n)^{-1} = Σ_{j=1}^{i} (−1)^{i−j} u_j^* / (λ_0 − μ)^{i−j+1}.

(b) For i + j ≤ p and i, j ≥ 1,   u_i^* (A − μI_n)^{-1} v_j = 0.

Proof. Let μ ∈ C with μ ∉ σ(A). Since (A − μI_n)v_1 = (λ_0 − μ)v_1, we have

    (A − μI_n)^{-1} v_1 = v_1 / (λ_0 − μ).

Also, (A − μI_n)v_2 = (λ_0 − μ)v_2 + v_1. It follows that

    (A − μI_n)^{-1} v_2 = (v_2 − (A − μI_n)^{-1} v_1) / (λ_0 − μ) = Σ_{j=1}^{2} (−1)^{2−j} v_j / (λ_0 − μ)^{2−j+1}.

Continuing in this way, a similar argument applies without difficulty to the remaining generalized left and right eigenvectors, and (a) follows.
Now apply Theorem 2.2 together with part (a). It follows that u_i^* (A − μI_n)^{-1} v_j = 0 for i + j ≤ p and i, j ≥ 1. This establishes (b). □
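Theorem 2.3(a) can be checked on a single Jordan block, for which v_i = e_i is a right Jordan chain. The following numerical sketch uses the sample values p = 4, λ_0 = 2, μ = 0.5 (an illustration only):

```python
import numpy as np

p, lam0, mu = 4, 2.0, 0.5
A = lam0 * np.eye(p) + np.diag(np.ones(p - 1), 1)   # J_p(lam0)
I = np.eye(p)

errs = []
for i in range(1, p + 1):
    lhs = np.linalg.solve(A - mu * I, I[:, i - 1])  # (A - mu*I)^{-1} v_i
    rhs = sum((-1.0) ** (i - j) * I[:, j - 1] / (lam0 - mu) ** (i - j + 1)
              for j in range(1, i + 1))
    errs.append(np.linalg.norm(lhs - rhs))
```

The residuals collected in errs are at the level of machine precision, confirming the resolvent formula term by term.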

Now we have enough tools to analyze the eigenvalue shift problem. We first establish a result for eigenvalues with even algebraic multiplicity and then generalize it to eigenvalues with odd algebraic multiplicity. We show how to shift a desired eigenvalue of a given matrix without changing the remaining ones.

3. Eigenvalues with Even Algebraic Multiplicity

In this section, we propose a shift technique that moves an eigenvalue with even algebraic multiplicity to a desired value. We show that, with our approach, part of the generalized eigenvectors can be kept unchanged. We also investigate all possible Jordan structures after the shift at the end of this section.

Theorem 3.1. Let A ∈ C^{n×n}, let λ_0 ∈ σ(A) have algebraic multiplicity 2k and geometric multiplicity 1, and let {u_i}_{i=1}^{2k} and {v_i}_{i=1}^{2k} be the left and right Jordan chains for λ_0. Define two matrices

    U := [ u_1 u_2 … u_k ],   V := [ v_1 v_2 … v_k ].   (6)

If the matrices R, L ∈ C^{n×k} are, respectively, a right inverse of V and a left inverse of U satisfying

    U^* L = R^* V = I_k,   (7)

then the shifted matrix

    Â := A + (λ_1 − λ_0) R_1 R_2^*,   (8)

where R_1 := [ V  L ] and R_2 := [ R  U ], has the following properties:

(a) The eigenvalues of Â consist of those of A, except that the eigenvalue λ_0 of A is replaced by λ_1.

(b) Âv_1 = λ_1 v_1, u_1^* Â = λ_1 u_1^*, Âv_{i+1} = λ_1 v_{i+1} + v_i, and u_{i+1}^* Â = λ_1 u_{i+1}^* + u_i^*, for i = 1, …, k − 1. That is,

    ÂV = V J_k(λ_1),   U^* Â = J_k(λ_1)^T U^*.

Proof. The proof is obtained by considering the characteristic polynomial of Â: if μ ∉ σ(A), then

    det(Â − μI_n) = det(A − μI_n) det( I_n + (λ_1 − λ_0) R_1 R_2^* (A − μI_n)^{-1} )   (9)
                  = det(A − μI_n) det( I_{2k} + (λ_1 − λ_0) R_2^* (A − μI_n)^{-1} R_1 )
                  = det(A − μI_n) ( (λ_1 − μ)/(λ_0 − μ) )^{2k}   (by Theorems 2.2 and 2.3).

Since det(A − μI_n) is a polynomial with a finite number of zeros, for μ ∈ σ(A) we may choose a small perturbation ε > 0 such that det(A − (μ + ε)I_n) ≠ 0, that is, A − (μ + ε)I_n is invertible. This implies that if μ ∈ σ(A), we can consider the characteristic polynomial det(Â − (μ + ε)I_n), so that

    det(Â − (μ + ε)I_n) = det(A − (μ + ε)I_n) ( (λ_1 − (μ + ε))/(λ_0 − (μ + ε)) )^{2k}.

Observe that both sides of the above equation are continuous functions of ε. By the so-called continuity argument, letting ε → 0 gives

    det(Â − μI_n) = det(A − μI_n) ( (λ_1 − μ)/(λ_0 − μ) )^{2k}.

It follows that (a) holds.

To prove (b), we see by Theorem 2.2 that

    Âv_1 = Av_1 + (λ_1 − λ_0) R_1 R_2^* v_1 = λ_0 v_1 + (λ_1 − λ_0) v_1 = λ_1 v_1,
    Âv_{i+1} = Av_{i+1} + (λ_1 − λ_0) R_1 R_2^* v_{i+1} = λ_1 v_{i+1} + v_i,   i = 1, …, k − 1.

The same approach yields u_1^* Â = λ_1 u_1^* and u_{i+1}^* Â = λ_1 u_{i+1}^* + u_i^*, for i = 1, …, k − 1. □
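The construction of Theorem 3.1 can be exercised numerically. In the sketch below (hypothetical data with n = 2k = 4), the Jordan chains and the inverses R and L are read off from a similarity P, and the shift moves λ_0 = 2 to λ_1 = 7:

```python
import numpy as np

k, lam0, lam1 = 2, 2.0, 7.0
J0 = lam0 * np.eye(4) + np.diag(np.ones(3), 1)   # J_4(lam0)
P = np.tril(np.ones((4, 4)))                     # a hypothetical similarity
Pinv = np.linalg.inv(P)
A = P @ J0 @ Pinv

V = P[:, :k]                   # right chain v_1, v_2
U = Pinv[[3, 2], :].conj().T   # left chain: u_1^* = row 4, u_2^* = row 3 of P^{-1}
R = Pinv[:k, :].conj().T       # R^* V = I_k
L = P[:, [3, 2]]               # U^* L = I_k

R1 = np.column_stack([V, L])
R2 = np.column_stack([R, U])
Ahat = A + (lam1 - lam0) * (R1 @ R2.conj().T)    # the shift (8)

J1 = lam1 * np.eye(4) + np.diag(np.ones(3), 1)   # J_4(lam1)
```

For this particular choice of R and L one can verify that P^{-1} Â P = J_4(λ_1): every copy of λ_0 is replaced by λ_1, and Â V = V J_2(λ_1), so the first half of the right chain survives, as claimed in part (b).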

Next, it is natural to ask about the eigenstructure of the updated matrix Â defined in Theorem 3.1. To this end, we use the notation given in Theorem 3.1. Assume without loss of generality that A ∈ C^{2k×2k} and that P = [ v_1 … v_{2k} ] is a nonsingular matrix such that

    P^{-1} A P = J_{2k}(λ_0).

By Theorem 2.2, we know that the matrix product [ U  U_1 ]^* P ∈ C^{2k×2k} is a lower triangular Hankel matrix, where U_1 = [ u_{k+1} … u_{2k} ]. This suggests a way of constructing the matrices U and V in (6), namely

    U = (P^{-1})^* [ 0 ; Q ]   and   V = P [ I_k ; 0 ],

where Q is some nonsingular lower triangular Hankel matrix in C^{k×k} and [ M_1 ; M_2 ] denotes a stacked block column. Also, the right and left inverses in (7) can be written as

    R = (P^{-1})^* [ I_k ; S_1 ],   L = P [ S_2 ; Q^{-*} ],

where S_1 and S_2 are arbitrary k × k matrices. It follows from (8) that

    Â = A + (λ_1 − λ_0)(V R^* + L U^*)
      = P ( J_{2k}(λ_0) + (λ_1 − λ_0)( [ I_k, S_1^* ; 0, 0 ] + [ 0, S_2 Q^* ; 0, I_k ] ) ) P^{-1}
      = P [ J_k(λ_1),  e_k e_1^T + (λ_1 − λ_0)(S_1^* + S_2 Q^*) ; 0,  J_k(λ_1) ] P^{-1}.

That is,

    Â ∼ T := [ J_k(λ_1), C ; 0, J_k(λ_1) ],   (10)

for some matrix C ∈ C^{k×k}. Based on (10), the next two results discuss the possible eigenspace and eigenstructure types of the matrix T (that is, of Â). To facilitate the discussion below, we use {e_i}_{1≤i≤k} to denote the standard basis of R^k from now on.
Lemma 3.1. Let C = [c_{i,j}]_{k×k} be a matrix in C^{k×k} and set N_k := (λI_k − J_k(λ))^T. If T is the matrix given by

    T = [ J_k(λ), C ; 0, J_k(λ) ],   (11)

then T has the eigenspace

    E_λ = span{ [e_1; 0], [0; e_1] },          if c_{k,1} = 0 and Ce_1 = 0,
    E_λ = span{ [e_1; 0], [N_k Ce_1; e_1] },   if c_{k,1} = 0 and Ce_1 ≠ 0,
    E_λ = span{ [e_1; 0] },                    if c_{k,1} ≠ 0,

corresponding to the eigenvalue λ.

Proof. Given two vectors x_1 and x_2 in C^{k×1}, let v = [x_1; x_2] be an eigenvector of T with respect to λ. It follows that Tv = λv or, equivalently,

    (J_k(λ) − λI_k) x_1 = −C x_2,   (12)
    (J_k(λ) − λI_k) x_2 = 0.

The possible types of eigenspace are then obtained by directly solving (12). □

Lemma 3.1 suggests a characterization of all possible Jordan canonical forms of T. Note that if x is a generalized eigenvector of T associated with a Jordan block J_p(λ), then a corresponding Jordan canonical basis γ, a cycle of generalized eigenvectors of T, can be expressed as γ = {v_1, v_2, …, v_p}, where

    v_i = (T − λI_{2k})^{p−i} x for i < p,   and   v_p = x.

In this sense, we classify the possible eigenstructure types, using the notation defined in Lemma 3.1, as follows:

Case 1. c_{k,1} = 0, Ce_1 = 0. Then T ≅ J_k(λ) ⊕ J_k(λ), with two cycles of generalized eigenvectors taken as

    γ_1 = { [e_1; 0], …, [e_k; 0] },
    γ_2 = { [0; e_1], [N_k Ce_2; e_2], …, [Σ_{i=2}^{k} N_k^{k−i+1} Ce_i; e_k] }.

Case 2. c_{k,1} = 0, Ce_1 ≠ 0. Then T ≅ J_k(λ) ⊕ J_k(λ), with two cycles of generalized eigenvectors taken as

    γ_1 = { [e_1; 0], …, [e_k; 0] },
    γ_2 = { [N_k Ce_1; e_1], …, [Σ_{i=1}^{k} N_k^{k−i+1} Ce_i; e_k] }.

Case 3. c_{k,1} ≠ 0. Then T ≅ J_{2k}(λ), with the cycle of generalized eigenvectors taken as

    γ = { [c_{k,1} e_1; 0], …, [c_{k,1} e_k; 0],
          [N_k Ce_1; e_1], …, [Σ_{i=1}^{k} N_k^{k−i+1} Ce_i; e_k] }.

Here, we have identied all possible eigenstructure types of the matrix T .


This observation leads directly to the following result.

Theorem 3.2. Let C be a matrix in Ckk . Then, the Jordan canonical form
b defined by equation (8), is either Jk () Jk () or J2k ().
of the matrix A,

Note that Theorem 3.2 also implies that the geometric multiplicity of the matrix T defined in (11), and hence of Â, is at most two. On the other hand, an analogous approach can seemingly be used to derive a shift technique and to characterize the corresponding eigenstructure for an eigenvalue with odd algebraic multiplicity. The analysis of the odd case is, however, much more complicated, owing to the orthogonality property caused by the middle eigenvector (see Theorem 2.2), and a different approach is needed for our discussion.
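Theorem 3.2 can be probed numerically in the two unambiguous corners, C = 0 and c_{k,1} ≠ 0. The sketch below (an illustration with k = 3) reads the Jordan structure of T off the ranks of the powers of T − λI:

```python
import numpy as np

k = 3
N = np.diag(np.ones(k - 1), 1)     # J_k(lam) - lam*I_k, the nilpotent part
Z = np.zeros((k, k))

def nilpotent_ranks(C):
    """Ranks of (T - lam*I)^j, j = 1..2k, for T = [[J_k, C], [0, J_k]]."""
    M = np.block([[N, C], [Z, N]])
    return [int(np.linalg.matrix_rank(np.linalg.matrix_power(M, j)))
            for j in range(1, 2 * k + 1)]

C_corner = np.zeros((k, k))
C_corner[k - 1, 0] = 1.0           # c_{k,1} != 0

ranks_pair = nilpotent_ranks(np.zeros((k, k)))   # C = 0
ranks_single = nilpotent_ranks(C_corner)
```

With C = 0 the ranks are [4, 2, 0, 0, 0, 0], giving two blocks J_3(λ) ⊕ J_3(λ); with c_{k,1} ≠ 0 they are [5, 4, 3, 2, 1, 0], the signature of a single block J_6(λ), matching Case 3 above.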

4. Eigenvalues with Odd Algebraic Multiplicity

Let us now consider an eigenvalue with odd algebraic multiplicity. From (1a), we know that if p = q and p, q are odd integers, then the inner product u_{(p+1)/2}^* v_{(p+1)/2} is a constant; it is not clear a priori whether this product is zero or not. Since [ u_1 … u_p ] and [ v_1 … v_p ] are two nonsingular matrices, it follows that the inner product u_{(p+1)/2}^* v_{(p+1)/2} ≠ 0. The following result shows how to change an eigenvalue with odd algebraic multiplicity.

Theorem 4.1. Let A ∈ C^{n×n}, let λ_0 be an eigenvalue of A having algebraic multiplicity 2k + 1 and geometric multiplicity 1, and let {u_i}_{i=1}^{2k+1} and {v_i}_{i=1}^{2k+1} be the left and right Jordan chains for λ_0. Define two matrices

    U := [ u_1 u_2 … u_k ],   V := [ v_1 v_2 … v_k ]   (13)

and a vector r = u_{k+1} / (v_{k+1}^* u_{k+1}). If the matrices R, L ∈ C^{n×k} are, respectively, a right inverse of V and a left inverse of U satisfying

    R^* V = U^* L = I_k,   (14)

then the shifted matrix

    Â := A + (λ_1 − λ_0) R_1 R_2^*,   (15)

where R_1 := [ V  v_{k+1}  L ] and R_2 := [ R  r  U ], has the following properties:

(a) The eigenvalues of Â consist of those of A, except that the eigenvalue λ_0 of A is replaced by λ_1.

(b) Âv_1 = λ_1 v_1, u_1^* Â = λ_1 u_1^*, Âv_{i+1} = λ_1 v_{i+1} + v_i, and u_{i+1}^* Â = λ_1 u_{i+1}^* + u_i^*, for i = 1, …, k − 1. That is to say,

    ÂV = V J_k(λ_1),   U^* Â = J_k(λ_1)^T U^*.

Proof. Since

    R_2^* (A − μI)^{-1} R_1 =
        [ R^* (A − μI)^{-1} V   R^* (A − μI)^{-1} v_{k+1}   R^* (A − μI)^{-1} L ;
          0                     r^* (A − μI)^{-1} v_{k+1}   r^* (A − μI)^{-1} L ;
          0                     0                           U^* (A − μI)^{-1} L ],

it follows from Theorems 2.2 and 2.3 that the characteristic polynomial of Â satisfies

    det(Â − μI_n) = det(A − μI_n) det( I_n + (λ_1 − λ_0) R_1 R_2^* (A − μI_n)^{-1} )
                  = det(A − μI_n) det( I_{2k+1} + (λ_1 − λ_0) R_2^* (A − μI_n)^{-1} R_1 )
                  = det(A − μI_n) ( (λ_1 − μ)/(λ_0 − μ) )^{2k+1},

if μ ∉ σ(A). For the case μ ∈ σ(A), a small perturbation of μ is applied to make A − μI nonsingular, and a continuity argument like the one in Theorem 3.1 completes the proof of (a).

We see by direct computation and Theorem 2.2 that

    Âv_1 = Av_1 + (λ_1 − λ_0) R_1 R_2^* v_1 = λ_0 v_1 + (λ_1 − λ_0) v_1 = λ_1 v_1,
    Âv_{i+1} = λ_0 v_{i+1} + v_i + (λ_1 − λ_0) R_1 [ e_{i+1} ; 0 ] = λ_1 v_{i+1} + v_i,   i = 1, …, k − 1.

In an analogous way, we obtain u_1^* Â = λ_1 u_1^* and u_{i+1}^* Â = λ_1 u_{i+1}^* + u_i^*, for i = 1, …, k − 1, and (b) follows. □
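The odd-multiplicity shift of Theorem 4.1 can be exercised in the same way as the even case. The sketch below (hypothetical data with n = 2k + 1 = 5 and a similarity P) moves λ_0 = 2 to λ_1 = 7:

```python
import numpy as np

k, n, lam0, lam1 = 2, 5, 2.0, 7.0
J0 = lam0 * np.eye(n) + np.diag(np.ones(n - 1), 1)   # J_5(lam0)
P = np.tril(np.ones((n, n)))
Pinv = np.linalg.inv(P)
A = P @ J0 @ Pinv

V = P[:, :k]                           # v_1, v_2
vk1 = P[:, k]                          # the middle vector v_{k+1}
U = Pinv[[n - 1, n - 2], :].conj().T   # u_1^* = row 5, u_2^* = row 4 of P^{-1}
uk1 = Pinv[k, :].conj()                # the middle left vector u_{k+1}
r = uk1 / (vk1.conj() @ uk1)           # normalization so that r^* v_{k+1} = 1
R = Pinv[:k, :].conj().T               # R^* V = I_k
L = P[:, [n - 1, n - 2]]               # U^* L = I_k

R1 = np.column_stack([V, vk1, L])
R2 = np.column_stack([R, r, U])
Ahat = A + (lam1 - lam0) * (R1 @ R2.conj().T)        # the shift (15)

J1 = lam1 * np.eye(n) + np.diag(np.ones(n - 1), 1)   # J_5(lam1)
```

Again P^{-1} Â P = J_5(λ_1) for this choice of the inverses, so the full Jordan block is shifted while Â V = V J_2(λ_1) as claimed in part (b).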

Theorem 4.1 provides a way to update an eigenvalue of a given Jordan block of odd size. Consequently, we are interested in analyzing the possible eigenstructure types of the shifted matrix Â. We start the discussion in a way analogous to Section 3, using the notation defined in Theorem 4.1. Assume for simplicity that the given matrix A is of size (2k + 1) × (2k + 1) and that P = [ v_1 … v_{2k+1} ] is a nonsingular matrix such that

    P^{-1} A P = J_{2k+1}(λ_0).

It follows that V = P [ I_k ; 0 ]. Set U = (P^{-1})^* [ 0 ; Q ] for some nonsingular lower triangular Hankel matrix Q ∈ C^{k×k}, and let v_{k+1} = P e_{k+1}, where e_{k+1} is the unit vector in R^{2k+1} with a 1 in position k + 1 and 0s elsewhere. Corresponding to formula (14) and the vector r given in Theorem 4.1, we define

    R = (P^{-1})^* [ I_k ; S_1 ],   L = P [ S_2 ; Q^{-*} ],   r = (P^{-1})^* [ 0 ; w/w_1 ],

where S_1 and S_2 are arbitrary matrices in C^{(k+1)×k} and w = [w_i] ∈ C^{k+1} is the vector with u_{k+1} = (P^{-1})^* [ 0 ; w ], so that w_1 ≠ 0 and r^* v_{k+1} = 1. Then the shifted matrix (15) satisfies

    Â = A + (λ_1 − λ_0)(V R^* + v_{k+1} r^* + L U^*)
      = P ( J_{2k+1}(λ_0) + (λ_1 − λ_0)( [ I_k, S_1^* ; 0, 0 ] + e_{k+1} [ 0, w^*/w̄_1 ] + [ 0, S_2 Q^* ; 0, I_k ] ) ) P^{-1}.

A direct computation with this block structure (the first column of S_1^* enters only the middle column, and the last row of S_2 Q^*, together with [w̄_2 … w̄_{k+1}]/w̄_1, enters only the middle row) shows that after the shift the matrix Â is similar to a block upper triangular matrix

    S := [ J_k(λ)  a  C ;
           0       λ  b^* ;
           0       0  J_k(λ) ],   (16)

with λ = λ_1, for some matrix C ∈ C^{k×k} and vectors a, b ∈ C^k. This similarity allows us to discuss the eigeninformation of the shifted matrix Â; that is, the eigeninformation of Â can be studied indirectly through the block upper triangular matrix S of (16).

Lemma 4.1. Let C = [c_{i,j}]_{k×k} be a matrix in C^{k×k} and let a, b be two vectors in C^k. Write b_1 = e_1^* b and a_k = e_k^* a, and let N_k = (λI_k − J_k(λ))^T, as in Lemma 3.1. If S is given by

    S = [ J_k(λ)  a  C ;
          0       λ  b^* ;
          0       0  J_k(λ) ],

then S has the eigenspace E_λ corresponding to the eigenvalue λ given by

    span{ [e_1; 0; 0], [N_k Ce_1; 0; e_1] },
        if b_1 = 0, a_k ≠ 0, c_{k,1} = 0;

    span{ [e_1; 0; 0], [N_k(Ce_1 − (c_{k,1}/a_k) a); −c_{k,1}/a_k; e_1] },
        if b_1 = 0, a_k ≠ 0, c_{k,1} ≠ 0;

    span{ [e_1; 0; 0], [N_k a; 1; 0] },                                        (17)
        if b_1 = 0, a_k = 0, c_{k,1} ≠ 0, or b_1 ≠ 0, a_k = 0;

    span{ [e_1; 0; 0], [N_k(a + Ce_1); 1; e_1], [N_k Ce_1; 0; e_1] },
        if b_1 = 0, a_k = 0, c_{k,1} = 0;

    span{ [e_1; 0; 0] },
        if b_1 ≠ 0, a_k ≠ 0.

Proof. Corresponding to the eigenvalue λ, let v = [x_1; x_2; x_3] be an eigenvector of S, where x_1, x_3 ∈ C^k and x_2 ∈ C. The eigenvector v satisfies Sv = λv or, equivalently,

    (J_k(λ) − λI_k) x_1 = −x_2 a − C x_3,
    0 · x_2 = b^* x_3,
    (J_k(λ) − λI_k) x_3 = 0.

All possible types of eigenspace are then obtained by treating the cases listed in (17) one by one. □

Similarly, we proceed to discuss all possible Jordan canonical forms of the shifted matrix (15) through the special block upper triangular matrix S given in (16). One point should be made clear first: owing to the effects of the vectors a and b in S, it is too complicated to evaluate the generalized eigenvectors in a way analogous to Section 3. We therefore seek to determine the Jordan canonical forms by means of a matrix transformation chosen so that the structure of S is simplified while its eigeninformation is unchanged.
With this in mind, we start by choosing an invertible matrix

    Y = [ I_k  y  W ;
          0    1  z^* ;
          0    0  I_k ] ∈ C^{(2k+1)×(2k+1)}.

It is easy to check that

    Y^{-1} = [ I_k  −y  −W + y z^* ;
               0     1  −z^* ;
               0     0   I_k ].

We then transform the matrix S in (16) by Y and Y^{-1}, obtaining

    Y S Y^{-1} = [ J_k(λ)  a − (J_k(λ) − λI)y  C̃ ;
                   0        λ                  b^* + z^*(J_k(λ) − λI) ;
                   0        0                  J_k(λ) ],

where

    C̃ = [c̃_{i,j}]_{k×k} = W J_k(λ) − J_k(λ) W + C + y b^* − a z^* + (J_k(λ) − λI) y z^*.

Write the vectors a = [a_i]_{k×1} and b = [b_i]_{k×1} in terms of their components, and let the matrix D = [d_{i,j}]_{k×k} = C + y b^* − a z^* + (J_k(λ) − λI) y z^*. Notice, first, that if we choose

    y = [ y_1, a_1, a_2, …, a_{k−1} ]^T   and   z = [ −b_2, −b_3, …, −b_k, z_k ]^T,

then for any y_1, z_k ∈ C we have

    ã := a − (J_k(λ) − λI)y = a_k e_k   and   b̃^* := b^* + z^*(J_k(λ) − λI) = b_1 e_1^T.   (18)

Next, observe that

    (W J_k(λ) − J_k(λ) W) e_i =
        [ −w_{2,1}, …, −w_{k,1}, 0 ]^T,                                   i = 1,
        [ w_{1,i−1} − w_{2,i}, …, w_{k−1,i−1} − w_{k,i}, w_{k,i−1} ]^T,   2 ≤ i ≤ k,

where W = [w_{i,j}]_{k×k}. We can therefore eliminate the entries of C̃ above its last row by choosing

    w_{i+1,1} = d_{i,1},              1 ≤ i ≤ k − 1,
    w_{i+1,j} = w_{i,j−1} + d_{i,j},   1 ≤ i ≤ k − 1, 2 ≤ j ≤ k,

so that

    c̃_{i,j} = 0,   1 ≤ i < k, 1 ≤ j ≤ k,   (19a)

    c̃_{k,j} = Σ_{ℓ=0}^{j−1} d_{k−ℓ, j−ℓ},   1 ≤ j ≤ k.   (19b)

From formulae (18) and (19), it follows that by choosing appropriate vectors y, z and a matrix W, we can simplify the matrix S so that

    S ∼ S̃ := [ J_k(λ)  ã  C̃ ;
               0        λ  b̃^* ;
               0        0  J_k(λ) ],

where ã = a_k e_k and b̃^* = b_1 e_1^T, and C̃ = [c̃_{i,j}]_{k×k} satisfies c̃_{i,j} = 0 for 1 ≤ i ≤ k − 1 and 1 ≤ j ≤ k. Note that C̃ is a matrix with all entries equal to zero except possibly those in the last row; such a matrix is said to be in lower concentrated form [16]. This observation can be used to classify all possible eigenstructure types of S through the discussion of S̃. We list the classification as follows:
Case 1. b_1 ≠ 0, a_k ≠ 0.

1a. If c̃_{k,1} = … = c̃_{k,i} = 0 and c̃_{k,i+1} ≠ 0, for some 0 ≤ i ≤ k − 1, then S ≅ J_{2k+1}(λ), with the cycle of generalized eigenvectors taken as

    γ = { [b_1 a_k e_1; 0; 0], …, [b_1 a_k e_k; 0; 0], [0; b_1; 0],
          [0; 0; e_1], …, [0; 0; e_i], [0; β_1; f_1], …, [0; β_{k−i}; f_{k−i}] },

where

    f_1 = (1/b_1) e_{i+1} ∈ C^k,   f_j = e_{i+j} + Σ_{s=1}^{j−1} β_s e_{j−s} ∈ C^k,   2 ≤ j ≤ k − i,
    β_j = −(e_k^* C̃ f_j)/a_k ∈ C,   1 ≤ j ≤ k − i.

1b. If c̃_{k,1} = … = c̃_{k,k} = 0, then S ≅ J_{2k+1}(λ), with the cycle of generalized eigenvectors taken as

    γ = { [b_1 a_k e_1; 0; 0], …, [b_1 a_k e_k; 0; 0], [0; b_1; 0], [0; 0; e_1], …, [0; 0; e_k] }.
Case 2. b_1 = 0, a_k ≠ 0.

2a. If c̃_{k,1} = … = c̃_{k,k−1} = 0 and c̃_{k,k} ≠ 0, then S ≅ J_{k+1}(λ) ⊕ J_k(λ), with the cycles of generalized eigenvectors taken as

    γ_1 = { [a_k e_1; 0; 0], …, [a_k e_k; 0; 0], [0; 1; 0] },
    γ_2 = { [0; 0; e_1], …, [0; 0; e_{k−1}], [0; −c̃_{k,k}/a_k; e_k] }.

2b. If c̃_{k,1} = … = c̃_{k,i} = 0 and c̃_{k,i+1} ≠ 0, for 0 ≤ i < k − 1, then S ≅ J_{2k−i}(λ) ⊕ J_{i+1}(λ), with the cycles of generalized eigenvectors taken as

    γ_1 = { [m_1; n_1], …, [m_{2k−i}; n_{2k−i}] },
    γ_2 = { [0; 0; e_1], …, [0; 0; e_i], [0; −c̃_{k,i+1}/a_k; e_{i+1}] },

where m_j ∈ C^k and n_j ∈ C^{k+1} are given by

    m_j = c̃_{k,i+1} e_j,   1 ≤ j ≤ k;   m_j = 0_{k×1},   k + 1 ≤ j ≤ 2k − i;

    n_j = 0_{(k+1)×1},                                             1 ≤ j < k − i + 1,
    n_j = [ 0; Σ_{s=0}^{j−k+i−1} α_s e_{j−k+i−s} ],                 k − i + 1 ≤ j < 2k − 2i − 1,
    n_j = [ 0; Σ_{s=0}^{k−i−2} α_s e_{j−k+i−s} ],                   2k − 2i − 1 ≤ j ≤ 2k − i − 1,
    n_j = [ −(1/a_k) Σ_{s=0}^{k−i−2} α_s c̃_{k,k−s}; Σ_{s=0}^{k−i−2} α_s e_{k−s} ],   j = 2k − i,

with α_0 = 1 and α_j = −(1/c̃_{k,i+1}) Σ_{s=0}^{j−1} α_s c̃_{k,i+2+s} ∈ C for 1 ≤ j ≤ k − i − 1. Here, if k = i + 2, the second case in the definition of n_j should be ignored and the third one used directly.

2c. If c̃_{k,1} = … = c̃_{k,k} = 0, then S ≅ J_{k+1}(λ) ⊕ J_k(λ), with the cycles of generalized eigenvectors taken as

    γ_1 = { [a_k e_1; 0; 0], …, [a_k e_k; 0; 0], [0; 1; 0] },
    γ_2 = { [0; 0; e_1], …, [0; 0; e_k] }.
Case 3. b_1 ≠ 0, a_k = 0.

3a. If c̃_{k,1} = … = c̃_{k,i} = 0 and c̃_{k,i+1} ≠ 0, for 0 ≤ i ≤ k − 1, then S ≅ J_{2k−i}(λ) ⊕ J_{i+1}(λ), with the cycles of generalized eigenvectors taken as

    γ_1 = { [m_1; n_1], …, [m_{2k−i}; n_{2k−i}] },
    γ_2 = { [0; b_1; 0], [0; 0; e_1], …, [0; 0; e_i] },

where m_j ∈ C^k is as in Case 2b and n_j ∈ C^{k+1} is given by

    n_j = 0_{(k+1)×1},                                             1 ≤ j < k − i,
    n_j = [ b_1; 0_{k×1} ],                                        j = k − i,
    n_j = [ b_1 α_{j−k+i}; Σ_{s=0}^{j−k+i−1} α_s e_{j−k+i−s} ],     k − i + 1 ≤ j < 2k − 2i,
    n_j = [ 0; Σ_{s=0}^{k−i−1} α_s e_{j−k+i−s} ],                   2k − 2i ≤ j ≤ 2k − i,

with α_0 = 1 and α_j = −(1/c̃_{k,i+1}) Σ_{s=0}^{j−1} α_s c̃_{k,i+2+s} ∈ C for 1 ≤ j ≤ k − i − 1. Here, if k = i + 1, the first case should be ignored and the third case in the definition of n_j replaced by the fourth one directly.

3b. If c̃_{k,1} = … = c̃_{k,k} = 0, then S ≅ J_{k+1}(λ) ⊕ J_k(λ), with the cycles of generalized eigenvectors taken as

    γ_1 = { [0; b_1; 0], [0; 0; e_1], …, [0; 0; e_k] },
    γ_2 = { [e_1; 0; 0], …, [e_k; 0; 0] }.
17
Case 4. b_1 = 0, a_k = 0.

4a. If c̃_{k,1} ≠ 0, then S ≅ J_{2k}(λ) ⊕ J_1(λ), with the cycles of generalized eigenvectors taken as

    γ_1 = { [c̃_{k,1} e_1; 0; 0], …, [c̃_{k,1} e_k; 0; 0],
            [0; 0; α_1 e_1], …, [0; 0; Σ_{s=1}^{k} α_s e_{k−s+1}] },
    γ_2 = { [0; 1; 0] },

where α_1 = 1 and α_j = −(1/c̃_{k,1}) Σ_{s=1}^{j−1} α_s c̃_{k,j−s+1} ∈ C for j = 2, …, k.

4b. If c̃_{k,1} = … = c̃_{k,i} = 0 and c̃_{k,i+1} ≠ 0, for 1 ≤ i ≤ k − 1, then S ≅ J_{2k−i}(λ) ⊕ J_i(λ) ⊕ J_1(λ), with the cycles of generalized eigenvectors taken as

    γ_1 = { [m_1; 0; n_1], …, [m_{2k−i}; 0; n_{2k−i}] },
    γ_2 = { [0; 0; e_1], …, [0; 0; e_i] },
    γ_3 = { [0; 1; 0] },

where m_j ∈ C^k is as in Case 2b and n_j ∈ C^k is given by

    n_j = 0_{k×1},                                 1 ≤ j < k − i + 1,
    n_j = Σ_{s=0}^{j−k+i−1} α_s e_{j−k+i−s},        k − i + 1 ≤ j < 2k − 2i,
    n_j = Σ_{s=0}^{k−i−1} α_s e_{j−k+i−s},          2k − 2i ≤ j ≤ 2k − i,

with α_0 = 1 and α_j = −(1/c̃_{k,i+1}) Σ_{s=0}^{j−1} α_s c̃_{k,i+2+s} ∈ C for 1 ≤ j ≤ k − i − 1. Here, if k = i + 1, the second case in the definition of n_j should be ignored and the third one used directly.

4c. If c̃_{k,1} = … = c̃_{k,k} = 0, then S ≅ J_k(λ) ⊕ J_k(λ) ⊕ J_1(λ), with the cycles of generalized eigenvectors taken as

    γ_1 = { [0; 0; e_1], …, [0; 0; e_k] },
    γ_2 = { [e_1; 0; 0], …, [e_k; 0; 0] },
    γ_3 = { [0; 1; 0] }.

Now we have identified all possible eigenstructure types of the matrix S. In view of the classification above, it turns out that the geometric multiplicity of Â, defined in (15), is at most three.

5. Conclusions and Open Problems

The methods developed in this paper shift an eigenvalue with algebraic multiplicity greater than one, in such a way that the first half of the corresponding generalized eigenvectors is kept unchanged. From the point of view of applications, the approach has the advantage that one can apply the shift technique to speed up or stabilize a given numerical algorithm.
It is true that there are many different kinds of eigenvalue shift problems arising from a wide range of applications in science and engineering. In most cases, we are required to shift some (not all) eigenvalues of a given Jordan block, to change multiple eigenvalues simultaneously, or to replace complex conjugate eigenvalues of a real matrix. Such questions are under investigation and will be reported elsewhere.

Acknowledgement

The authors wish to thank Professor Wen-Wei Lin (National Chiao Tung
University) for many interesting and valuable suggestions on the manuscript.
This research work is partially supported by the National Science Council and
the National Center for Theoretical Sciences in Taiwan.

References

[1] A. Brauer, Limits for the characteristic roots of a matrix. IV. Applications to stochastic matrices, Duke Math. J. 19 (1952) 75–91.

[2] G. H. Golub, C. F. Van Loan, Matrix computations, 3rd Edition, Johns
Hopkins Studies in the Mathematical Sciences, Johns Hopkins University
Press, Baltimore, MD, 1996.

[3] R. Bellman, Introduction to matrix analysis, Vol. 19 of Classics in Applied Mathematics, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1997. Reprint of the second (1970) edition, with a foreword by Gene Golub.
[4] C. T. Kelley, Iterative methods for linear and nonlinear equations, Vol. 16
of Frontiers in Applied Mathematics, Society for Industrial and Applied
Mathematics (SIAM), Philadelphia, PA, 1995, with separately available
software.
[5] M. T. Chu, G. H. Golub, Structured inverse eigenvalue problems, Acta Numer. 11 (2002) 1–71. doi:10.1017/S0962492902000016.
URL http://dx.doi.org/10.1017/S0962492902000016

[6] M. T. Chu, G. H. Golub, Structured inverse eigenvalue problems, Acta Numer. 11 (2002) 1–71. doi:10.1017/S0962492902000016.
URL http://dx.doi.org/10.1017/S0962492902000016
[7] D. Chu, M. T. Chu, W.-W. Lin, Quadratic model updating with symmetry, positive definiteness, and no spill-over, SIAM J. Matrix Anal. Appl. 31 (2) (2009) 546–564. doi:10.1137/080726136.
URL http://dx.doi.org/10.1137/080726136
[8] M. M. Lin, B. Dong, M. T. Chu, Semi-definite programming techniques for structured quadratic inverse eigenvalue problems, Numer. Algorithms 53 (4) (2010) 419–437. doi:10.1007/s11075-009-9309-9.
URL http://dx.doi.org/10.1007/s11075-009-9309-9
[9] C.-H. Guo, B. Iannazzo, B. Meini, On the doubling algorithm for a (shifted) nonsymmetric algebraic Riccati equation, SIAM J. Matrix Anal. Appl. 29 (4) (2007) 1083–1100. doi:10.1137/060660837.
URL http://dx.doi.org/10.1137/060660837

[10] D. A. Bini, B. Iannazzo, F. Poloni, A fast Newton's method for a nonsymmetric algebraic Riccati equation, SIAM J. Matrix Anal. Appl. 30 (1) (2008) 276–290. doi:10.1137/070681478.
URL http://dx.doi.org/10.1137/070681478

[11] R. A. Horn, S. Serra-Capizzano, A general setting for the parametric Google matrix, Internet Math. 3 (4) (2006) 385–411.
URL http://projecteuclid.org/getRecord?id=euclid.im/1227025007
[12] G. Wu, Eigenvalues and Jordan canonical form of a successively rank-one updated complex matrix with applications to Google's PageRank problem, J. Comput. Appl. Math. 216 (2) (2008) 364–370. doi:10.1016/j.cam.2007.05.015.
URL http://dx.doi.org/10.1016/j.cam.2007.05.015

[13] A. J. Laub, A Schur method for solving algebraic Riccati equations, IEEE Trans. Automat. Control 24 (6) (1979) 913–921. doi:10.1109/TAC.1979.1102178.
URL http://dx.doi.org/10.1109/TAC.1979.1102178

[14] C.-H. Guo, Efficient methods for solving a nonsymmetric algebraic Riccati equation arising in stochastic fluid models, J. Comput. Appl. Math. 192 (2) (2006) 353–373. doi:10.1016/j.cam.2005.05.012.
URL http://dx.doi.org/10.1016/j.cam.2005.05.012

[15] R. A. Horn, C. R. Johnson, Matrix analysis, Cambridge University Press,


Cambridge, 1990, corrected reprint of the 1985 original.

[16] C. R. Johnson, E. A. Schreiner, Explicit Jordan form for certain block triangular matrices, in: Proceedings of the First Conference of the International Linear Algebra Society (Provo, UT, 1989), Vol. 150, 1991, pp. 297–314. doi:10.1016/0024-3795(91)90176-W.
URL http://dx.doi.org/10.1016/0024-3795(91)90176-W

