
BIT Numerical Mathematics (2008) 48: 769–781

Published online: 2 December 2008 © Springer 2008

DOI: 10.1007/s10543-008-0200-1
RELATIVE EIGENVALUE AND SINGULAR VALUE PERTURBATIONS OF SCALED DIAGONALLY DOMINANT MATRICES

J. MATEJAŠ¹ and V. HARI²

¹ Faculty of Economics, University of Zagreb, Kennedyjev trg 6, 10000 Zagreb, Croatia. email: josip.matejas@kr.htnet.hr
² Department of Mathematics, University of Zagreb, P.O. Box 335, 10002 Zagreb, Croatia. email: hari@math.hr
Abstract.
The paper derives improved relative perturbation bounds for the eigenvalues of scaled diagonally dominant Hermitian matrices and new relative perturbation bounds for the singular values of symmetrically scaled diagonally dominant square matrices. The perturbation result for the singular values enlarges the class of well-behaved matrices for accurate computation of the singular values.

AMS subject classification (2000): 65F15.

Key words: Hermitian matrix, eigenvalues, scaled diagonally dominant matrix, singular values, symmetric scaling, relative perturbations.
1 Introduction and notation.
The incentive for this research came from our intention to make a sound accuracy proof for the Kogbetliantz method. Since this is a two-sided Jacobi-like method, we have tried to find a relative perturbation result for the singular values of a square matrix $G$ which expresses the bound in terms of the condition number $\mathrm{cond}(B)$, where $G = DBD$ and $D$ is diagonal. The Kogbetliantz method typically starts with a triangular $G$, which is obtained after preprocessing by one or two QR factorizations, as is advocated in [5] and [6]. Such $G$ is more diagonal than before preprocessing. So, we looked for a perturbation result for the singular values of scaled diagonally dominant (s.d.d.) matrices.
In their pioneering paper on s.d.d. matrices, Barlow and Demmel [1] considered symmetric scaling, but only for the eigenvalue problem of symmetric matrices. From [3] and from the overview papers [13, 25], one finds out that for the relative perturbations of the singular values, the known results use either the one-sided scaling or the general two-sided scaling $G = D_1 B D_2$, combined with assumptions on (all minors of) $B$ which guarantee the existence of a rank-revealing decomposition of $G$. The well-behaved matrices for the singular value computation also include matrices which satisfy some analytic conditions or some sparsity and sign pattern, as well as some rationally structured matrices and some finite element matrices.

Received March 22, 2008. Accepted in revised form October 28, 2008. Communicated by Axel Ruhe. This work was supported by the Croatian Ministry of Science, Education and Sports, grant 037-0372783-3042.
So, we tried to derive a new result for s.d.d. square matrices.
We have started with the eigenvalue results from [1, Theorem 2, Proposition 4]. We have discovered that we can relax the assumptions and can get rid of the factor $n$ (as well as of the exponential function) in the bounds for the relative perturbations of the eigenvalues of indefinite s.d.d. symmetric matrices. The proof uses a new technique which can be applied to similar perturbation problems. The result extends to some other classes of scaled diagonally dominant matrices, such as skew-Hermitian and hidden Hermitian matrices, the latter having the form $D_1 H D_2$ with $H$ Hermitian and $D_1$, $D_2$ diagonal, such that $D_1 D_2$ is positive definite.

With this simpler and somewhat sharper result for the indefinite s.d.d. Hermitian matrices, using the standard technique with the Wielandt matrix, we have derived a new, equally simple result for the singular values. This result enlarges the class of well-behaved matrices to those square s.d.d. matrices $G$ which have the form $DBD$ with $D$ diagonal and $B$ of small condition.

Although the results presented here have their own significance, typical applications lie in the accurate computation of the eigenvalues of s.d.d. Hermitian matrices and of the singular values of s.d.d. square matrices. In particular, none of the two approaches elaborated in [24, 27, 26] and in [4] for accurate computation of the eigenvalues of an indefinite Hermitian matrix $H$ is needed if $H$ is s.d.d. The simple two-sided Jacobi method will accurately deliver the eigenvalues (see [17]). As our preliminary results imply, we are confident that for the accurate singular value computation of s.d.d. triangular matrices, the Kogbetliantz method will be excellent.
Throughout the paper, we use the following notation. By $\mathbf{C}^{n\times n}$ is denoted the set of $n\times n$ complex matrices and by $\mathbf{C}^n$ the set of complex column-vectors with $n$ components. For any square matrix $X$, $\mathrm{diag}(X)$ stands for the diagonal part of $X$, and $\Omega(X) = X - \mathrm{diag}(X)$ for the off-diagonal part of $X$. By $\|X\|$ and $\|X\|_F$ we denote the spectral and the Frobenius (Euclidean) matrix norm of $X$, respectively. The Euclidean vector norm is also denoted by $\|\cdot\|$. If not specified otherwise, in this paper the default choice of norm is the spectral norm. For a Hermitian matrix $H$, $\lambda_i(H)$ denotes the $i$-th largest eigenvalue of $H$. The largest and the smallest eigenvalue of $H$ are also denoted by $\lambda_{\max}(H)$ and $\lambda_{\min}(H)$. Similarly, $\sigma_i(G)$ denotes the $i$-th largest singular value of $G$, while $\sigma_{\max}(G)$ and $\sigma_{\min}(G)$ denote its largest and smallest singular value. The absolute value of $X = (x_{ij})$ is the matrix $|X| = (|x_{ij}|)$.
The paper is organized as follows. In Section 2 we briefly recall the definition of scaled diagonally dominant matrices. In Section 3, we prove the perturbation theorem for indefinite s.d.d. Hermitian matrices, make a comparison with the existing result and present its immediate corollaries. In Section 4, we extend the result to the relative perturbations of the singular values of s.d.d. square matrices.
2 Scaled diagonally dominant matrices.
Here, we recall the notion of scaled diagonally dominant matrices (see [1]).
Let $G \in \mathbf{C}^{n\times n}$, $G = (g_{ij})$. Then $G$ is $\alpha$-diagonally dominant with respect to a norm $\|\cdot\|$ if

$$\|\Omega(G)\| \le \alpha \min_{1\le i\le n} |g_{ii}|, \qquad 0 \le \alpha < 1.$$

Now, let $B \in \mathbf{C}^{n\times n}$ with $|b_{ii}| = 1$, $1 \le i \le n$, and let $D_L$, $D_R$ be arbitrary nonsingular diagonal matrices. Then $G = D_L B D_R$ is $\alpha$-scaled diagonally dominant ($\alpha$-s.d.d.) with respect to a given norm, if $B$ is $\alpha$-diagonally dominant with respect to that norm. If $G$ is Hermitian (i.e. $G = G^*$), it is presumed that $D_L = D_R^*$. Note that an $\alpha$-s.d.d. matrix has nonzero diagonal elements.
In this paper, we consider the two-sided scaling under the constraint $D_L^* = D_R = D$, where $D$ is chosen to make the absolute values of the diagonal elements of $B = D^{-*} G D^{-1}$ equal to one. For such a scaling the notion of the scaled diagonally dominant matrix is defined in the same way as for the Hermitian matrix.

Let $H \in \mathbf{C}^{n\times n}$ be a Hermitian matrix with nonzero diagonal elements. Then the eigenvalues and the diagonal elements of $H$ are real and

$$A = |\mathrm{diag}(H)|^{-1/2}\, H\, |\mathrm{diag}(H)|^{-1/2}$$

is the scaled matrix $H$. The diagonal elements of $A$ are ones or minus ones. If $\|\Omega(A)\| \le \alpha < 1$, then $H$ is $\alpha$-s.d.d. Since $\alpha < 1$, $A$ and consequently $H$ must be nonsingular.
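This scaling is easy to carry out numerically. The following sketch is our own Python/NumPy illustration (not part of the original paper; the function names are ours): it forms the scaled matrix $A$ and computes the smallest $\alpha$ for which a Hermitian $H$ is $\alpha$-s.d.d. in the spectral norm.

```python
import numpy as np

def scaled_matrix(H):
    """Return A = |diag(H)|^(-1/2) H |diag(H)|^(-1/2) for Hermitian H
    with nonzero diagonal; diag(A) then has entries +1 or -1."""
    d = np.abs(np.diag(H)) ** -0.5
    return d[:, None] * H * d[None, :]

def sdd_alpha(H):
    """Smallest alpha for which H is alpha-s.d.d. w.r.t. the spectral
    norm, i.e. the spectral norm of the off-diagonal part of A."""
    A = scaled_matrix(H)
    return np.linalg.norm(A - np.diag(np.diag(A)), 2)

# A graded symmetric matrix: row 3 violates classical diagonal dominance,
# yet the matrix is alpha-s.d.d. with small alpha after symmetric scaling.
H = np.array([[1e8, 2e3, 0.0],
              [2e3, 1e0, 2e-3],
              [0.0, 2e-3, -1e-4]])
alpha = sdd_alpha(H)   # here 0.2 * sqrt(2), roughly 0.283
```

Note that the example matrix is indefinite (one diagonal entry of $A$ is $-1$), which is exactly the class treated in Section 3.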
Each $\alpha$-s.d.d. matrix has a special structure (see [9, 18, 8]). This structure has an impact on the rate of convergence of the appropriate diagonalization methods (see [15, 16, 19, 10, 20, 23]). Here we consider only the perturbation properties of the eigenvalues and singular values of such matrices.
3 Perturbation results for s.d.d. Hermitian matrices.
Let $H \in \mathbf{C}^{n\times n}$ be a Hermitian nonsingular matrix and $\delta H$ a Hermitian perturbation of $H$. Let $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$ and $\lambda'_1 \ge \cdots \ge \lambda'_n$ be the eigenvalues of $H$ and $H + \delta H$, respectively. Then the standard perturbation theory (see [22]) yields the following result:

If $\|\delta H\| \le \eta \|H\|$, $\eta < 1$, then

$$\Bigl|\frac{\lambda'_i}{\lambda_i} - 1\Bigr| \le \frac{\|\delta H\|}{|\lambda_i|} \le \eta\,\kappa(H), \qquad 1 \le i \le n,$$

where $\kappa(H) = \|H\|\,\|H^{-1}\|$ is the condition number of $H$. The following stronger result has been proven for the positive definite matrices in [2]:

If $\|\delta A\| < \lambda_{\min}(A)$, then

$$\Bigl|\frac{\lambda'_i}{\lambda_i} - 1\Bigr| \le \frac{\|\delta A\|}{\lambda_{\min}(A)}, \qquad 1 \le i \le n, \tag{3.1}$$

where $A = D^{-1} H D^{-1}$, $\delta A = D^{-1}\,\delta H\, D^{-1}$ and $D = [\mathrm{diag}(H)]^{1/2}$.
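The gap between the two estimates can be made concrete with a small numerical sketch (ours, not from the paper; the matrix sizes and perturbation levels are chosen arbitrarily). For a graded positive definite $H = DBD$ with a well-conditioned $B$ and a componentwise-small Hermitian perturbation, the classical bound $\eta\,\kappa(H)$ is enormous, while the relative bound (3.1) stays close to the true relative change of the eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Graded positive definite H = D B D with well-conditioned B, diag(B) = I.
R = rng.standard_normal((n, n))
B = np.eye(n) + 0.05 * (R + R.T) / 2
np.fill_diagonal(B, 1.0)
D = np.diag(10.0 ** np.arange(n))            # grading: 1, 10, ..., 1e5
H = D @ B @ D

# Componentwise-small Hermitian perturbation.
E = 1e-4 * np.abs(H) * rng.uniform(-1, 1, (n, n))
dH = (E + E.T) / 2

lam = np.sort(np.linalg.eigvalsh(H))
lamp = np.sort(np.linalg.eigvalsh(H + dH))
actual = np.max(np.abs(lamp / lam - 1))       # true max relative change

# Classical bound eta * kappa(H) with eta = ||dH|| / ||H||.
eta = np.linalg.norm(dH, 2) / np.linalg.norm(H, 2)
classical = eta * np.linalg.cond(H, 2)

# Relative bound (3.1): ||dA|| / lambda_min(A) with A = D^{-1} H D^{-1}.
Dinv = np.diag(np.diag(H) ** -0.5)
A, dA = Dinv @ H @ Dinv, Dinv @ dH @ Dinv
relative = np.linalg.norm(dA, 2) / np.linalg.eigvalsh(A).min()
```

Here `classical` exceeds `relative` by many orders of magnitude, although both are valid upper bounds on `actual`.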
Since almost diagonal Hermitian matrices have many properties in common with positive definite ones, the question arises whether a result similar to (3.1) holds for $\alpha$-s.d.d. indefinite Hermitian matrices. The answer is given in [1, Proposition 4, Theorem 4] and here is our improvement of these results.
Theorem 3.1. Let $H$ and $\delta H$ be Hermitian matrices of order $n$ and let $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$ and $\lambda'_1 \ge \cdots \ge \lambda'_n$ be the eigenvalues of $H$ and $H + \delta H$, respectively. Let $D = |\mathrm{diag}(H)|^{1/2}$ be nonsingular and let $A = D^{-1} H D^{-1}$, $\delta A = D^{-1}\,\delta H\, D^{-1}$. Let $\alpha$, $\beta$ be real numbers such that $\|\Omega(A)\| \le \alpha$ and $\|\delta A\| \le \beta$. If $\alpha + 2\beta < 1$ then

$$\Bigl|\frac{\lambda'_i}{\lambda_i} - 1\Bigr| \le \frac{\beta}{1 - \alpha\Bigl(1 + \dfrac{\beta}{1-2\beta}\Bigr)} \le \frac{\beta}{1 - \alpha - 2\beta}, \qquad 1 \le i \le n. \tag{3.2}$$

If $\delta H \ne 0$, then the first inequality is strict provided that $\beta > \|\delta A\|$ or $\alpha > 0$. If $\beta > 0$, the second inequality is strict if and only if $\alpha > 0$.
Proof. Let

$$\zeta = \frac{\beta}{1-2\beta}, \qquad \xi = \frac{\beta}{1 - \alpha(1+\zeta)}. \tag{3.3}$$

The assumption $\alpha + 2\beta < 1$ implies $2\beta < 1$, so $\zeta$ is well defined. It further implies $\alpha(1+\zeta) < 1$ and $\xi \le \beta/(1-\alpha-2\beta)$, which proves the second inequality in the assertion of the theorem. Note that $\xi = \beta$ if and only if $\alpha = 0$ or $\beta = 0$, so $\beta < \xi$ if and only if $H$ is not diagonal and $\delta H \ne 0$.

Since generally, for $\vartheta > 0$,

$$\Bigl|\frac{a}{b} - 1\Bigr| \le \vartheta \quad\text{if and only if}\quad [1 - \mathrm{sgn}(b)\vartheta]\,b \le a \le [1 + \mathrm{sgn}(b)\vartheta]\,b, \qquad b \ne 0,$$

we have to prove the inequalities

$$[1 - \mathrm{sgn}(\lambda_i)\xi]\,\lambda_i \le \lambda'_i \le [1 + \mathrm{sgn}(\lambda_i)\xi]\,\lambda_i, \qquad 1 \le i \le n. \tag{3.4}$$

To avoid confusion about using $\zeta$ and $\xi$, we note that $\zeta$ serves only to simplify the expression of the bound in the relation (3.2). In fact, we could have used $\beta/(1-\alpha-2\beta)$ instead of $\xi$ in (3.4). But using it throughout the proof would result in the second (weaker) estimate in (3.2), $|\lambda'_i/\lambda_i - 1| \le \beta/(1-\alpha-2\beta)$. Since $\xi \le \beta/(1-\alpha-2\beta)$, we proceed with (3.4) to obtain the first (sharper) estimate $|\lambda'_i/\lambda_i - 1| \le \xi$.
The assumption $\|\delta A\| \le \beta$ is equivalent to the condition

$$|x^*\,\delta H\, x| \le \beta\, x^* D^2 x, \qquad x \in \mathbf{C}^n.$$

Indeed, this claim follows from the following consideration. Note that

$$\frac{|y^*\,\delta A\, y|}{y^* y} = \frac{|x^*\,\delta H\, x|}{x^* D^2 x}, \qquad y = Dx, \quad x \ne 0.$$

Since $D^2$ is positive definite, $y$ runs through the whole $\mathbf{C}^n \setminus \{0\}$ if and only if $x$ does the same. Hence both ratios assume the same values, and for some values of $x$ and $y$ they assume the maximum value, which is $\|\delta A\|$.
Hence

$$x^*(H - \beta D^2)x \le x^*(H + \delta H)x \le x^*(H + \beta D^2)x, \qquad x \in \mathbf{C}^n.$$

Recalling the monotonicity property of the eigenvalues of Hermitian matrices, one easily obtains

$$\lambda_i(H - \beta D^2) \le \lambda'_i \le \lambda_i(H + \beta D^2), \qquad 1 \le i \le n. \tag{3.5}$$

The inequalities are attainable for $\delta H = \pm\beta D^2$. Here we have assumed the non-increasing ordering of the eigenvalues, as has been noted in the introduction.
The rest of the proof is based on the following idea. If $H$ is positive definite, then in the transition from $H$ to $H - \beta D^2$, the diagonal elements are shifted by the factor $1-\beta$. We shall construct a positive semidefinite matrix $M$ such that in the transition from $H$ to $H - \beta D^2 - M$ the eigenvalues are shifted by the factor $1-\xi$. This is achieved by setting $M = (\xi-\beta)D^2 + \xi\,\Omega(H)$ (see Corollary 3.3). Then we have $H - \beta D^2 - M = (1-\xi)H$.

If $H$ is Hermitian indefinite, then in the transition from $H$ to $H + \beta D^2$, the positive (negative) diagonal elements are shifted by the factor $1+\beta$ ($1-\beta$). Similarly, in the transition from $H$ to $H - \beta D^2$, the negative (positive) diagonal elements are shifted by $1+\beta$ ($1-\beta$). In these cases the matrix $M$ is constructed according to the block partition of $H$ as follows.
We can assume that the diagonal elements of $H$ satisfy

$$h_{11} \ge h_{22} \ge \cdots \ge h_{mm} > 0 > h_{m+1,m+1} \ge \cdots \ge h_{nn}.$$

Otherwise, we can consider the matrix $P^T H P$, where $P$ is an appropriate permutation matrix. So, $m > 0$ and $n-m > 0$ are the numbers of positive and negative diagonal elements, respectively. According to the partition of $n$, $n = m + (n-m)$, we define the block-matrix partition

$$H = \begin{bmatrix} H_{11} & H_{12} \\ H_{12}^* & H_{22} \end{bmatrix}, \qquad A = \begin{bmatrix} A_{11} & A_{12} \\ A_{12}^* & A_{22} \end{bmatrix}, \qquad D = \begin{bmatrix} D_1 & O \\ O & D_2 \end{bmatrix},$$

where $H_{11}, A_{11}, D_1 \in \mathbf{C}^{m\times m}$ and $H_{22}, A_{22}, D_2 \in \mathbf{C}^{(n-m)\times(n-m)}$. Note that $\mathrm{diag}(H) = \mathrm{diag}(D_1^2, -D_2^2)$. The Hermitian matrix $M$ is constructed in the following way:

$$M = (\xi-\beta)D^2 + \begin{bmatrix} \xi\,\Omega(H_{11}) & (\sqrt{1-\xi^2}-1)H_{12} \\ (\sqrt{1-\xi^2}-1)H_{12}^* & -\xi\,\Omega(H_{22}) \end{bmatrix}. \tag{3.6}$$
Let us inspect whether $M$ is positive semidefinite. We have

$$\Omega(M) = \xi \begin{bmatrix} I_m & O \\ O & -I_{n-m} \end{bmatrix} \begin{bmatrix} \Omega(H_{11}) & O \\ O & \Omega(H_{22}) \end{bmatrix} - \frac{\xi^2}{1+\sqrt{1-\xi^2}} \begin{bmatrix} O & H_{12} \\ H_{12}^* & O \end{bmatrix},$$
and using the relation (3.3), we obtain

$$\|\Omega(D^{-1}MD^{-1})\| \le \xi \left\| \begin{bmatrix} \Omega(A_{11}) & O \\ O & \Omega(A_{22}) \end{bmatrix} \right\| + \frac{\xi^2}{1+\sqrt{1-\xi^2}} \left\| \begin{bmatrix} O & A_{12} \\ A_{12}^* & O \end{bmatrix} \right\| \le \alpha\xi(1+\xi) \le \alpha\xi(1+\zeta) = \xi[1 - 1 + \alpha(1+\zeta)] = \xi - \beta.$$

Since $\mathrm{diag}(D^{-1}MD^{-1}) = (\xi-\beta)I$, we conclude that $D^{-1}MD^{-1}$ is positive semidefinite. The law of inertia implies that $M$ is also positive semidefinite. We also see that $M$ is positive definite as soon as $\alpha > 0$ and $\beta > 0$, that is, as soon as $\delta H \ne 0$ and $H$ is not diagonal.
Next, we consider the matrix $H + \beta D^2 + M$. Since $M$ is positive semidefinite, we have

$$\lambda_i(H + \beta D^2) \le \lambda_i(H + \beta D^2 + M), \qquad 1 \le i \le n. \tag{3.7}$$

Here, we have strict inequalities provided that $\delta H \ne 0$ and $H$ is not diagonal. It is easy to see that

$$H + \beta D^2 + M = \begin{bmatrix} (1+\xi)H_{11} & \sqrt{1-\xi^2}\,H_{12} \\ \sqrt{1-\xi^2}\,H_{12}^* & (1-\xi)H_{22} \end{bmatrix} = \Theta H \Theta, \tag{3.8}$$

where

$$\Theta = \begin{bmatrix} \sqrt{1+\xi}\, I_m & O \\ O & \sqrt{1-\xi}\, I_{n-m} \end{bmatrix}$$

is positive definite. Note that $\lambda_1(\Theta^2) = 1+\xi$ and $\lambda_n(\Theta^2) = 1-\xi$. Recalling the perturbation theorem of Ostrowski (see [12, Theorem 4.5.9]), we obtain

$$\lambda_i(\Theta H \Theta) = \theta_i\,\lambda_i(H) \le \begin{cases} (1+\xi)\lambda_i & \text{if } \lambda_i > 0 \\ (1-\xi)\lambda_i & \text{if } \lambda_i < 0 \end{cases}, \qquad 1 \le i \le n, \tag{3.9}$$

where $1-\xi \le \theta_i \le 1+\xi$. Combining the relations (3.5), (3.7), (3.8) and (3.9), we obtain

$$\lambda'_i \le \lambda_i(\Theta H \Theta) \le [1 + \mathrm{sgn}(\lambda_i)\xi]\,\lambda_i, \qquad 1 \le i \le n,$$

which is the second inequality in the assertion (3.4).
The proof of the first inequality in (3.4) uses a similar series of conclusions, based on the relations

$$\widetilde{M} = (\xi-\beta)D^2 + \begin{bmatrix} \xi\,\Omega(H_{11}) & (1-\sqrt{1-\xi^2})H_{12} \\ (1-\sqrt{1-\xi^2})H_{12}^* & -\xi\,\Omega(H_{22}) \end{bmatrix} \tag{3.10}$$

and

$$H - \beta D^2 - \widetilde{M} = \begin{bmatrix} (1-\xi)H_{11} & \sqrt{1-\xi^2}\,H_{12} \\ \sqrt{1-\xi^2}\,H_{12}^* & (1+\xi)H_{22} \end{bmatrix} = \widetilde{\Theta} H \widetilde{\Theta},$$

where

$$\widetilde{\Theta} = \begin{bmatrix} \sqrt{1-\xi}\, I_m & O \\ O & \sqrt{1+\xi}\, I_{n-m} \end{bmatrix}.$$
3.1 Comparison with the bound of Barlow and Demmel.
Our bound in Theorem 3.1 is sharper than the bound of [1, Proposition 4] by a factor $n$.

In our notation, [1, Theorem 4] reads:

Let $H$ be a symmetric $\alpha$-s.d.d. matrix with respect to the 2-norm. Assume that $K_t = H + t\,\delta H$ is $\alpha$-s.d.d. for all $0 \le t \le 1$. Then

$$e^{-\beta/(1-\alpha)} - 1 \le \frac{\lambda'_i}{\lambda_i} - 1 \le e^{\beta/(1-\alpha)} - 1, \quad\text{implying}\quad \Bigl|\frac{\lambda'_i}{\lambda_i} - 1\Bigr| \le e^{\beta/(1-\alpha)} - 1 = \delta, \qquad 1 \le i \le n. \tag{3.11}$$
We have the following observations.

1. Let $A_t = |\mathrm{diag}(K_t)|^{-1/2} K_t\, |\mathrm{diag}(K_t)|^{-1/2}$. Then $A_0 = A$, where $A$ is from Theorem 3.1, and let $\bar\alpha = \|\Omega(A_0)\|$. We always have $\bar\alpha \le \alpha$. If we assume that $\alpha = \max_{0\le t\le 1} \|\Omega(A_t)\|$, then $\alpha$ is not simple to compute. Hence for given $H$ and $\delta H$, one can use a value for $\alpha$ which is an overestimate of $\max_{0\le t\le 1} \|\Omega(A_t)\|$. This further increases the distance between $\bar\alpha$ and $\alpha$.
2. Let $\zeta = \beta/(1-2\beta) < 1$ and $\alpha < 1$. Then

$$\delta = e^{\beta/(1-\alpha)} - 1 = \frac{\beta}{1-\alpha} + \frac{\beta^2}{2(1-\alpha)^2} + \frac{\beta^3}{6(1-\alpha)^3} + \cdots,$$

$$\xi = \frac{\beta}{1-\alpha(1+\zeta)} = \frac{\beta}{1-\alpha} + r, \qquad r = \frac{\alpha\beta^2}{(1-\alpha)(1-2\beta)[1-\alpha(1+\zeta)]}.$$

If $\beta = 0$, then $\delta = \xi = 0$. If $\beta > 0$ and $\alpha = 0$, then $\xi < \delta$.

Let $\alpha > 0$, $\beta > 0$ and let us estimate for which $\alpha$ the inequality $\xi < \delta$ holds. Since the leading terms $\beta/(1-\alpha)$ agree, $\xi < \delta$ will hold if $r \le \beta^2/[2(1-\alpha)^2]$. Therefore, we estimate the ratio

$$r : \frac{\beta^2}{2(1-\alpha)^2} = \frac{2\alpha(1-\alpha)}{(1-2\beta)[1-\alpha(1+\zeta)]} < \frac{2\alpha(1-\alpha)}{(1-2\alpha)^2}, \qquad \beta \le \alpha/2.$$

If the value of this ratio is not greater than 1 then we have $\xi < \delta$. One easily finds out that $2\alpha(1-\alpha) \le (1-2\alpha)^2$ holds whenever $\alpha \le \alpha_0$, $\alpha_0 = (3-\sqrt{3})/6 \approx 0.2113$.
Therefore, for any $\alpha$-s.d.d. Hermitian matrix $H$ with $0 < \alpha \le \alpha_0$, and for any Hermitian $\delta H$ satisfying $0 < \|\delta A\| \le \beta \le \alpha/2$, we have $\xi < \delta$, which means that the estimate of Theorem 3.1 is sharper than the corresponding estimate of [1, Theorem 4].

But if $\beta$ is not close to $\alpha$, the same conclusion can hold for $\alpha > \alpha_0$. Generally, if $\alpha > \alpha_0$, then the relation between $\xi$ and $\delta$ may depend on the structure of the perturbation $\delta H$: sometimes one is larger than the other or vice versa, so the comparison between the two bounds is not clear.
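The two bounds are straightforward to compare numerically. The following sketch is our own illustration (not part of the paper); it assumes the formulas $\zeta = \beta/(1-2\beta)$, $\xi = \beta/(1-\alpha(1+\zeta))$ for Theorem 3.1 and $\delta = e^{\beta/(1-\alpha)} - 1$ for (3.11), as reconstructed above.

```python
import math

def xi_bound(alpha, beta):
    # Bound of Theorem 3.1 (sharper form); assumes alpha + 2*beta < 1.
    zeta = beta / (1 - 2 * beta)
    return beta / (1 - alpha * (1 + zeta))

def bd_bound(alpha, beta):
    # Bound (3.11) of Barlow-Demmel type: e^(beta/(1-alpha)) - 1.
    return math.exp(beta / (1 - alpha)) - 1

# For alpha below (3 - sqrt(3))/6 ~ 0.2113 and beta <= alpha/2,
# the discussion above says the new bound is the smaller one.
alpha, beta = 0.2, 0.05
```

With these values `xi_bound` gives roughly $0.0634$ while `bd_bound` gives roughly $0.0645$, in line with the discussion above.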
3.2 Consequences.
Let us note that $D$ in Theorem 3.1 can be taken complex. Indeed, let $D = \Phi D_H$, where $D_H = |\mathrm{diag}(H)|^{1/2}$ and $\Phi$ is diagonal and unitary. It then suffices to apply Theorem 3.1 to $\Phi^* H \Phi$ and $\Phi^*\,\delta H\,\Phi$ and use the relations

$$\Omega(\Phi^* H \Phi) = D_H\, \Omega(\Phi^* A \Phi)\, D_H, \qquad \|\Omega(\Phi^* A \Phi)\| = \|\Omega(A)\|, \qquad \lambda_i(H) = \lambda_i(\Phi^* H \Phi),$$
$$\Phi^*\,\delta H\,\Phi = D_H\, (\Phi^*\,\delta A\,\Phi)\, D_H, \qquad \|\Phi^*\,\delta A\,\Phi\| = \|\delta A\|, \qquad \lambda_i(H') = \lambda_i(\Phi^* H' \Phi),$$

where $1 \le i \le n$. Here, $H' = H + \delta H$. The same redefinition of $D$ can be made in the ensuing corollaries.
If $H$ and $\delta H$ are skew-Hermitian ($H^* = -H$, $\delta H^* = -\delta H$), then Theorem 3.1 can be applied to $\mathrm{i}H$ and $\mathrm{i}\,\delta H$, which are Hermitian. Here $\mathrm{i} = \sqrt{-1}$. Since the multiplication by $\mathrm{i}$ does not affect the norms, the statement of Theorem 3.1 trivially holds for skew-Hermitian matrices. If $A$ is a real skew-symmetric matrix of even order, which is almost in Murnaghan form (see [21]), then $A$ has to be transformed by $n/2$ complex Jacobi rotations in the planes $(2j-1, 2j)$. Then $\pm\mathrm{i}\,a_{2j-1,2j}$ become diagonal elements and the results are obtained from the so obtained Hermitian matrix.
Our first corollary has an important application in connection with the accuracy of the Jacobi method for indefinite symmetric matrices (see [17]). In this application $\beta$ is bounded by a small multiple of $\|\Omega(A)\|_F$, where $A$ is the scaled iterate at the beginning of the sweep (see [17, Remark 11]). As the process advances, $\alpha = \|\Omega(A)\|_F$ tends to zero, so in the later stage of the process $\alpha \ll 1$ and $\beta < \alpha$. The estimate (3.11) can be applied too, but as explained above, in this application Theorem 3.1 will be sharper. However, Corollary 3.2 below is the most appropriate to apply, since $\|\Omega(A)\|_F$ is easier to compute than $\|\Omega(A)\|$, since $\beta$ is bounded by a multiple of $\|\Omega(A)\|_F$, and since Corollary 3.2 yields a sharper estimate than Theorem 3.1.
Corollary 3.2. If, in Theorem 3.1, the condition $\|\Omega(A)\| \le \alpha$ is replaced by $\|\Omega(A)\|_F \le \alpha$, then $\alpha + \beta < 1$ implies $|\lambda'_i/\lambda_i - 1| \le \beta/(1-\alpha)$, $1 \le i \le n$.
Proof. In the proof of Theorem 3.1, we now have $\xi = \beta/(1-\alpha) < 1$ and we do not use $\zeta$. Using the relation (3.6), we easily obtain

$$\|\Omega(D^{-1}MD^{-1})\|_F \le \max\Bigl\{\xi,\; \frac{\xi^2}{1+\sqrt{1-\xi^2}}\Bigr\}\, \|\Omega(A)\|_F \le \alpha\xi = \xi - \beta.$$

By help of (3.10), the same estimate is obtained for $\widetilde{M}$. Hence $M$ and $\widetilde{M}$ are positive semidefinite. The rest of the proof remains the same.
In the case of definite Hermitian matrices, the assertion of the theorem can be further improved to yield a bound which is similar to, although somewhat weaker than, the bound in [2]. Namely, in (3.1), $\lambda_{\min}(A)$ is replaced by $1 - \lambda_{\max}(\Omega(A))$.
Corollary 3.3. If, in Theorem 3.1, $H$ is (positive or negative) definite, then $\alpha + \beta < 1$ implies

$$\Bigl|\frac{\lambda'_i}{\lambda_i} - 1\Bigr| \le \frac{\beta}{1-\alpha}, \qquad 1 \le i \le n. \tag{3.12}$$
Proof. We can assume that $H$ is positive definite, otherwise we consider $-H$. In the proof of Theorem 3.1 we set $\zeta = \xi = \beta/(1-\alpha) < 1$. Since now $\mathrm{diag}(H) = D^2$, instead of using the relations (3.6) and (3.10), we define

$$M = \widetilde{M} = (\xi-\beta)D^2 + \xi\,\Omega(H).$$

We have

$$\|\Omega(D^{-1}MD^{-1})\| = \xi\,\|\Omega(A)\| \le \xi\alpha = \xi - \beta,$$

which implies that $M$ and $\widetilde{M}$ are positive semidefinite. Next, we have

$$H + \beta D^2 + M = (1+\xi)H = \Theta H \Theta, \qquad \Theta = \sqrt{1+\xi}\, I,$$
$$H - \beta D^2 - \widetilde{M} = (1-\xi)H = \widetilde{\Theta} H \widetilde{\Theta}, \qquad \widetilde{\Theta} = \sqrt{1-\xi}\, I.$$

The rest of the proof follows the lines of the proof of Theorem 3.1.
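The bound of Corollary 3.3 is easy to check numerically. The following is our own sketch (not from the paper), with an arbitrarily chosen positive definite s.d.d. test matrix and perturbation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Positive definite s.d.d. matrix H = D A D with diag(A) = I.
T = np.triu(rng.standard_normal((n, n)), 1)
OffA = 0.04 * (T + T.T)                     # Hermitian, zero diagonal
A = np.eye(n) + OffA
D = np.diag(10.0 ** rng.integers(0, 4, n).astype(float))
H = D @ A @ D

S = rng.standard_normal((n, n))
dA = 1e-3 * (S + S.T) / 2                   # Hermitian perturbation of A
dH = D @ dA @ D

alpha = np.linalg.norm(OffA, 2)             # ||Omega(A)||
beta = np.linalg.norm(dA, 2)                # ||dA||
bound = beta / (1 - alpha)                  # Corollary 3.3

lam = np.sort(np.linalg.eigvalsh(H))
lamp = np.sort(np.linalg.eigvalsh(H + dH))
observed = np.max(np.abs(lamp / lam - 1))
```

In runs of this kind, `observed` stays below `bound`, independently of the grading in $D$.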
The following corollary gives the relative perturbation result for the hidden Hermitian matrices, i.e. those of the form $D_1 H D_2$, where $H$ is Hermitian and $D_1$, $D_2$ are diagonal and nonsingular.
Corollary 3.4. Let $K = D_1 H D_2 \in \mathbf{C}^{n\times n}$, $\delta K = D_1\,\delta H\, D_2 \in \mathbf{C}^{n\times n}$, where $H$ and $\delta H$ are Hermitian and $D_1$, $D_2$ are diagonal such that $D_1 D_2$ is positive definite. Then $K$ and $K + \delta K$ have real eigenvalues. Let $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$ and $\lambda'_1 \ge \cdots \ge \lambda'_n$ be the eigenvalues of $K$ and $K + \delta K$, respectively. Let $D = |\mathrm{diag}(H)|^{1/2}$ be nonsingular and let $A = D^{-1} H D^{-1}$, $\delta A = D^{-1}\,\delta H\, D^{-1}$. Let $\alpha$, $\beta$ be real numbers such that $\|\Omega(A)\| \le \alpha$ and $\|\delta A\| \le \beta$. If $\alpha + 2\beta < 1$, then the relation (3.2) holds. If the condition $\|\Omega(A)\| \le \alpha$ is replaced with $\|\Omega(A)\|_F \le \alpha$, then $\alpha + \beta < 1$ implies the relation (3.12). The same conclusion holds provided that $H$ is definite and $\alpha + \beta < 1$, where again $\|\Omega(A)\| \le \alpha$.
Proof. Let $D_1 = |D_1|\Phi$, $D_2 = |D_2|\Phi^*$, where $\Phi$ is diagonal and unitary. Let

$$\Delta = |D_1|^{1/2} |D_2|^{-1/2}, \qquad D_3 = [D_1 D_2]^{1/2} = |D_1|^{1/2} |D_2|^{1/2} = |D_1|\,\Delta^{-1},$$

and let $N = \Delta^{-1} K \Delta$ and $\delta N = \Delta^{-1}\,\delta K\, \Delta$. Then the eigenvalues of $N$ and $N + \delta N$ are $\lambda_i$ and $\lambda'_i$, $1 \le i \le n$. Since

$$N = \Delta^{-1} K \Delta = |D_1|^{-1/2} |D_2|^{1/2} (D_1 H D_2)\, |D_1|^{1/2} |D_2|^{-1/2} = D_3 H_\Phi D_3 = D_3 D A_\Phi D D_3 = D_4 A_\Phi D_4,$$
$$\delta N = \Delta^{-1}\,\delta K\, \Delta = |D_1|^{-1/2} |D_2|^{1/2} (D_1\,\delta H\, D_2)\, |D_1|^{1/2} |D_2|^{-1/2} = D_3\,\delta H_\Phi\, D_3 = D_3 D\,\delta A_\Phi\, D D_3 = D_4\,\delta A_\Phi\, D_4,$$

where $H_\Phi = \Phi H \Phi^*$, $\delta H_\Phi = \Phi\,\delta H\,\Phi^*$, $A_\Phi = \Phi A \Phi^*$, $\delta A_\Phi = \Phi\,\delta A\,\Phi^*$ and $D_4 = D D_3$. Since for any unitarily invariant norm $\|\Omega(A_\Phi)\| = \|\Omega(A)\|$ and $\|\delta A_\Phi\| = \|\delta A\|$, the assertions of the corollary follow by applying Theorem 3.1, Corollary 3.2 and Corollary 3.3 to $N$ and $N + \delta N$.
4 Relative perturbations of the singular values.
Here, we derive new relative perturbation estimates for the singular values of an $\alpha$-s.d.d. square matrix $G$.
Theorem 4.1. Let $G$ and $\delta G$ be square matrices of order $n$ and let $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$ and $\sigma'_1 \ge \cdots \ge \sigma'_n$ be the singular values of $G$ and $G' = G + \delta G$, respectively. Let $D = |\mathrm{diag}(G)|^{1/2}$ be nonsingular and let $B = D^{-1} G D^{-1}$, $\delta B = D^{-1}\,\delta G\, D^{-1}$. Let $\alpha$, $\beta$ be real numbers such that $\|\Omega(B)\| \le \alpha$ and $\|\delta B\| \le \beta$. If $\alpha + 2\beta < 1$, then $G$ and $G'$ are nonsingular and

$$\Bigl|\frac{\sigma'_i}{\sigma_i} - 1\Bigr| \le \frac{\beta}{1 - \alpha\Bigl(1 + \dfrac{\beta}{1-2\beta}\Bigr)} \le \frac{\beta}{1 - \alpha - 2\beta}, \qquad 1 \le i \le n.$$

If $\delta G \ne 0$, then the first inequality is strict provided that $\beta > \|\delta B\|$ or $\alpha > 0$. If $\beta > 0$, the second inequality is strict if and only if $\alpha > 0$.
Proof. First, let us show that $G$ and $G'$ are nonsingular. Indeed, since

$$\sigma_{\min}(B) \ge 1 - \alpha > 0 \quad\text{and}\quad \sigma_{\min}(B + \delta B) \ge 1 - \alpha - \beta > 0,$$

$B$ and $B' = B + \delta B$ are nonsingular. Hence $G = DBD$ and $G' = DB'D$, as products of three nonsingular matrices, are nonsingular. The rest of the proof uses the technique from the proof of [18, Lemma 2]. However, instead of frequently referring to the lines of that proof, we shall provide here a complete proof of the theorem.
Note that $\Phi^* G$, where $\Phi = \mathrm{diag}(e^{\mathrm{i}\arg(g_{11})}, \ldots, e^{\mathrm{i}\arg(g_{nn})})$, has nonnegative diagonal elements. Let $\widetilde{G} = \Phi^* G$ and $\widetilde{G}' = \Phi^* G'$. Now, consider the unitary matrix

$$U = \begin{bmatrix} I_n & O \\ O & \Phi \end{bmatrix} \frac{1}{\sqrt{2}} \begin{bmatrix} I_n & I_n \\ I_n & -I_n \end{bmatrix} = \frac{1}{\sqrt{2}} \begin{bmatrix} I_n & I_n \\ \Phi & -\Phi \end{bmatrix}$$
and the Hermitian matrices

$$H = U^* \begin{bmatrix} O & G^* \\ G & O \end{bmatrix} U = \frac{1}{2} \begin{bmatrix} \widetilde{G} + \widetilde{G}^* & \widetilde{G} - \widetilde{G}^* \\ (\widetilde{G} - \widetilde{G}^*)^* & -(\widetilde{G} + \widetilde{G}^*) \end{bmatrix}, \tag{4.1}$$

$$H' = U^* \begin{bmatrix} O & G'^* \\ G' & O \end{bmatrix} U = \frac{1}{2} \begin{bmatrix} \widetilde{G}' + \widetilde{G}'^* & \widetilde{G}' - \widetilde{G}'^* \\ (\widetilde{G}' - \widetilde{G}'^*)^* & -(\widetilde{G}' + \widetilde{G}'^*) \end{bmatrix}. \tag{4.2}$$

The eigenvalues of $H$ are (cf. [7, Section 8.6]) $\sigma_1 \ge \cdots \ge \sigma_n > -\sigma_n \ge \cdots \ge -\sigma_1$, and the diagonal elements are $|g_{ii}|$, $1 \le i \le n$, in the first $n$ positions and $-|g_{ii}|$, $1 \le i \le n$, in the last $n$ positions on the diagonal. The eigenvalues of $H'$ are $\sigma'_1 \ge \cdots \ge \sigma'_n > -\sigma'_n \ge \cdots \ge -\sigma'_1$. Let us now show that $H$ is $\alpha$-s.d.d. if and only if $G$ is $\alpha$-s.d.d. To this end it suffices to prove

$$\|\Omega(A)\| = \|\Omega(B)\|, \qquad H = \Lambda A \Lambda, \qquad \Lambda = |\mathrm{diag}(H)|^{1/2}. \tag{4.3}$$

Obviously,

$$\Lambda = \begin{bmatrix} D & O \\ O & D \end{bmatrix}.$$
A straightforward calculation yields $\Lambda U = U \Lambda$. Hence

$$A = \Lambda^{-1} H \Lambda^{-1} = \Lambda^{-1} U^* \begin{bmatrix} O & G^* \\ G & O \end{bmatrix} U \Lambda^{-1} = U^* \Lambda^{-1} \begin{bmatrix} O & G^* \\ G & O \end{bmatrix} \Lambda^{-1} U = U^* \begin{bmatrix} O & B^* \\ B & O \end{bmatrix} U.$$

This agrees with the fact that $\mathrm{diag}(\Phi^* B) = I$, which holds because $\Phi$ and $D$ commute. An inspection of the relation (4.1) reveals that

$$\Omega(A) = U^* \begin{bmatrix} O & \Omega(\widetilde{B})^* \\ \Omega(\widetilde{B}) & O \end{bmatrix} U, \qquad \widetilde{B} = \Phi^* B,$$

which implies (4.3), since $\Omega(\widetilde{B}) = \Phi^*\,\Omega(B)$.
It remains to prove that $\|\delta A\| \le \beta$, where $\delta A = \Lambda^{-1}(H' - H)\Lambda^{-1}$, and to apply Theorem 3.1 to $H$ and $H'$. But this follows from the relations (4.1) and (4.2), since

$$U\,\delta A\, U^* = \begin{bmatrix} O & \delta B^* \\ \delta B & O \end{bmatrix} \quad\text{and}\quad \Bigl\| \begin{bmatrix} O & \delta B^* \\ \delta B & O \end{bmatrix} \Bigr\| = \|\delta B\| \le \beta.$$
Note that using Corollary 3.2 instead of Theorem 3.1 yields the implication

$$\sqrt{2}\,\alpha + \beta < 1 \;\Longrightarrow\; \Bigl|\frac{\sigma'_i}{\sigma_i} - 1\Bigr| \le \frac{\beta}{1 - \sqrt{2}\,\alpha}, \qquad 1 \le i \le n, \quad\text{where } \|\Omega(B)\|_F \le \alpha,$$

since $\|\Omega(A)\|_F = \sqrt{2}\,\|\Omega(B)\|_F$. Finally, using the argument from the first paragraph of Section 3.2, in Theorem 4.1 one can use any complex diagonal $D$ such that $G = D^* B D$, $\delta G = D^*\,\delta B\, D$ and $|b_{ii}| = 1$ for all $1 \le i \le n$.
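The Jordan–Wielandt embedding used in this proof can be sketched as follows (our own Python/NumPy illustration, not part of the paper): the eigenvalues of $W = \bigl[\begin{smallmatrix} O & G^* \\ G & O \end{smallmatrix}\bigr]$ are $\pm\sigma_i(G)$, and the unitary transformation with $U$ places $\pm|g_{ii}|$ on the diagonal.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# A test matrix with nonzero diagonal of mixed signs.
G = rng.standard_normal((n, n)) + np.diag(10.0 * rng.choice([-1.0, 1.0], n))

# Jordan-Wielandt matrix W: its eigenvalues are +/- the singular values of G.
W = np.block([[np.zeros((n, n)), G.conj().T],
              [G, np.zeros((n, n))]])

sigma = np.sort(np.linalg.svd(G, compute_uv=False))   # ascending
eigW = np.sort(np.linalg.eigvalsh(W))                 # ascending

# Unitary U = (1/sqrt(2)) [[I, I], [Phi, -Phi]] with Phi carrying the
# diagonal phases of G; then H = U* W U has diagonal (|g_ii|, -|g_ii|).
Phi = np.diag(np.exp(1j * np.angle(np.diag(G))))
I = np.eye(n)
U = np.block([[I, I], [Phi, -Phi]]) / np.sqrt(2)
Hmat = U.conj().T @ W @ U
```

The checks below confirm both facts: the spectrum of `W` consists of the singular values of `G` with both signs, and the diagonal of `Hmat` carries `|g_ii|` followed by `-|g_ii|`.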
A typical application of Theorem 4.1 lies in obtaining sharp accuracy estimates of the Kogbetliantz method for computing the SVD of triangular matrices. As mentioned in the introduction, after one or two initial QR factorizations, the obtained matrix will be more diagonal than the starting one. Applying afterwards the Kogbetliantz method, the iterated matrix will soon become scaled diagonally dominant. In [11] sharp accuracy estimates have been derived for the rotational parameters and for the updated diagonal elements corresponding to one Kogbetliantz step. With these results, which include the case of a general and of an $\alpha$-s.d.d. triangular matrix, one can obtain sharp accuracy estimates for one step and for one sweep of the method. In the early stage of the process the proof uses the technique from [14]. But in the later stage, when the iterated matrix becomes $\alpha$-s.d.d., it is appropriate to make the estimates dependent on $\alpha$, which tends to zero. At this stage, the analysis resembles the one in [17] and Theorem 4.1 is used.
We have used MATLAB to see how good the bounds in Theorems 3.1 and 4.1 are. The tests confirmed that they are good upper estimates of the real change of the eigenvalues and singular values.
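In the same spirit as the MATLAB tests mentioned above, the following small NumPy sketch (ours; the sizes, grading, and perturbation levels are arbitrary) checks the weaker bound $\beta/(1-\alpha-2\beta)$ of Theorem 4.1 on a random graded s.d.d. matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5

# G = D B D with |b_ii| = 1 and small off-diagonal part: an alpha-s.d.d. matrix.
B = np.eye(n) + 0.05 * rng.standard_normal((n, n))
np.fill_diagonal(B, 1.0)
D = np.diag(10.0 ** rng.integers(0, 3, n).astype(float))
G = D @ B @ D

dB = 1e-4 * rng.standard_normal((n, n))
dG = D @ dB @ D                            # perturbation scaled the same way

alpha = np.linalg.norm(B - np.eye(n), 2)   # ||Omega(B)||
beta = np.linalg.norm(dB, 2)               # ||dB||
bound = beta / (1 - alpha - 2 * beta)      # weaker bound of Theorem 4.1

s = np.sort(np.linalg.svd(G, compute_uv=False))
sp = np.sort(np.linalg.svd(G + dG, compute_uv=False))
observed = np.max(np.abs(sp / s - 1))
```

Runs of this kind show `observed` well below `bound`, uniformly over the grading in $D$, which is exactly the behavior the classical bound $\eta\,\kappa(G)$ cannot capture.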
Acknowledgement.
The authors are thankful to the anonymous referees for their helpful suggestions on how to improve the paper.
REFERENCES
1. J. Barlow and J. Demmel, Computing accurate eigensystems of scaled diagonally dominant matrices, SIAM J. Numer. Anal., 27 (1990), pp. 762–791.
2. J. Demmel and K. Veselić, Jacobi's method is more accurate than QR, SIAM J. Matrix Anal. Appl., 13 (1992), pp. 1204–1245.
3. J. Demmel, M. Gu, S. Eisenstat, I. Slapničar, K. Veselić, and Z. Drmač, Computing the singular value decomposition with high relative accuracy, Linear Algebra Appl., 299 (1999), pp. 21–80, also LAPACK Working Note 119.
4. F. M. Dopico, J. M. Molera, and J. Moro, An orthogonal high relative accuracy algorithm for the symmetric eigenproblem, SIAM J. Matrix Anal. Appl., 25(2) (2004), pp. 301–351.
5. Z. Drmač and K. Veselić, New fast and accurate Jacobi SVD algorithm I, SIAM J. Matrix Anal. Appl., 29(4) (2008), pp. 1322–1342.
6. Z. Drmač and K. Veselić, New fast and accurate Jacobi SVD algorithm II, SIAM J. Matrix Anal. Appl., 29(4) (2008), pp. 1343–1362.
7. G. H. Golub and C. F. van Loan, Matrix Computations, 3rd edn., The Johns Hopkins University Press, Baltimore, 1996.
8. V. Hari, Structure of almost diagonal matrices, Math. Commun., 4 (1999), pp. 135–158.
9. V. Hari and Z. Drmač, On scaled almost diagonal Hermitian matrix pairs, SIAM J. Matrix Anal. Appl., 18(4) (1997), pp. 1000–1012.
10. V. Hari and J. Matejaš, Quadratic convergence of scaled iterates by Kogbetliantz method, Comput. Suppl., 16 (2003), pp. 83–105.
11. V. Hari and J. Matejaš, Accuracy of two SVD algorithms for 2×2 triangular matrices, proposed for publication in Appl. Math. Comput.
12. R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1987.
13. I. Ipsen, Relative perturbation results for matrix eigenvalues and singular values, Acta Numerica, 8 (1998), pp. 151–201.
14. T. Londre and N. H. Rhee, Numerical stability of the parallel Jacobi method, SIAM J. Matrix Anal. Appl., 26(4) (2005), pp. 985–1000.
15. J. Matejaš, Quadratic convergence of scaled matrices in Jacobi method, Numer. Math., 87(1) (2000), pp. 171–199.
16. J. Matejaš, Convergence of scaled iterates by the Jacobi method, Linear Algebra Appl., 349 (2002), pp. 17–53.
17. J. Matejaš, Accuracy of the Jacobi method on scaled diagonally dominant symmetric matrices, to appear in SIAM J. Matrix Anal. Appl.
18. J. Matejaš and V. Hari, Scaled almost diagonal matrices with multiple singular values, Z. Angew. Math. Mech., 78(2) (1998), pp. 121–131.
19. J. Matejaš and V. Hari, Scaled iterates by Kogbetliantz method, in Proceedings of the 1st Conference on Applied Mathematics and Computations (Dubrovnik, Croatia, September 13–18, 1999), Dept. of Mathematics, University of Zagreb, 2001, pp. 1–20.
20. J. Matejaš and V. Hari, Quadratic convergence estimate of scaled iterates by J-symmetric Jacobi method, Linear Algebra Appl., 417 (2006), pp. 434–465.
21. F. D. Murnaghan and A. Wintner, A canonical form for real matrices under orthogonal transformations, Proc. Natl. Acad. Sci. USA, 17 (1931), pp. 417–420.
22. B. Parlett, The Symmetric Eigenvalue Problem, Prentice Hall, Englewood Cliffs, NJ, 1980.
23. A. Ruhe, On the quadratic convergence of the Jacobi method for normal matrices, BIT, 7 (1967), pp. 305–313.
24. I. Slapničar, Accurate Symmetric Eigenreduction by a Jacobi Method, Ph.D. thesis, University of Hagen, 1992.
25. I. Slapničar, Accurate computation of singular values and eigenvalues of symmetric matrices, Math. Commun., 1(2) (1996), pp. 153–168.
26. K. Veselić, An eigenreduction algorithm for definite matrix pairs and its applications to overdamped linear systems, Numer. Math., 64 (1993), pp. 241–269.
27. K. Veselić and I. Slapničar, Floating-point perturbations of Hermitian matrices, Linear Algebra Appl., 195 (1993), pp. 81–116.
