
STRUCTURAL DYNAMICS, VOL. 9, Version 0.1

Computational Dynamics

Søren R. K. Nielsen

[Cover figure: secant iteration on the characteristic polynomial $P(\lambda)$ with roots $\lambda_1, \lambda_2, \lambda_3$ and iterates $\lambda_{k-1}$, $\lambda_k$, $\lambda_{k+1}$, where
$y(\lambda) = P(\lambda_k) + \dfrac{P(\lambda_k) - P(\lambda_{k-1})}{\lambda_k - \lambda_{k-1}}\,\big(\lambda - \lambda_k\big)$.]

Aalborg tekniske Universitetsforlag
May 2003
Contents

6  LINEAR EIGENVALUE PROBLEMS  7
   6.1  Formulation of Linear Eigenvalue Problems  7
   6.2  Characteristic Polynomials  18
   6.3  Eigenvalue Separation Principle  24
   6.4  Shift  27
   6.5  Transformation of GEVP to SEVP  29
   6.6  Exercises  32

7  APPROXIMATE SOLUTION METHODS  33
   7.1  Static Condensation  33
   7.2  Rayleigh-Ritz Analysis  38
   7.3  Error Analysis  47
   7.4  Exercises  52

8  VECTOR ITERATION METHODS  53
   8.1  Introduction  53
   8.2  Inverse and Forward Vector Iteration  54
   8.3  Shift in Vector Iteration  65
   8.4  Inverse Vector Iteration with Rayleigh Quotient Shift  70
   8.5  Vector Iteration with Gram-Schmidt Orthogonalization  73
   8.6  Exercises  77

9  SIMILARITY TRANSFORMATION METHODS  79
   9.1  Introduction  79
   9.2  Special Jacobi Iteration  81
   9.3  General Jacobi Iteration  85
   9.4  Householder Reduction  90
   9.5  QR Iteration  98
   9.6  Exercises  106

10  SOLUTION OF LARGE EIGENVALUE PROBLEMS  107
   10.1  Introduction  107
   10.2  Simultaneous Inverse Vector Iteration  109
   10.3  Subspace Iteration  115
   10.4  Characteristic Polynomial Iteration  123
   10.5  Exercises  130

11  INDEX  131

A  Solutions to Exercises  133
   A.1  Exercise 6.1  134
   A.2  Exercise 6.2  138
   A.3  Exercise 6.3  141
   A.4  Exercise 6.4: Theory  142
   A.5  Exercise 7.1  146
   A.6  Exercise 7.2  149
   A.7  Exercise 7.3  151
   A.8  Exercise 8.1  153
   A.9  Exercise 8.2  156
   A.10  Exercise 9.3  158
   A.11  Exercise 9.6  164
   A.12  Exercise 10.1  169
   A.13  Exercise 10.3  172
   A.14  Exercise 10.5  175
Preface

This text has been prepared for the course on Computational Mechanics given in the 8th semester
of the structural engineering programme in civil engineering at Aalborg University.

The rather weird pagination, starting with Chapter 6, reflects the fact that only the latter half of
the text, dealing with eigenvalue analysis, had been completed by March 2003. The first part,
dealing with subjects such as numerical analysis of Fourier series, Fourier and Laplace transforms,
and numerical integration of dynamic equations of motion, will not be ready until March 2004.

Answers to all exercises given at the end of each chapter can be downloaded from the home
page of the course at the address: www.civil.auc.dk/i5/engelsk/dyn/index/htm

Aalborg University, May 2003
Søren R. K. Nielsen
CHAPTER 6
LINEAR EIGENVALUE PROBLEMS

6.1 Formulation of Linear Eigenvalue Problems

The basic equation of motion for forced vibrations of a linear viscously damped n-degree-of-freedom system reads [1]

\[
M\ddot{x} + C\dot{x} + Kx = f(t)\,,\quad t > 0\,;\qquad
x(0) = x_0\,,\quad \dot{x}(0) = \dot{x}_0
\tag{6-1}
\]

$x(t)$ is the vector of displacements from the static equilibrium state, and $f(t)$ is the dynamic load
vector. $K$, $M$ and $C$ denote the stiffness, mass and damping matrices, all of dimension $n \times n$.
For any vector $a \neq 0$ these fulfill the following positive definiteness and symmetry properties

\[
a^T K a > 0\,,\quad K = K^T\,;\qquad
a^T M a > 0\,,\quad M = M^T\,;\qquad
a^T C a > 0
\tag{6-2}
\]

If the structural system is not supported against rigid-body motions, the stiffness matrix is merely
positive semi-definite, so $a^T K a \geq 0$. Correspondingly, if some degrees of freedom are not
carrying kinetic energy (pseudo degrees of freedom with zero mass or zero mass moment of
inertia), the mass matrix is merely positive semi-definite, so $a^T M a \geq 0$. The positive definiteness
of the damping matrix is a formal statement of the physical property that any non-zero velocity of
the system should be related to energy dissipation. $C$ need not fulfill any symmetry properties;
however, energy dissipation is confined to its symmetric part. So-called aeroelastic loads are
external dynamic loads depending on the structural deformation, which are often assumed to be
proportional to the structural velocity, i.e. $f(t) = C_a\,\dot{x}(t)$. If the aeroelastic damping matrix
$C_a$ is absorbed in the total damping matrix $C$, no definiteness property can be stated for the
latter matrix.

[1] S.R.K. Nielsen: Vibration Theory, Vol. 1. Linear Vibration Theory. Aalborg tekniske Universitetsforlag, 1998.
Undamped eigenvibrations $\big(C = 0,\ f(t) \equiv 0\big)$ are obtained as linearly independent solutions to
the homogeneous matrix differential equation

\[
M\ddot{x} + Kx = 0
\tag{6-3}
\]

Solutions are searched for on the form

\[
x(t) = \Phi^{(j)} e^{i\omega_j t}
\tag{6-4}
\]

where $i = \sqrt{-1}$ is the complex unit. Insertion of (6-4) into (6-3) provides the following
homogeneous system of linear equations for the determination of the amplitude $\Phi^{(j)}$ and the
unknown constant $\omega_j$

\[
\big(K - \lambda_j M\big)\Phi^{(j)} = 0\,,\qquad \lambda_j = \omega_j^2
\tag{6-5}
\]

(6-5) is a so-called generalized eigenvalue problem (GEVP). If $M = I$, where $I$ is the identity
matrix, the eigenvalue problem is referred to as a special eigenvalue problem (SEVP).

The necessary condition for non-trivial solutions (i.e. $\Phi^{(j)} \neq 0$) is that the determinant of the
coefficient matrix is equal to zero. This leads to the characteristic equation

\[
P(\lambda) = \det\big(K - \lambda M\big) = 0
\tag{6-6}
\]

$P(\lambda)$ is known as the characteristic polynomial. This may be expanded as

\[
P(\lambda) = a_0\lambda^n + a_1\lambda^{n-1} + \cdots + a_{n-1}\lambda + a_n
\tag{6-7}
\]

The constants $a_0, a_1, \ldots, a_n$ are known as the invariants of the GEVP. This designation stems
from the fact that the characteristic polynomial (6-7) is invariant under any rotation of the
coordinate system. Obviously, $a_0 = (-1)^n\det(M)$ and $a_n = \det(K)$. The $n$th order equation
(6-6) determines $n$ solutions, $\lambda_1, \lambda_2, \ldots, \lambda_n$.

Assume that either $M$ or $K$ is positive definite. Then all eigenvalues $\lambda_j$ are non-negative and
real, and may be ordered in ascending magnitude as follows

\[
0 \leq \lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_{n-1} \leq \lambda_n
\tag{6-8}
\]

$\lambda_n = \infty$, if $\det(M) = 0$. Similarly, $\lambda_1 = 0$, if $\det(K) = 0$. The eigenvalues are denoted as
simple, if $\lambda_1 < \lambda_2 < \cdots < \lambda_{n-1} < \lambda_n$. The undamped circular eigenfrequencies are
related to the eigenvalues as follows

\[
\omega_j = \sqrt{\lambda_j}
\tag{6-9}
\]

The corresponding solutions for the amplitude functions, $\Phi^{(1)}, \ldots, \Phi^{(n)}$, are denoted the
undamped eigenmodes of the system, which are real as well.
The eigenvalue problems (6-5) can be assembled into the following matrix formulation

\[
K\big[\Phi^{(1)}\ \Phi^{(2)}\ \cdots\ \Phi^{(n)}\big] =
M\big[\Phi^{(1)}\ \Phi^{(2)}\ \cdots\ \Phi^{(n)}\big]
\begin{bmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & \lambda_n
\end{bmatrix}
\quad\Rightarrow\quad
K\Phi = M\Phi\Lambda
\tag{6-10}
\]

where

\[
\Lambda = \begin{bmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & \lambda_n
\end{bmatrix}
\tag{6-11}
\]

and $\Phi$ is the so-called modal matrix of dimension $n \times n$, defined as

\[
\Phi = \big[\Phi^{(1)}\ \Phi^{(2)}\ \cdots\ \Phi^{(n)}\big]
\tag{6-12}
\]

If the eigenvalues are simple, the eigenmodes fulfill the following orthogonality properties [1]

\[
\Phi^{(i)\,T} M \Phi^{(j)} =
\begin{cases}
0 & ,\ i \neq j \\
M_i & ,\ i = j
\end{cases}
\tag{6-13}
\]

\[
\Phi^{(i)\,T} K \Phi^{(j)} =
\begin{cases}
0 & ,\ i \neq j \\
\omega_i^2 M_i & ,\ i = j
\end{cases}
\tag{6-14}
\]

where $M_i$ denotes the modal mass.

The orthogonality properties (6-13) can be assembled in the following matrix formulation

\[
\big[\Phi^{(1)}\ \Phi^{(2)}\ \cdots\ \Phi^{(n)}\big]^T M
\big[\Phi^{(1)}\ \Phi^{(2)}\ \cdots\ \Phi^{(n)}\big] =
\begin{bmatrix}
M_1 & 0 & \cdots & 0 \\
0 & M_2 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & M_n
\end{bmatrix}
\quad\Rightarrow\quad
\Phi^T M \Phi = m
\tag{6-15}
\]
where

\[
m = \begin{bmatrix}
M_1 & 0 & \cdots & 0 \\
0 & M_2 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & M_n
\end{bmatrix}
\tag{6-16}
\]

The corresponding grouping of the orthogonality properties (6-14) reads

\[
\Phi^T K \Phi = k
\tag{6-17}
\]

where

\[
k = \begin{bmatrix}
\omega_1^2 M_1 & 0 & \cdots & 0 \\
0 & \omega_2^2 M_2 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & \omega_n^2 M_n
\end{bmatrix}
\tag{6-18}
\]

If the eigenvalues are all simple, the eigenmodes become linearly independent, which means that
the inverse $\Phi^{-1}$ exists.

In the following it is generally assumed that the eigenmodes are normalized to unit modal mass,
so $m = I$. For the special eigenvalue problem, where $M = I$, it then follows from (6-15) that

\[
\Phi^{-1} = \Phi^T
\tag{6-19}
\]

A matrix fulfilling (6-19) is known as orthonormal or unitary, and specifies a rotation of the
coordinate system. All column and row vectors have the length 1, and are mutually orthogonal.

It follows from (6-15) and (6-17) that in case of simple eigenvalues a so-called similarity
transformation exists, defined by the modal matrix $\Phi$, that reduces the mass and stiffness
matrices to a diagonal form. In case of multiple eigenvalues the problem becomes considerably more
complicated. For the standard eigenvalue problem with multiple eigenvalues it can be shown that
the stiffness matrix merely reduces to the so-called Jordan normal form under the considered
similarity transformation, given as follows

\[
k = \begin{bmatrix}
k_1 & 0 & \cdots & 0 \\
0 & k_2 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & k_m
\end{bmatrix}
\tag{6-20}
\]

where $m \leq n$ denotes the number of different eigenvalues, and $k_i$ signifies the so-called Jordan
boxes, which are block matrices of the form

\[
\big[\omega_i^2\big]\,,\quad
\begin{bmatrix}
\omega_i^2 & 1 \\
0 & \omega_i^2
\end{bmatrix},\quad
\begin{bmatrix}
\omega_i^2 & 1 & 0 \\
0 & \omega_i^2 & 1 \\
0 & 0 & \omega_i^2
\end{bmatrix},\quad
\begin{bmatrix}
\omega_i^2 & 1 & 0 & 0 \\
0 & \omega_i^2 & 1 & 0 \\
0 & 0 & \omega_i^2 & 1 \\
0 & 0 & 0 & \omega_i^2
\end{bmatrix},\ \ldots
\tag{6-21}
\]
The equations of motion (6-1) may be reformulated on the following state vector form of coupled
1st order differential equations

\[
A\dot{z} + Bz = F(t)\,,\quad t > 0\,;\qquad z(0) = z_0
\tag{6-22}
\]

\[
z(t) = \begin{bmatrix} x(t) \\ \dot{x}(t) \end{bmatrix},\quad
z_0 = \begin{bmatrix} x_0 \\ \dot{x}_0 \end{bmatrix},\quad
F(t) = \begin{bmatrix} f(t) \\ 0 \end{bmatrix},\quad
A = \begin{bmatrix} C & M \\ M & 0 \end{bmatrix},\quad
B = \begin{bmatrix} K & 0 \\ 0 & -M \end{bmatrix}
\tag{6-23}
\]

Damped eigenvibrations are obtained as linearly independent solutions to the homogeneous matrix
differential equation

\[
A\dot{z} + Bz = 0
\tag{6-24}
\]

Analogously to (6-4), solutions are searched for on the form

\[
z(t) = \Phi^{(j)} e^{\lambda_j t}
\tag{6-25}
\]

Insertion of (6-25) into (6-24) provides the following homogeneous system of linear equations for
the determination of the amplitude $\Phi^{(j)}$ and the unknown constant $\lambda_j$

\[
\big(\lambda_j A + B\big)\Phi^{(j)} = 0
\tag{6-26}
\]

(6-26) is a GEVP of the dimension $2n$. The principal difference to (6-5) is that neither $A$ nor
$B$ is a positive definite matrix. For this reason the damped eigenvalues, $\lambda_j$, and the damped
eigenmodes, $\Phi^{(j)}$, are generally complex. Upon complex conjugation of (6-26) it is seen that if
$(\lambda, \Phi)$ denotes an eigen-pair (solution) to (6-26), then $(\lambda^*, \Phi^*)$ is also an
eigen-pair, where $*$ denotes complex conjugation. Hence, all eigen-pairs are either real, or
mutually complex conjugate. Since the orthogonality conditions (6-13) and (6-14) rely on the
symmetry properties of the mass and stiffness matrices, and the matrix $A$ is generally not
symmetric, similar orthogonality properties for the damped eigenmodes do not hold. Instead, the
so-called adjoint eigenvalue problem to (6-26) is considered

\[
\big(\lambda_i A^T + B^T\big)\Phi_a^{(i)} = 0
\tag{6-27}
\]

It can be shown that the eigenvalues of the direct and adjoint eigenvalue problems are identical.
Moreover, the eigenmodes of the direct and adjoint eigenvalue problems fulfill the following
orthogonality properties, see the proof below

\[
\Phi_a^{(i)\,T} A \Phi^{(j)} =
\begin{cases}
0 & ,\ i \neq j \\
m_i & ,\ i = j
\end{cases}
\tag{6-28}
\]

\[
\Phi_a^{(i)\,T} B \Phi^{(j)} =
\begin{cases}
0 & ,\ i \neq j \\
-\lambda_i m_i & ,\ i = j
\end{cases}
\tag{6-29}
\]

$m_i$ denotes the damped modal mass.
(6-28) and (6-29) can be assembled into the matrix formulations

\[
\Phi_a^T A \Phi = a
\tag{6-30}
\]

\[
\Phi_a^T B \Phi = b
\tag{6-31}
\]

where $\Phi$ and $\Phi_a$ are modal matrices of dimension $2n \times 2n$, defined as

\[
\Phi = \big[\Phi^{(1)}\ \Phi^{(2)}\ \cdots\ \Phi^{(2n)}\big]
\tag{6-32}
\]

\[
\Phi_a = \big[\Phi_a^{(1)}\ \Phi_a^{(2)}\ \cdots\ \Phi_a^{(2n)}\big]
\tag{6-33}
\]

and

\[
a = \begin{bmatrix}
m_1 & 0 & \cdots & 0 \\
0 & m_2 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & m_{2n}
\end{bmatrix}
\tag{6-34}
\]

\[
b = \begin{bmatrix}
-\lambda_1 m_1 & 0 & \cdots & 0 \\
0 & -\lambda_2 m_2 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & -\lambda_{2n} m_{2n}
\end{bmatrix}
\tag{6-35}
\]

It follows from (6-30) that the direct and adjoint modal matrices are related as follows

\[
\Phi_a = \big(A^{-1}\big)^T\big(\Phi^{-1}\big)^T a^T
= \big(A^T\big)^{-1}\big(\Phi^T\big)^{-1} a
\tag{6-36}
\]

If $A$ and $B$ are symmetric, the direct and adjoint eigenvalue problems (6-26) and (6-27) become
identical. Such a problem is denoted as self-adjoint. Obviously, (6-5) is self-adjoint.

In what follows we shall primarily consider the GEVP (6-5), i.e. the involved system matrices are
symmetric and non-negative definite.
Box 6.1: Proof of orthogonality properties of damped modes

(6-26) is pre-multiplied with $\Phi_a^{(i)\,T}$, and (6-27) is pre-multiplied with $\Phi^{(j)\,T}$,
leading to the identities

\[
\lambda_j\,\Phi_a^{(i)\,T} A \Phi^{(j)} + \Phi_a^{(i)\,T} B \Phi^{(j)} = 0
\tag{6-37}
\]

\[
\lambda_i\,\Phi^{(j)\,T} A^T \Phi_a^{(i)} + \Phi^{(j)\,T} B^T \Phi_a^{(i)} = 0
\quad\Rightarrow\quad
\lambda_i\,\Phi_a^{(i)\,T} A \Phi^{(j)} + \Phi_a^{(i)\,T} B \Phi^{(j)} = 0
\tag{6-38}
\]

The last statement follows from transposing the previous one. Withdrawal of (6-38) from (6-37)
provides

\[
\big(\lambda_j - \lambda_i\big)\,\Phi_a^{(i)\,T} A \Phi^{(j)} = 0
\tag{6-39}
\]

Since $\Phi_a^{(i)\,T} A \Phi^{(i)} \neq 0$, (6-39) can only be fulfilled for $i = j$ if the eigenvalues
of the direct and the adjoint problems are identical.

Next, presuming simple eigenvalues, so $\lambda_i \neq \lambda_j$ for $i \neq j$, (6-39) can only be
fulfilled if $\Phi_a^{(i)\,T} A \Phi^{(j)} = 0$, corresponding to (6-28).

Finally, it then follows from (6-37) that $\Phi_a^{(i)\,T} B \Phi^{(j)} = 0$ for $i \neq j$, and that
$\Phi_a^{(i)\,T} B \Phi^{(i)} = -\lambda_i m_i$ for $i = j$, corresponding to (6-29).
Example 6.1: Verification of eigensolutions

Given the following mass and stiffness matrices

\[
M = \begin{bmatrix} \tfrac{5}{4} & 0 \\ 0 & \tfrac{1}{5} \end{bmatrix},\qquad
K = \begin{bmatrix} 5 & -2 \\ -2 & 2 \end{bmatrix}
\tag{6-40}
\]

Verify that the eigensolutions with modal masses normalized to 1 are given by

\[
\Lambda = \begin{bmatrix} \omega_1^2 & 0 \\ 0 & \omega_2^2 \end{bmatrix}
= \begin{bmatrix} 2 & 0 \\ 0 & 12 \end{bmatrix},\qquad
\Phi = \big[\Phi^{(1)}\ \Phi^{(2)}\big]
= \begin{bmatrix} \tfrac{4}{5} & \tfrac{2}{5} \\ 1 & -2 \end{bmatrix}
\tag{6-41}
\]

Based on the proposed eigensolutions the following calculations are performed, cf. (6-10)

\[
K\Phi = \begin{bmatrix} 5 & -2 \\ -2 & 2 \end{bmatrix}
\begin{bmatrix} \tfrac{4}{5} & \tfrac{2}{5} \\ 1 & -2 \end{bmatrix}
= \begin{bmatrix} 2 & 6 \\ \tfrac{2}{5} & -\tfrac{24}{5} \end{bmatrix}
\,,\qquad
M\Phi\Lambda = \begin{bmatrix} \tfrac{5}{4} & 0 \\ 0 & \tfrac{1}{5} \end{bmatrix}
\begin{bmatrix} \tfrac{4}{5} & \tfrac{2}{5} \\ 1 & -2 \end{bmatrix}
\begin{bmatrix} 2 & 0 \\ 0 & 12 \end{bmatrix}
= \begin{bmatrix} 2 & 6 \\ \tfrac{2}{5} & -\tfrac{24}{5} \end{bmatrix}
\tag{6-42}
\]

This proves the validity of the proposed eigensolutions. The orthonormality follows from the
following calculations, cf. (6-15) and (6-17)

\[
\Phi^T M \Phi =
\begin{bmatrix} \tfrac{4}{5} & \tfrac{2}{5} \\ 1 & -2 \end{bmatrix}^T
\begin{bmatrix} \tfrac{5}{4} & 0 \\ 0 & \tfrac{1}{5} \end{bmatrix}
\begin{bmatrix} \tfrac{4}{5} & \tfrac{2}{5} \\ 1 & -2 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
\,,\qquad
\Phi^T K \Phi =
\begin{bmatrix} \tfrac{4}{5} & \tfrac{2}{5} \\ 1 & -2 \end{bmatrix}^T
\begin{bmatrix} 5 & -2 \\ -2 & 2 \end{bmatrix}
\begin{bmatrix} \tfrac{4}{5} & \tfrac{2}{5} \\ 1 & -2 \end{bmatrix}
= \begin{bmatrix} 2 & 0 \\ 0 & 12 \end{bmatrix}
\tag{6-43}
\]
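The verification in (6-42)-(6-43) can also be performed numerically. The following is a minimal
MATLAB sketch (not part of the original example) using the built-in generalized eigensolver; the
rescaling loop enforces the unit modal mass normalization, since eig normalizes its eigenvectors
arbitrarily.

```matlab
% Numerical check of Example 6.1 via the generalized eigensolver.
M = [5/4 0; 0 1/5];
K = [5 -2; -2 2];
[Phi, Lambda] = eig(K, M);          % K*Phi = M*Phi*Lambda
for j = 1:2                         % rescale to unit modal mass
    Phi(:,j) = Phi(:,j) / sqrt(Phi(:,j)' * M * Phi(:,j));
end
disp(diag(Lambda))                  % 2 and 12
disp(Phi' * M * Phi)                % identity matrix
disp(Phi' * K * Phi)                % diag(2, 12)
```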
Example 6.2: M- and K-orthogonal vectors

Given the following mass and stiffness matrices

\[
M = \begin{bmatrix} \tfrac{1}{2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \tfrac{1}{2} \end{bmatrix},\qquad
K = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}
\tag{6-44}
\]

Additionally, the following vectors are considered

\[
v_1 = \begin{bmatrix} 1 \\ \tfrac{\sqrt{2}}{2} \\ 0 \end{bmatrix},\qquad
v_2 = \begin{bmatrix} 1 \\ -\tfrac{\sqrt{2}}{2} \\ 0 \end{bmatrix}
\tag{6-45}
\]

From (6-45) the following matrix is formed

\[
V = [v_1\ v_2] =
\begin{bmatrix} 1 & 1 \\ \tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2} \\ 0 & 0 \end{bmatrix}
\tag{6-46}
\]

We may then perform the following calculations, cf. (6-15) and (6-17)

\[
V^T M V =
\begin{bmatrix} 1 & 1 \\ \tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2} \\ 0 & 0 \end{bmatrix}^T
\begin{bmatrix} \tfrac{1}{2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \tfrac{1}{2} \end{bmatrix}
\begin{bmatrix} 1 & 1 \\ \tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2} \\ 0 & 0 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
\,,\qquad
V^T K V =
\begin{bmatrix} 1 & 1 \\ \tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2} \\ 0 & 0 \end{bmatrix}^T
\begin{bmatrix} 2 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 2 \end{bmatrix}
\begin{bmatrix} 1 & 1 \\ \tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2} \\ 0 & 0 \end{bmatrix}
= \begin{bmatrix} 2.5858 & 0 \\ 0 & 5.4142 \end{bmatrix}
\tag{6-47}
\]

(6-47) shows that the vectors $v_1$ and $v_2$ are mutually orthogonal with weights $M$ and $K$, and
that both have been normalized to unit modal mass. As will be shown in Example 6.3, neither $v_1$
nor $v_2$ is an eigenmode, and the eigenvalues are different from 2.5858 and 5.4142. However, if
three linearly independent vectors are mutually orthogonal with weights $M$ and $K$, they will be
eigenmodes of the system.
Example 6.3: Analytical calculation of eigensolutions

The mass and stiffness matrices defined in Example 6.2 are considered again. Now, an analytical
solution for the eigenmodes and eigenvalues is wanted.

The generalized eigenvalue problem (6-5) becomes

\[
\begin{bmatrix}
2 - \tfrac{1}{2}\lambda_j & -1 & 0 \\
-1 & 4 - \lambda_j & -1 \\
0 & -1 & 2 - \tfrac{1}{2}\lambda_j
\end{bmatrix}
\begin{bmatrix} \Phi_1^{(j)} \\ \Phi_2^{(j)} \\ \Phi_3^{(j)} \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
\tag{6-48}
\]

The characteristic equation (6-6) becomes

\[
P(\lambda_j) = \det\begin{bmatrix}
2 - \tfrac{1}{2}\lambda_j & -1 & 0 \\
-1 & 4 - \lambda_j & -1 \\
0 & -1 & 2 - \tfrac{1}{2}\lambda_j
\end{bmatrix}
= \Big(2 - \tfrac{1}{2}\lambda_j\Big)
\Big(\big(4 - \lambda_j\big)\big(2 - \tfrac{1}{2}\lambda_j\big) - 1\Big)
+ 1\cdot\Big(-\big(2 - \tfrac{1}{2}\lambda_j\big)\Big)
= \Big(2 - \tfrac{1}{2}\lambda_j\Big)\Big(6 - 4\lambda_j + \tfrac{1}{2}\lambda_j^2\Big) = 0
\quad\Rightarrow\quad
\lambda_j = \begin{cases} 2 & ,\ j = 1 \\ 4 & ,\ j = 2 \\ 6 & ,\ j = 3 \end{cases}
\tag{6-49}
\]

Initially, the eigenmodes are normalized by setting an arbitrary component to 1. Here we shall
choose $\Phi_3^{(j)} = 1$. The remaining components $\Phi_1^{(j)}$ and $\Phi_2^{(j)}$ are then determined
from any two of the three equations (6-48). The first and the second equations are chosen,
corresponding to

\[
\begin{bmatrix}
2 - \tfrac{1}{2}\lambda_j & -1 \\
-1 & 4 - \lambda_j
\end{bmatrix}
\begin{bmatrix} \Phi_1^{(j)} \\ \Phi_2^{(j)} \end{bmatrix}
= \begin{bmatrix} 0 \\ 1 \end{bmatrix}
\quad\Rightarrow\quad
\begin{bmatrix} \Phi_1^{(j)} \\ \Phi_2^{(j)} \\ \Phi_3^{(j)} \end{bmatrix}
= \begin{bmatrix}
\dfrac{2}{14 - 8\lambda_j + \lambda_j^2} \\[2mm]
\dfrac{4 - \lambda_j}{14 - 8\lambda_j + \lambda_j^2} \\[2mm]
1
\end{bmatrix}
\tag{6-50}
\]

The modal matrix with eigenmodes normalized as indicated in (6-50) is denoted as $\bar{\Phi}$. This
becomes

\[
\bar{\Phi} = \begin{bmatrix}
1 & -1 & 1 \\
1 & 0 & -1 \\
1 & 1 & 1
\end{bmatrix}
\tag{6-51}
\]

The modal masses become, cf. (6-15)

\[
m = \bar{\Phi}^T M \bar{\Phi} =
\begin{bmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{bmatrix}
\tag{6-52}
\]

$\Phi^{(1)}$ denotes the 1st eigenmode normalized to unit modal mass. This is related to
$\bar{\Phi}^{(1)}$ in the following way

\[
\Phi^{(1)} = \frac{1}{\sqrt{M_1}}\,\bar{\Phi}^{(1)}
= \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}
\tag{6-53}
\]

The other modes are treated in the same manner, which results in the following eigensolutions

\[
\Lambda = \begin{bmatrix}
\omega_1^2 & 0 & 0 \\ 0 & \omega_2^2 & 0 \\ 0 & 0 & \omega_3^2
\end{bmatrix}
= \begin{bmatrix} 2 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 6 \end{bmatrix},\qquad
\Phi = \big[\Phi^{(1)}\ \Phi^{(2)}\ \Phi^{(3)}\big] =
\begin{bmatrix}
\tfrac{\sqrt{2}}{2} & -1 & \tfrac{\sqrt{2}}{2} \\
\tfrac{\sqrt{2}}{2} & 0 & -\tfrac{\sqrt{2}}{2} \\
\tfrac{\sqrt{2}}{2} & 1 & \tfrac{\sqrt{2}}{2}
\end{bmatrix}
\tag{6-54}
\]
Example 6.4: Undamped and damped eigenvibrations of a 2DOF system

Fig. 6-1 Two-degrees-of-freedom system. [Two masses of 1 kg and 2 kg with displacement degrees
of freedom $x_1$ and $x_2$, connected in series by springs of 100 N/m, 200 N/m and 300 N/m with
parallel dampers of 3 kg/s, 2 kg/s and 1 kg/s.]

The system shown in Fig. 6-1 has the indicated two degrees of freedom $x_1$ and $x_2$. The
corresponding mass, damping and stiffness matrices become

\[
M = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}\ \mathrm{kg},\qquad
C = \begin{bmatrix} 5 & -2 \\ -2 & 3 \end{bmatrix}\ \mathrm{\tfrac{kg}{s}},\qquad
K = \begin{bmatrix} 300 & -200 \\ -200 & 500 \end{bmatrix}\ \mathrm{\tfrac{N}{m}}
\tag{6-55}
\]

The eigensolutions with modal masses normalized to 1 are given as

\[
\Lambda = \begin{bmatrix} \omega_1^2 & 0 \\ 0 & \omega_2^2 \end{bmatrix}
= \begin{bmatrix} 131.39 & 0 \\ 0 & 418.61 \end{bmatrix}\ \mathrm{s}^{-2},\qquad
\Phi = \big[\Phi^{(1)}\ \Phi^{(2)}\big]
= \begin{bmatrix} 0.64262 & -0.76618 \\ 0.54177 & 0.45440 \end{bmatrix}
\tag{6-56}
\]

The matrices $A$ and $B$ defined by (6-23) become

\[
A = \begin{bmatrix}
5 & -2 & 1 & 0 \\
-2 & 3 & 0 & 2 \\
1 & 0 & 0 & 0 \\
0 & 2 & 0 & 0
\end{bmatrix},\qquad
B = \begin{bmatrix}
300 & -200 & 0 & 0 \\
-200 & 500 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & -2
\end{bmatrix}
\tag{6-57}
\]

The eigenvalues and eigenvectors become

\[
\Lambda = \operatorname{diag}\big(\lambda_1,\ \lambda_2,\ \lambda_3,\ \lambda_4\big)
= \operatorname{diag}\big(-2.4737 - 20.231i,\ \ -2.4737 + 20.231i,\ \ -0.7763 - 11.480i,\ \ -0.7763 + 11.480i\big)
\tag{6-58}
\]

\[
\Phi = \big[\Phi^{(1)}\ \Phi^{(2)}\ \Phi^{(3)}\ \Phi^{(4)}\big] =
\begin{bmatrix}
-0.00503 + 0.04797i & -0.00503 - 0.04797i & -0.00037 + 0.08192i & -0.00037 - 0.08192i \\
\phantom{-}0.00875 - 0.02658i & \phantom{-}0.00875 + 0.02658i & -0.00803 - 0.06908i & -0.00803 + 0.06908i \\
\phantom{-}0.98300 - 0.01700i & \phantom{-}0.98300 + 0.01700i & \phantom{-}0.94070 - 0.05930i & \phantom{-}0.94070 + 0.05930i \\
-0.55935 - 0.11133i & -0.55935 + 0.11133i & -0.78686 + 0.14585i & -0.78686 - 0.14585i
\end{bmatrix}
\tag{6-59}
\]

Since $A = A^T$ and $B = B^T$ the problem is self-adjoint, and $\Phi_a = \Phi$. The diagonal matrices
$a$ and $b$ become, cf. (6-30), (6-31)

\[
a = \Phi^T A \Phi = \operatorname{diag}\big(
-0.05786 + 0.14403i,\ \ -0.05786 - 0.14403i,\ \ \phantom{-}0.04957 + 0.36741i,\ \ \phantom{-}0.04957 - 0.36741i\big)
\tag{6-60}
\]

\[
b = \Phi^T B \Phi = \operatorname{diag}\big(
-3.0571 - 0.81439i,\ \ -3.0571 + 0.81439i,\ \ -4.17941 + 0.85432i,\ \ -4.17941 - 0.85432i\big)
\tag{6-61}
\]
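The damped eigensolutions (6-58)-(6-59) may be reproduced with a few lines of MATLAB. This is an
added sketch, not part of the original example: (6-26) is rewritten as $-B\Phi^{(j)} = \lambda_j A\Phi^{(j)}$
and handed to the generalized eigensolver; the ordering and scaling of the computed eigen-pairs may
differ from the tables above.

```matlab
% State-space eigenproblem of Example 6.4: (lambda*A + B)*Phi = 0.
M = [1 0; 0 2];  C = [5 -2; -2 3];  K = [300 -200; -200 500];
A = [C M; M zeros(2)];
B = [K zeros(2); zeros(2) -M];
[Phi, Lambda] = eig(-B, A);          % -B*Phi = lambda*A*Phi
disp(diag(Lambda))                   % -2.4737 +/- 20.231i and -0.7763 +/- 11.480i
% consistency check: the lower half of each column is lambda times the upper half
disp(Phi(3:4,1) ./ Phi(1:2,1))
```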
6.2 Characteristic Polynomials

Since the matrix $K - \lambda M$ is symmetric, it may be Gauss factorized on the form

\[
K - \lambda M = L\,D\,L^T
\tag{6-62}
\]

where $L$ is a lower triangular matrix with units in the main diagonal, and $D$ is a diagonal matrix,
given as

\[
L = \begin{bmatrix}
1 & & & \\
l_{21} & 1 & & \\
l_{31} & l_{32} & 1 & \\
\vdots & \vdots & \vdots & \ddots \\
l_{n1} & l_{n2} & l_{n3} & \cdots\ \ 1
\end{bmatrix}
\tag{6-63}
\]

\[
D = \begin{bmatrix}
d_{11} & 0 & \cdots & 0 \\
0 & d_{22} & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & d_{nn}
\end{bmatrix}
\tag{6-64}
\]

Since $\det(L) = \det(L^T) = 1$, the following representation of the characteristic polynomial (6-6)
is obtained

\[
P(\lambda) = \det\big(L D L^T\big)
= \det\big(L\big)\det\big(D\big)\det\big(L^T\big)
= \det\big(D\big)
= d_{11}\,d_{22}\cdots d_{nn}
\tag{6-65}
\]

At the same time (6-7) can be written on the form

\[
P(\lambda) = a_0\big(\lambda - \lambda_1\big)\big(\lambda - \lambda_2\big)\cdots\big(\lambda - \lambda_n\big)
\tag{6-66}
\]

Despite the striking similarity between (6-65) and (6-66), $d_{ii}$ is very different from the
corresponding factor $\big(\lambda - \lambda_i\big)$ in (6-66), as demonstrated in Example 6.6 below.

Let $\lambda$ be varying in the interval $]0, \infty[$. From (6-7) it follows that
$P(0) = a_n = \det(K) \geq 0$. Since no factor in (6-66) is changing sign for
$\lambda \in\ ]0, \lambda_1[$, it follows that $P(\lambda) > 0$ in this interval. For
$\lambda \in\ ]\lambda_1, \lambda_2[$ the factor $\big(\lambda - \lambda_1\big)$ has changed its sign,
while the remaining factors remain with unchanged sign. Hence, $P(\lambda)$ changes sign at the
passage of the eigenvalue $\lambda_1$. Similar sign changes occur at the passage of the other
eigenvalues.

Then, similar sign changes must take place in (6-65). For $\lambda \in\ ]0, \lambda_1[$ all diagonal
components $d_{ii}$ are positive. At the passage of $\lambda_1$, so $\lambda \in\ ]\lambda_1, \lambda_2[$,
exactly one diagonal component becomes negative. At the passage of $\lambda_2$ one extra component
becomes negative. Generally, if $m$ components are negative, it can be decided that $\lambda$ is a
number placed somewhere in the interval $]\lambda_m, \lambda_{m+1}[$. This property can be used to
bound the intervals for the eigenvalues, as demonstrated in Example 6.7 below. Actually, the
principle can be used to calculate, say, the $j$th eigenvalue $\lambda_j$ with arbitrary accuracy.
The method is simply to make an initial sweep of calculations of $P(\lambda)$ until $j$ components
in the main diagonal of $D$ are negative. Successively, we can then perform additional calculations
to reduce the interval where the $j$th sign change takes place. This procedure of calculating
eigenvalues is known as the telescope method.
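The telescope method is easily mechanized. The sketch below is an illustration added here, not the
book's implementation: it bisects on $\mu$ and counts the eigenvalues below $\mu$ from the inertia
of the factorization of $K - \mu M$. MATLAB's ldl returns a block-diagonal $D$ with $1\times 1$ and
$2\times 2$ blocks, whose eigenvalues give the required count by Sylvester's law of inertia.

```matlab
% Telescope (bisection) method for the j-th eigenvalue of (K, M).
% lo and hi must bracket lambda_j, e.g. from a preliminary sweep.
function lam = telescope(K, M, j, lo, hi, tol)
    while hi - lo > tol
        mu = (lo + hi) / 2;
        [~, D] = ldl(K - mu*M);        % K - mu*M = L*D*L'
        if sum(eig(D) < 0) >= j        % at least j eigenvalues below mu
            hi = mu;
        else
            lo = mu;
        end
    end
    lam = (lo + hi) / 2;
end
```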
Box 6.2: Gauss factorization of a symmetric matrix

Gauss factorization reduces a symmetric matrix $K$ of dimension $n \times n$ to an upper triangular
matrix $S$ in a sequence of $n-1$ matrix multiplications. After the first $(i-1)$ matrix
multiplications the following matrix is considered

\[
K^{(i)} = L_{i-1}^{-1} L_{i-2}^{-1} \cdots L_1^{-1} K\,,\quad i = 2, \ldots, n
\tag{6-67}
\]

where $K^{(1)} = K$. Sequentially, the indicated matrix multiplications produce zeros below the main
diagonal of the columns $j = 1, \ldots, i-1$. Then, pre-multiplication of $K^{(i)}$ with $L_i^{-1}$
will produce zeros below the main diagonal of the $i$th column without affecting the zeros in the
previous columns. $L_i^{-1}$ is a lower triangular matrix with units in the principal diagonal, and
where only the $i$th column is non-zero below the diagonal, given as

\[
L_i^{-1} = \begin{bmatrix}
1 & & & & & \\
0 & 1 & & & & \\
\vdots & \vdots & \ddots & & & \\
0 & 0 & \cdots & 1 & & \\
0 & 0 & \cdots & -l_{i+1,i} & 1 & \\
\vdots & \vdots & & \vdots & & \ddots \\
0 & 0 & \cdots & -l_{n,i} & 0 & \cdots\ \ 1
\end{bmatrix}
\tag{6-68}
\]

The components $l_{j,i}$ entering the $i$th column are given as

\[
l_{j,i} = \frac{K^{(i)}_{j,i}}{K^{(i)}_{i,i}}\,,\quad j = i+1, \ldots, n
\tag{6-69}
\]

where $K^{(i)}_{j,i}$ denotes the component in the $j$th row and $i$th column of $K^{(i)}$. By insertion
it is proved that the inverse of (6-68) is given as

\[
L_i = \begin{bmatrix}
1 & & & & & \\
0 & 1 & & & & \\
\vdots & \vdots & \ddots & & & \\
0 & 0 & \cdots & 1 & & \\
0 & 0 & \cdots & l_{i+1,i} & 1 & \\
\vdots & \vdots & & \vdots & & \ddots \\
0 & 0 & \cdots & l_{n,i} & 0 & \cdots\ \ 1
\end{bmatrix}
\tag{6-70}
\]

Then $K^{(n)}$, obtained after the $(n-1)$th multiplication with $L_{n-1}^{-1}$, has zeros below the
main diagonal in all the first $(n-1)$ columns, corresponding to an upper triangular matrix $S$.
Hence

\[
L_{n-1}^{-1} L_{n-2}^{-1} \cdots L_1^{-1} K = S
\quad\Rightarrow\quad
K = L\,S\,,\qquad L = L_1 L_2 \cdots L_{n-1}
\tag{6-71}
\]

Since $L$ defined by (6-71) is the product of lower triangular matrices with 1 in the main diagonal,
it becomes a matrix with the same structure as indicated by (6-63).

Because $K$ is symmetric, $S$ must have the structure

\[
S = D\,L^T
\tag{6-72}
\]

where $D$ is a diagonal matrix, given by (6-64). This proves the validity of the factorization (6-62).
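The algorithm of Box 6.2 translates directly into code. The following is a minimal MATLAB sketch
(added for illustration; it uses no pivoting, so all pivots $K^{(i)}_{i,i}$ are assumed non-zero, as
in the examples below):

```matlab
% Gauss factorization K = L*D*L' of a symmetric matrix, cf. Box 6.2.
function [L, D] = gauss_factorize(K)
    n = size(K, 1);
    L = eye(n);
    for i = 1:n-1
        for j = i+1:n
            L(j,i) = K(j,i) / K(i,i);             % l_{j,i}, cf. (6-69)
            K(j,:) = K(j,:) - L(j,i) * K(i,:);    % row operation of L_i^{-1}
        end
    end
    D = diag(diag(K));                            % K has been reduced to S = D*L'
end
```

Applied to the matrix of Example 6.5 below, the function reproduces the matrices $L$ and $D$ of
(6-76).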
Example 6.5: Gauss factorization of a three-dimensional matrix

Given the symmetric matrix

\[
K = K^{(1)} = \begin{bmatrix}
5 & -4 & 1 \\ -4 & 6 & -4 \\ 1 & -4 & 6
\end{bmatrix}
\tag{6-73}
\]

\[
L_1^{-1} = \begin{bmatrix}
1 & 0 & 0 \\ \tfrac{4}{5} & 1 & 0 \\ -\tfrac{1}{5} & 0 & 1
\end{bmatrix}
\ \Rightarrow\
K^{(2)} = L_1^{-1} K^{(1)} = \begin{bmatrix}
5 & -4 & 1 \\ 0 & 2.8 & -3.2 \\ 0 & -3.2 & 5.8
\end{bmatrix}
\,,\qquad
L^{(1)} = L_1 = \begin{bmatrix}
1 & 0 & 0 \\ -\tfrac{4}{5} & 1 & 0 \\ \tfrac{1}{5} & 0 & 1
\end{bmatrix}
\tag{6-74}
\]

\[
L_2^{-1} = \begin{bmatrix}
1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & \tfrac{3.2}{2.8} & 1
\end{bmatrix}
\ \Rightarrow\
K^{(3)} = L_2^{-1} K^{(2)} = S = \begin{bmatrix}
5 & -4 & 1 \\ 0 & 2.8 & -3.2 \\ 0 & 0 & 2.1429
\end{bmatrix}
\,,\qquad
L^{(2)} = L_1 L_2 = L = \begin{bmatrix}
1 & 0 & 0 \\ -0.8 & 1 & 0 \\ 0.2 & -1.1429 & 1
\end{bmatrix}
\tag{6-75}
\]

From this follows that

\[
L = \begin{bmatrix}
1 & 0 & 0 \\ -0.8 & 1 & 0 \\ 0.2 & -1.1429 & 1
\end{bmatrix},\qquad
D = \begin{bmatrix}
5 & 0 & 0 \\ 0 & 2.8 & 0 \\ 0 & 0 & 2.1429
\end{bmatrix}
\tag{6-76}
\]
Example 6.6: Gauss factorization of a three-dimensional generalized eigenvalue problem

Of course, for a given value of $\lambda$ the matrix $K - \lambda M$ may be factorized numerically
according to the method explained in Box 6.2. However, for smaller problems explicit expressions
may be derived analytically, as demonstrated in the following. Given the mass and stiffness
matrices defined in Example 6.2, the components of $L$ and $D$ are calculated from the following
identities, cf. (6-48), (6-62)

\[
\begin{bmatrix}
1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1
\end{bmatrix}
\begin{bmatrix}
d_{11} & 0 & 0 \\ 0 & d_{22} & 0 \\ 0 & 0 & d_{33}
\end{bmatrix}
\begin{bmatrix}
1 & l_{21} & l_{31} \\ 0 & 1 & l_{32} \\ 0 & 0 & 1
\end{bmatrix}
=
\begin{bmatrix}
d_{11} & d_{11} l_{21} & d_{11} l_{31} \\
d_{11} l_{21} & d_{22} + d_{11} l_{21}^2 & d_{11} l_{21} l_{31} + d_{22} l_{32} \\
d_{11} l_{31} & d_{11} l_{21} l_{31} + d_{22} l_{32} & d_{33} + d_{11} l_{31}^2 + d_{22} l_{32}^2
\end{bmatrix}
=
\begin{bmatrix}
2 - \tfrac{1}{2}\lambda & -1 & 0 \\
-1 & 4 - \lambda & -1 \\
0 & -1 & 2 - \tfrac{1}{2}\lambda
\end{bmatrix}
\tag{6-77}
\]

Equating the corresponding components on the left- and right-hand sides provides the following
equations for the determination of the unknown quantities, which are solved sequentially

\[
\begin{aligned}
d_{11} &= 2 - \tfrac{1}{2}\lambda = \tfrac{1}{2}\big(4 - \lambda\big) \\
d_{11}\,l_{21} &= -1 \ \Rightarrow\ l_{21} = -\frac{2}{4 - \lambda} \\
d_{22} + d_{11}\,l_{21}^2 &= 4 - \lambda \ \Rightarrow\
d_{22} = 4 - \lambda - \frac{2}{4 - \lambda} = \frac{\lambda^2 - 8\lambda + 14}{4 - \lambda} \\
d_{11}\,l_{31} &= 0 \ \Rightarrow\ l_{31} = 0 \\
d_{11}\,l_{21} l_{31} + d_{22}\,l_{32} &= -1 \ \Rightarrow\
l_{32} = -\frac{4 - \lambda}{\lambda^2 - 8\lambda + 14} \\
d_{33} + d_{11}\,l_{31}^2 + d_{22}\,l_{32}^2 &= 2 - \tfrac{1}{2}\lambda \ \Rightarrow\
d_{33} = 2 - \tfrac{1}{2}\lambda - \frac{4 - \lambda}{\lambda^2 - 8\lambda + 14}
= \frac{1}{2}\,\frac{\big(2 - \lambda\big)\big(4 - \lambda\big)\big(6 - \lambda\big)}{\lambda^2 - 8\lambda + 14}
\end{aligned}
\tag{6-78}
\]

Then the following expression for the characteristic polynomial is obtained, in agreement with (6-49)

\[
P(\lambda) = d_{11}\,d_{22}\,d_{33}
= \frac{1}{2}\big(4 - \lambda\big)\cdot
\frac{\lambda^2 - 8\lambda + 14}{4 - \lambda}\cdot
\frac{1}{2}\,\frac{\big(2 - \lambda\big)\big(4 - \lambda\big)\big(6 - \lambda\big)}{\lambda^2 - 8\lambda + 14}
= \frac{1}{4}\big(2 - \lambda\big)\big(4 - \lambda\big)\big(6 - \lambda\big)
\tag{6-79}
\]
Example 6.7: Bounds on eigenvalues

In this example bounds on the eigenvalues of the GEVP in Example 6.2 are constructed by a check of
the signs of the diagonal components of the matrix $D$, using the observation made at the end of
Section 6.2. The components of the matrices $L$ and $D$ may be calculated by the formulas indicated
in Example 6.6.

For $\mu = 1$ we get:

\[
K - \mu M = \begin{bmatrix}
\tfrac{3}{2} & -1 & 0 \\ -1 & 3 & -1 \\ 0 & -1 & \tfrac{3}{2}
\end{bmatrix}
= L D L^T = \begin{bmatrix}
1 & & \\ -\tfrac{2}{3} & 1 & \\ 0 & -\tfrac{3}{7} & 1
\end{bmatrix}
\begin{bmatrix}
\tfrac{3}{2} & 0 & 0 \\ 0 & \tfrac{7}{3} & 0 \\ 0 & 0 & \tfrac{15}{14}
\end{bmatrix}
\begin{bmatrix}
1 & -\tfrac{2}{3} & 0 \\ & 1 & -\tfrac{3}{7} \\ & & 1
\end{bmatrix}
\tag{6-80}
\]

As seen, $d_{11} = \tfrac{3}{2} > 0$, $d_{22} = \tfrac{7}{3} > 0$, $d_{33} = \tfrac{15}{14} > 0$. Hence
all three diagonal components are positive, from which it is concluded that $\lambda_1 > \mu = 1$.

For $\mu = 8$ we get:

\[
K - \mu M = \begin{bmatrix}
-2 & -1 & 0 \\ -1 & -4 & -1 \\ 0 & -1 & -2
\end{bmatrix}
= L D L^T = \begin{bmatrix}
1 & & \\ \tfrac{1}{2} & 1 & \\ 0 & \tfrac{2}{7} & 1
\end{bmatrix}
\begin{bmatrix}
-2 & 0 & 0 \\ 0 & -\tfrac{7}{2} & 0 \\ 0 & 0 & -\tfrac{12}{7}
\end{bmatrix}
\begin{bmatrix}
1 & \tfrac{1}{2} & 0 \\ & 1 & \tfrac{2}{7} \\ & & 1
\end{bmatrix}
\tag{6-81}
\]

As seen, $d_{11} = -2 < 0$, $d_{22} = -\tfrac{7}{2} < 0$, $d_{33} = -\tfrac{12}{7} < 0$. Hence all three
diagonal components are negative, from which it is concluded that $\lambda_3 < \mu = 8$.

For $\mu = 5$ we get:

\[
K - \mu M = \begin{bmatrix}
-\tfrac{1}{2} & -1 & 0 \\ -1 & -1 & -1 \\ 0 & -1 & -\tfrac{1}{2}
\end{bmatrix}
= L D L^T = \begin{bmatrix}
1 & & \\ 2 & 1 & \\ 0 & -1 & 1
\end{bmatrix}
\begin{bmatrix}
-\tfrac{1}{2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -\tfrac{3}{2}
\end{bmatrix}
\begin{bmatrix}
1 & 2 & 0 \\ & 1 & -1 \\ & & 1
\end{bmatrix}
\tag{6-82}
\]

As seen, $d_{11} = -\tfrac{1}{2} < 0$, $d_{22} = 1 > 0$, $d_{33} = -\tfrac{3}{2} < 0$. Hence two diagonal
components are negative and one is positive, from which it is concluded that
$\lambda_2 < \mu = 5 < \lambda_3$.

For $\mu = 3$ we get:

\[
K - \mu M = \begin{bmatrix}
\tfrac{1}{2} & -1 & 0 \\ -1 & 1 & -1 \\ 0 & -1 & \tfrac{1}{2}
\end{bmatrix}
= L D L^T = \begin{bmatrix}
1 & & \\ -2 & 1 & \\ 0 & 1 & 1
\end{bmatrix}
\begin{bmatrix}
\tfrac{1}{2} & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & \tfrac{3}{2}
\end{bmatrix}
\begin{bmatrix}
1 & -2 & 0 \\ & 1 & 1 \\ & & 1
\end{bmatrix}
\tag{6-83}
\]

As seen, $d_{11} = \tfrac{1}{2} > 0$, $d_{22} = -1 < 0$, $d_{33} = \tfrac{3}{2} > 0$. Hence two diagonal
components are positive and one is negative, from which it is concluded that
$\lambda_1 < \mu = 3 < \lambda_2$.

In conclusion, the following bounds prevail

\[
1 < \lambda_1 < 3\,,\qquad
3 < \lambda_2 < 5\,,\qquad
5 < \lambda_3 < 8
\tag{6-84}
\]
6.3 Eigenvalue Separation Principle

The matrices $M^{(m)}$ and $K^{(m)}$ of dimension $(n-m) \times (n-m)$ are obtained from $M$ and $K$ if
the last $m$ rows and columns are omitted in these matrices. Then, consider the sequence of related
characteristic polynomials of the order $(n-m)$

\[
P^{(m)}\big(\lambda^{(m)}\big) = \det\big(K^{(m)} - \lambda^{(m)} M^{(m)}\big)\,,\quad
m = 0, 1, \ldots, n-1
\tag{6-85}
\]

where $M^{(0)} = M$, $K^{(0)} = K$, $\lambda^{(0)} = \lambda$ and $P^{(0)}(\lambda) = P(\lambda)$. The
eigenvalues corresponding to $M^{(m)}$ and $K^{(m)}$ are denoted as
$\lambda^{(m)}_1, \lambda^{(m)}_2, \ldots, \lambda^{(m)}_{n-m}$.

Now, for any $m = 0, 1, \ldots, n-1$ it can be proved that the roots of
$P^{(m+1)}\big(\lambda^{(m+1)}\big) = 0$ are separating the roots of
$P^{(m)}\big(\lambda^{(m)}\big) = 0$, i.e.

\[
0 \leq \lambda^{(m)}_1 \leq \lambda^{(m+1)}_1 \leq \lambda^{(m)}_2 \leq \lambda^{(m+1)}_2 \leq \cdots
\leq \lambda^{(m)}_{n-m-1} \leq \lambda^{(m+1)}_{n-m-1} \leq \lambda^{(m)}_{n-m}
\tag{6-86}
\]

A formal proof of (6-86) has been given by Bathe. [2] A sequence of polynomials such as
$P^{(m)}(\lambda)$ with roots fulfilling the property (6-86) is denoted a Sturm sequence.

Next, consider the Gauss factorization (6-62). Omitting the last $m$ rows and columns in $M$ and $K$
is tantamount to omitting the last $m$ rows and columns in $L$ and $D$. Then

\[
P^{(m)}\big(\lambda^{(m)}\big) = \det\big(K^{(m)} - \lambda^{(m)} M^{(m)}\big)
= \det\big(L^{(m)} D^{(m)} L^{(m)\,T}\big)
= \det\big(D^{(m)}\big)
= d_{11}\,d_{22}\cdots d_{n-m,n-m}
\tag{6-87}
\]

where

\[
L^{(m)} = \begin{bmatrix}
1 & & & \\
l_{21} & 1 & & \\
l_{31} & l_{32} & 1 & \\
\vdots & \vdots & \vdots & \ddots \\
l_{n-m,1} & l_{n-m,2} & l_{n-m,3} & \cdots\ \ 1
\end{bmatrix}
\tag{6-88}
\]

\[
D^{(m)} = \begin{bmatrix}
d_{11} & 0 & \cdots & 0 \\
0 & d_{22} & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & d_{n-m,n-m}
\end{bmatrix}
\tag{6-89}
\]

The bounding property explained in Section 6.2 for the case $m = 0$ can then easily be generalized.
Let $\lambda^{(m)} = \mu$, and perform a Gauss factorization of the matrix $K^{(m)} - \mu M^{(m)}$.
Then the number of eigenvalues $\lambda^{(m)}_j < \mu$ will be equal to the number of negative
diagonal components $d_{11}, \ldots, d_{n-m,n-m}$ in the matrix $D^{(m)}$.

The number of negative elements in the main diagonal of the matrix $D$ in the Gauss factorization
$K - \mu M = L D L^T$, and hence the number of eigenvalues smaller than $\mu$, can then be retrieved
from the signs of the sequence $P^{(0)}(\mu), P^{(1)}(\mu), \ldots, P^{(n-1)}(\mu)$ in the following
way. Introduce $P^{(n)}(\mu)$ as an arbitrary positive quantity. Since $P^{(n-1)}(\mu) = d_{11}$, the
sequence $P^{(n)}(\mu), P^{(n-1)}(\mu)$ has the sign sequence $+, -$, if $d_{11} < 0$, and the sign
sequence $+, +$, if $d_{11} > 0$. $d_{11} < 0$ indicates that at least one eigenvalue is smaller than
$\mu$, in which case one sign change, namely from $+$ to $-$, has occurred in the indicated sign
sequence. Next, $P^{(n-2)}(\mu) = d_{11} d_{22}$ is considered. $d_{11} < 0 \wedge d_{22} < 0$
indicates that two eigenvalues are smaller than $\mu$. This in turn implies that $P^{(n-1)}(\mu)$
has a negative sign, and $P^{(n-2)}(\mu)$ has a positive sign. Then one additional sign change has
occurred in the sequence of signs of the characteristic polynomials,
$\mathrm{sign}\,P^{(n)}(\mu), \mathrm{sign}\,P^{(n-1)}(\mu), \mathrm{sign}\,P^{(n-2)}(\mu) = +, -, +$.
If $d_{22} > 0$, then $P^{(n-1)}(\mu)$ and $P^{(n-2)}(\mu)$ have the same sign, and no additional
sign change is recorded in the sequence of signs of the characteristic polynomials. Proceeding in
this way, it is seen that the number of sign changes in the sequence
$\mathrm{sign}\,P^{(n)}(\mu), \mathrm{sign}\,P^{(n-1)}(\mu), \ldots, \mathrm{sign}\,P^{(0)}(\mu)$
determines the total number of eigenvalues smaller than $\mu$. This property of the sequence of
characteristic polynomials is known as a Sturm sequence check.

[2] K.-J. Bathe: Finite Element Procedures. Prentice Hall, Inc., 1996.
Example 6.8: Bounds on eigenvalues by the eigenvalue separation principle

For the mass and stiffness matrices defined in Example 6.2, the matrices $M^{(1)}$ and $K^{(1)}$
become

\[
M^{(1)} = \begin{bmatrix} \tfrac{1}{2} & 0 \\ 0 & 1 \end{bmatrix},\qquad
K^{(1)} = \begin{bmatrix} 2 & -1 \\ -1 & 4 \end{bmatrix}
\tag{6-90}
\]

The characteristic equation (6-6) becomes

\[
\det\begin{bmatrix}
2 - \tfrac{1}{2}\lambda^{(1)}_j & -1 \\
-1 & 4 - \lambda^{(1)}_j
\end{bmatrix} = 0
\quad\Rightarrow\quad
\begin{cases}
\lambda^{(1)}_1 = 4 - \sqrt{2} = 2.59 \\
\lambda^{(1)}_2 = 4 + \sqrt{2} = 5.41
\end{cases}
\tag{6-91}
\]

The matrices $M^{(2)}$ and $K^{(2)}$ become

\[
M^{(2)} = \big[\tfrac{1}{2}\big]\,,\qquad
K^{(2)} = \big[2\big]
\quad\Rightarrow\quad
\lambda^{(2)}_1 = 4
\tag{6-92}
\]

The relation (6-86) becomes

\[
\left.
\begin{aligned}
&0 \leq \lambda_1 \leq \lambda^{(1)}_1 \leq \lambda_2 \leq \lambda^{(1)}_2 \leq \lambda_3 \\
&\lambda^{(1)}_1 \leq \lambda^{(2)}_1 \leq \lambda^{(1)}_2
\end{aligned}
\right\}
\quad\Rightarrow\quad
0 \leq \lambda_1 \leq 2.59\,,\quad
2.59 \leq \lambda_2 \leq 5.41\,,\quad
5.41 \leq \lambda_3
\tag{6-93}
\]

The exact solutions are $\lambda_1 = 2$, $\lambda_2 = 4$ and $\lambda_3 = 6$, cf. Example 6.3.
Example 6.9: Physical interpretation of the eigenvalue separation principle

Fig. 6-2 Vibrating string. a) Definitions: string of length $l$ with pre-stress force $F$, divided
into $n$ elements of length $\Delta l$, with nodal displacements $u_1, u_2, \ldots, u_{n-1}$ of the
transverse displacement field $u(x,t)$. b) Undamped eigenmodes corresponding to
$\omega^{(0)}_1, \omega^{(1)}_1, \omega^{(0)}_2, \omega^{(1)}_2$.

Fig. 6-2a shows a vibrating string with the pre-stress force $F$ and the mass $\rho$ per unit
length. The string has been divided into $n$ identical elements, each of the length $\Delta l$.
Hence, the total length of the string is $l = n\,\Delta l$. Vibrations $u(x,t)$ of the string in the
transverse direction are given by the wave equation with homogeneous boundary conditions

\[
\rho\,\frac{\partial^2 u}{\partial t^2} - F\,\frac{\partial^2 u}{\partial x^2} = 0\,,
\quad x \in\ ]0, l[\,;
\qquad u(0,t) = u(l,t) = 0
\tag{6-94}
\]

where $x$ is measured from the left support point. The spatial operator in (6-94) is discretized by
means of a central difference operator, i.e.

\[
F\,\frac{\partial^2 u}{\partial x^2} \simeq
\frac{F}{\Delta l^2}\big(u_{i+1} - 2u_i + u_{i-1}\big)\,,\quad i = 1, \ldots, n-1
\tag{6-95}
\]

where $u_i(t) = u(i\,\Delta l, t)$. The boundary conditions imply that $u_0(t) = u_n(t) = 0$. Then the
discretized wave equation may be represented by the matrix differential equation

\[
M^{(0)}\ddot{u} + K^{(0)}u = 0
\tag{6-96}
\]

\[
u(t) = \begin{bmatrix} u_1(t) \\ u_2(t) \\ \vdots \\ u_{n-2}(t) \\ u_{n-1}(t) \end{bmatrix},\quad
M^{(0)} = \rho\begin{bmatrix}
1 & 0 & \cdots & 0 & 0 \\
0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & & \vdots & \vdots \\
0 & 0 & \cdots & 1 & 0 \\
0 & 0 & \cdots & 0 & 1
\end{bmatrix},\quad
K^{(0)} = \frac{F}{\Delta l^2}\begin{bmatrix}
2 & -1 & \cdots & 0 & 0 \\
-1 & 2 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 2 & -1 \\
0 & 0 & \cdots & -1 & 2
\end{bmatrix}
\tag{6-97}
\]

The undamped circular eigenfrequencies of the continuous system are given as

\[
\omega^{(0)}_j = \frac{j\pi}{l}\sqrt{\frac{F}{\rho}}
\tag{6-98}
\]

The corresponding eigenmodes are sine functions, shown with unbroken signature in Fig. 6-2b. The
wavelength of a half-sine is equal to $l/j$. As seen from (6-98), the eigenfrequency increases if
the wavelength of the half-sine decreases.

Next, consider the system defined by the matrices $M^{(1)}$ and $K^{(1)}$ of dimension
$(n-2) \times (n-2)$, where the last row and column are omitted in $M^{(0)}$ and $K^{(0)}$.
Physically, this corresponds to constraining the displacement $u_{n-1}(t) = 0$, as indicated by the
additional support in Fig. 6-2b. The corresponding eigenmodes of the continuous system have been
shown with a dashed signature. As seen in Fig. 6-2b, the wavelengths related to the circular
eigenfrequencies $\omega^{(0)}_1, \omega^{(1)}_1, \omega^{(0)}_2$ and $\omega^{(1)}_2$ decrease in the
indicated order. Hence, the following ordering of these eigenfrequencies prevails

\[
\omega^{(0)}_1 < \omega^{(1)}_1 < \omega^{(0)}_2 < \omega^{(1)}_2
\tag{6-99}
\]

Since $\lambda^{(m)}_j = \big(\omega^{(m)}_j\big)^2$, the corresponding ordering of the eigenvalues
becomes

\[
\lambda^{(0)}_1 < \lambda^{(1)}_1 < \lambda^{(0)}_2 < \lambda^{(1)}_2
\tag{6-100}
\]

which corresponds to (6-86).
6.4 Shift

Occasionally, a shift on the stiffness matrix may be used to enhance the speed of calculation of
the considered GEVP. In order to explain this, the eigenvalue problem (6-5) is written in the
following way

\[
\big(K - \mu M + \mu M - \lambda_j M\big)\Phi^{(j)} = 0
\tag{6-101}
\]

Obviously, we have withdrawn and added the quantity $\mu M$ inside the bracket, where $\mu$ is a
suitable real number, which will affect neither the eigenvalues $\lambda_j$ nor the eigenvectors
$\Phi^{(j)}$. (6-101) is rearranged on the form

\[
\big(\hat{K} - \rho_j M\big)\Phi^{(j)} = 0
\tag{6-102}
\]

where

\[
\hat{K} = K - \mu M\,,\qquad \rho_j = \lambda_j - \mu
\tag{6-103}
\]

Hence, instead of the original generalized eigenvalue problem defined by the matrices
$\big(K, M\big)$, the system with the matrices $\big(\hat{K}, M\big)$ is considered in the shifted
system, where $\hat{K}$ is calculated as indicated in (6-103). The two systems have identical
eigenvectors. However, the eigenvalues of the shifted system become
$(\lambda_1 - \mu), (\lambda_2 - \mu), \ldots, (\lambda_n - \mu)$, where
$\lambda_1, \lambda_2, \ldots, \lambda_n$ denote the eigenvalues of the original system.

For non-supported systems (ships and aeroplanes) a rigid-body motion $\Phi \neq 0$ exists, which
fulfills

\[
K\Phi = 0
\tag{6-104}
\]

(6-104) shows that $\lambda = 0$ is an eigenvalue for such systems. Correspondingly, $\det(K) = 0$
for systems which possess a rigid-body motion. However, some numerical algorithms presume that
$\det(K) \neq 0$. In such cases a preliminary shift on the stiffness matrix must be performed,
since $\det(K - \mu M) \neq 0$ for a suitably chosen $\mu$, even if $\det(K) = 0$.
Example 6.10: Shift on the stiffness matrix

Given the mass and stiffness matrices

\[
M = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix},\qquad
K = \begin{bmatrix} 3 & -3 \\ -3 & 3 \end{bmatrix}
\tag{6-105}
\]

The characteristic equation (6-6) becomes

\[
\det\begin{bmatrix}
3 - 2\lambda & -3 - \lambda \\
-3 - \lambda & 3 - 2\lambda
\end{bmatrix} = 0
\quad\Rightarrow\quad
\begin{cases}
\lambda_1 = 0 \\
\lambda_2 = 6
\end{cases}
\tag{6-106}
\]

$\lambda_1 = 0$, since $\det(K) = 0$.

Next, a shift on the stiffness matrix with $\mu = -2$ is performed, which provides

\[
\hat{K} = \begin{bmatrix} 3 & -3 \\ -3 & 3 \end{bmatrix}
+ 2\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}
= \begin{bmatrix} 7 & -1 \\ -1 & 7 \end{bmatrix}
\tag{6-107}
\]

Now, the characteristic equation becomes

\[
\det\begin{bmatrix}
7 - 2\rho & -1 - \rho \\
-1 - \rho & 7 - 2\rho
\end{bmatrix} = 0
\quad\Rightarrow\quad
\begin{cases}
\rho_1 = 2 \\
\rho_2 = 8
\end{cases}
\tag{6-108}
\]

As expected, $\rho_j = \lambda_j - \mu = \lambda_j + 2$.
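A quick numerical confirmation of the example (a sketch added here, not part of the original text):

```matlab
% Shift check for Example 6.10.
M = [2 1; 1 2];  K = [3 -3; -3 3];
mu = -2;
Khat = K - mu*M;                 % = K + 2*M = [7 -1; -1 7]
disp(eig(K, M))                  % 0 and 6
disp(eig(Khat, M))               % 2 and 8, i.e. the original eigenvalues minus mu
```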
6.5 Transformation of GEVP to SEVP

Some eigenvalue solvers are written for the special eigenvalue problem. Hence, their use presumes
an initial transformation of the generalized eigenvalue problem (6-5). Of course, this may be
performed simply by a pre-multiplication of (6-5) with $M^{-1}$. However, then the resulting system
matrix $M^{-1}K$ is no longer symmetric. In this section a similarity transformation is indicated
which preserves the symmetry of the system matrix.

Since $M = M^T$, it can be factorized on the form

\[
M = S\,S^T
\tag{6-109}
\]

The generalized eigenvalue problem (6-5) may then be written in the form

\[
K\big(S^T\big)^{-1} S^T \Phi^{(j)} = \lambda_j\,S S^T \Phi^{(j)}
\quad\Rightarrow\quad
S^{-1} K \big(S^{-1}\big)^T S^T \Phi^{(j)} = \lambda_j\,S^T \Phi^{(j)}
\tag{6-110}
\]

where the identity $\big(S^T\big)^{-1} = \big(S^{-1}\big)^T$ has been used. (6-110) can then be
formulated in terms of the following standard EVP

\[
\tilde{K}\tilde{\Phi}^{(j)} = \lambda_j\,\tilde{\Phi}^{(j)}
\tag{6-111}
\]

where

\[
\tilde{K} = S^{-1} K \big(S^{-1}\big)^T
\tag{6-112}
\]

\[
\tilde{\Phi}^{(j)} = S^T \Phi^{(j)}
\tag{6-113}
\]

(6-112) defines a similarity transformation with the transformation matrix $S^{-1}$, which reduces
the transformed mass matrix to the identity matrix. Obviously, $\tilde{K} = \tilde{K}^T$. As seen
from (6-111), the eigenvalues $\lambda_1, \ldots, \lambda_n$ are identical for the original and the
transformed eigenvalue problems, whereas the eigenvectors $\Phi^{(j)}$ and $\tilde{\Phi}^{(j)}$ are
related by the transformation (6-113).

The determination of a matrix $S$ fulfilling (6-109) is not unique. Actually, infinitely many
solutions to this problem exist. Below, two approaches are given. In both cases it is assumed that
$M = M^T$ is positive definite.

Generally, Choleski decomposition is considered the most effective way of solving the problem. In
this case a lower triangular matrix $S$ exists, so that (6-109) is fulfilled. Obviously, $S$ is
related to the Gauss factorization as follows

\[
S = L\,D^{\frac{1}{2}}\,,\qquad
D^{\frac{1}{2}} = \begin{bmatrix}
\sqrt{d_{11}} & 0 & \cdots & 0 \\
0 & \sqrt{d_{22}} & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & \sqrt{d_{nn}}
\end{bmatrix}
\tag{6-114}
\]

The diagonal matrix $D^{\frac{1}{2}}$ only exists if the components $d_{ii}$ of the matrix $D$ are
all positive. This is indeed the case if $M$ is positive definite. Although $S$ may be calculated
from (6-114), there exists a faster and more direct algorithm for the determination of this
quantity, cf. Box 6.3.
Alternatively, a so-called spectral decomposition of $M$ may be used. The basis of this method is
the following SEVP for $M$

\[
M v^{(j)} = \mu_j\,v^{(j)}
\tag{6-115}
\]

$\mu_j$ and $v^{(j)}$ denote the $j$th eigenvalue and eigenvector of $M$. Both are real, since $M$
is symmetric. The eigenvalue problems (6-115) can be assembled into the matrix formulation, cf.
(6-10)

\[
M V = V R
\tag{6-116}
\]

\[
R = \begin{bmatrix}
\mu_1 & 0 & \cdots & 0 \\
0 & \mu_2 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & \mu_n
\end{bmatrix},\qquad
V = \big[v^{(1)}\ v^{(2)}\ \cdots\ v^{(n)}\big]
\tag{6-117}
\]

The eigenvectors are normalized to magnitude 1, i.e. $v^{(i)\,T} v^{(j)} = \delta_{ij}$. Then the
modal matrix $V$ fulfills, cf. (6-19)

\[
V^{-1} = V^T
\tag{6-118}
\]

From (6-116) and (6-118) the following representation of $M$ is obtained

\[
M = V R V^T
\tag{6-119}
\]

Finally, from (6-109) and (6-119) the following solution for $S$ is obtained

\[
S = V R^{\frac{1}{2}}\,,\qquad
R^{\frac{1}{2}} = \begin{bmatrix}
\sqrt{\mu_1} & 0 & \cdots & 0 \\
0 & \sqrt{\mu_2} & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & \sqrt{\mu_n}
\end{bmatrix}
\tag{6-120}
\]

The drawback of the spectral approach is that an initial SEVP must be solved before the transformed
eigenvalue problem (6-111) can be analyzed.
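The Choleski route is easily expressed in MATLAB. The following sketch (illustrative, assuming $M$
is symmetric positive definite so that chol succeeds) carries out the transformation
(6-111)-(6-113) and transforms the eigenvectors back:

```matlab
% Transformation of the GEVP (6-5) to a SEVP, cf. (6-111)-(6-113).
S = chol(M, 'lower');                             % Choleski factor, M = S*S'
Ktilde = S \ K / S';                              % S^{-1} * K * (S^{-1})^T, symmetric
[Phitilde, Lambda] = eig((Ktilde + Ktilde')/2);   % symmetrize round-off before eig
Phi = S' \ Phitilde;                              % back-transformation, cf. (6-113)
```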
Box 6.3: Choleski decomposition of a symmetric positive definite matrix

Choleski decomposition factorizes a symmetric positive definite matrix $M$ into the matrix product
of a lower triangular matrix $S$ and its transpose, as follows

\[
M = S S^T \quad\Rightarrow\quad
\begin{bmatrix}
m_{11} & m_{21} & \cdots & m_{n1} \\
m_{21} & m_{22} & \cdots & m_{n2} \\
\vdots & \vdots & & \vdots \\
m_{n1} & m_{n2} & \cdots & m_{nn}
\end{bmatrix}
=
\begin{bmatrix}
s_{11} & 0 & \cdots & 0 \\
s_{21} & s_{22} & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
s_{n1} & s_{n2} & \cdots & s_{nn}
\end{bmatrix}
\begin{bmatrix}
s_{11} & s_{21} & \cdots & s_{n1} \\
0 & s_{22} & \cdots & s_{n2} \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & s_{nn}
\end{bmatrix}
=
\begin{bmatrix}
s_{11}^2 & & \mathrm{symmetric} \\
s_{21} s_{11} & s_{22}^2 + s_{21}^2 & \\
\vdots & \vdots & \ddots \\
s_{n1} s_{11} & \ \ s_{n2} s_{22} + s_{n1} s_{21} & \cdots\ \
s_{nn}^2 + s_{n,n-1}^2 + \cdots + s_{n2}^2 + s_{n1}^2
\end{bmatrix}
\tag{6-121}
\]

Equating the components of the final matrix product with the components on and below the main
diagonal of $M$, equations can be formulated for the determination of $s_{ij}$, which are solved
sequentially. First, $s_{11} = \sqrt{m_{11}}$ is calculated. Next, $s_{i1}$, $i = 2, \ldots, n$ are
determined from $s_{i1} = m_{i1}/s_{11}$. Next, $s_{22} = \sqrt{m_{22} - s_{21}^2}$ is calculated,
and $s_{i2}$, $i = 3, \ldots, n$ can be determined from $s_{i2} = (m_{i2} - s_{i1} s_{21})/s_{22}$.
Next, the 3rd column can be calculated, and so forth. The general algorithm for calculating the
components $s_{ij}$ in the $j$th column reads

\[
\begin{aligned}
s_{jj} &= \Big(m_{jj} - \sum_{k=1}^{j-1} s_{jk}^2\Big)^{1/2}\,,
&&j = 1, \ldots, n \\
s_{ij} &= \Big(m_{ij} - \sum_{k=1}^{j-1} s_{ik}\,s_{jk}\Big)\Big/ s_{jj}\,,
&&i = j+1, \ldots, n
\end{aligned}
\tag{6-122}
\]
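The column algorithm (6-122) may be coded directly; the following MATLAB sketch is a straightforward
transcription, added for illustration (it assumes $M$ is symmetric and positive definite):

```matlab
% Choleski decomposition M = S*S', direct transcription of (6-122).
function S = choleski(M)
    n = size(M, 1);
    S = zeros(n);
    for j = 1:n
        S(j,j) = sqrt(M(j,j) - S(j,1:j-1) * S(j,1:j-1)');
        for i = j+1:n
            S(i,j) = (M(i,j) - S(i,1:j-1) * S(j,1:j-1)') / S(j,j);
        end
    end
end
```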
6.6 Exercises

6.1 Given the following mass and stiffness matrices

\[
M = \begin{bmatrix}
1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & \tfrac{1}{2}
\end{bmatrix},\qquad
K = \begin{bmatrix}
2 & -1 & 0 \\ -1 & 2 & 0 \\ 0 & 0 & 3
\end{bmatrix}
\]

(a.) Calculate the eigenvalues and eigenmodes normalized to unit modal mass.
(b.) Determine two vectors that are M-orthonormal without being eigenmodes.
(c.) Show that the eigenvalue separation principle is valid for the considered example.

6.2 Given the following mass and stiffness matrices

\[
M = \begin{bmatrix} 2 & 0 \\ 0 & 0 \end{bmatrix},\qquad
K = \begin{bmatrix} 6 & -1 \\ -1 & 4 \end{bmatrix}
\]

(a.) Calculate the eigenvalues and eigenmodes normalized to unit modal mass.
(b.) Perform a shift $\mu = 3$ on $K$ and calculate the eigenvalues and eigenmodes of the new
problem.

6.3 The eigensolutions with eigenmodes normalized to unit modal mass of a 2-dimensional
generalized eigenvalue problem are given as

\[
\Lambda = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 4 \end{bmatrix},\qquad
\Phi = \big[\Phi^{(1)}\ \Phi^{(2)}\big]
= \begin{bmatrix}
\tfrac{\sqrt{2}}{2} & -\tfrac{\sqrt{2}}{2} \\
\tfrac{\sqrt{2}}{2} & \tfrac{\sqrt{2}}{2}
\end{bmatrix}
\]

(a.) Calculate $M$ and $K$.

6.4 Given a symmetric matrix $K$.

(a.) Write a MATLAB program which determines the matrices $L$ and $D$ of a Gauss factorization,
as well as the matrix $(S^{-1})^T$, where $S$ is a lower triangular matrix fulfilling $S S^T = K$.

6.5 Given a symmetric positive definite matrix $K$.

(a.) Write a MATLAB program which performs Choleski decomposition.
CHAPTER 7
APPROXIMATE SOLUTION METHODS

This chapter deals with various approximate solution methods for solving the generalized
eigenvalue problem.

Section 7.1 considers the application of static condensation, or Guyan reduction. [1] The idea of
the method is to reduce the magnitude of the generalized eigenvalue problem from $n$ to $n_1 < n$
degrees of freedom. Next, the reduced system is solved exactly. In principle, no approximation is
related to the procedure.

Section 7.2 deals with the application of Rayleigh-Ritz analysis. Similar to static condensation,
this is a kind of system reduction procedure. As shown, the method can be given a formulation
identical to static condensation. However, exact results are no longer obtained.

Section 7.3 deals with the bounding of the error related to a certain approximate eigenvalue.

[1] S.R.K. Nielsen: Vibration Theory, Vol. 1. Linear Vibration Theory. Aalborg tekniske Universitetsforlag, 1998.

7.1 Static Condensation

The basic assumption of static condensation is that inertia is confined to the first $n_1$ degrees
of freedom, whereas inertia effects are ignored for the remaining $n_2 = n - n_1$ degrees of
freedom. The approximation of the method stems from the ignorance of these inertial couplings.
This corresponds to the following partitioning of the mass and stiffness matrices

\[
M = \begin{bmatrix} M_{11} & 0 \\ 0 & 0 \end{bmatrix},\qquad
K = \begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix}
\tag{7-1}
\]

$M_{11}$ and $K_{11}$ are sub-matrices of dimension $n_1 \times n_1$, $K_{12} = K_{21}^T$ is a
sub-matrix of dimension $n_1 \times n_2$, and $K_{22}$ is of the dimension $n_2 \times n_2$. The
eigenvalue problems for the first $n_1$ and the last $n_2$ eigenvectors can be assembled in the
following partitioned matrix formulations, cf. (6-10)
\[
\begin{aligned}
\begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix}
\begin{bmatrix} \Phi_{11} \\ \Phi_{21} \end{bmatrix}
&= \begin{bmatrix} M_{11} & 0 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} \Phi_{11} \\ \Phi_{21} \end{bmatrix}\Lambda_1 \\[2mm]
\begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix}
\begin{bmatrix} \Phi_{12} \\ \Phi_{22} \end{bmatrix}
&= \begin{bmatrix} M_{11} & 0 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} \Phi_{12} \\ \Phi_{22} \end{bmatrix}\Lambda_2
\end{aligned}
\tag{7-2}
\]

where $\Lambda_1$ and $\Lambda_2$ are diagonal matrices of the dimensions $n_1 \times n_1$ and
$n_2 \times n_2$

\[
\Lambda_1 = \begin{bmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & \lambda_{n_1}
\end{bmatrix},\qquad
\Lambda_2 = \begin{bmatrix}
\lambda_{n_1+1} & 0 & \cdots & 0 \\
0 & \lambda_{n_1+2} & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & \lambda_n
\end{bmatrix}
\tag{7-3}
\]

$\Phi_1^{(j)}$ and $\Phi_2^{(j)}$ denote sub-vectors encompassing the first $n_1$ and the last $n_2$
components of the $j$th eigenmode $\Phi^{(j)}$. Then the matrices $\Phi_{11}$, $\Phi_{12}$,
$\Phi_{21}$ and $\Phi_{22}$ entering (7-2) are defined as

\[
\begin{aligned}
\Phi_{11} &= \big[\Phi_1^{(1)}\ \Phi_1^{(2)}\ \cdots\ \Phi_1^{(n_1)}\big]\,,\qquad
&\Phi_{12} &= \big[\Phi_1^{(n_1+1)}\ \Phi_1^{(n_1+2)}\ \cdots\ \Phi_1^{(n)}\big] \\
\Phi_{21} &= \big[\Phi_2^{(1)}\ \Phi_2^{(2)}\ \cdots\ \Phi_2^{(n_1)}\big]\,,\qquad
&\Phi_{22} &= \big[\Phi_2^{(n_1+1)}\ \Phi_2^{(n_1+2)}\ \cdots\ \Phi_2^{(n)}\big]
\end{aligned}
\tag{7-4}
\]

At first, the solution for the first $n_1$ eigenmodes is considered. From the lower half of the
first matrix equation in (7-2) follows

\[
K_{21}\Phi_{11} + K_{22}\Phi_{21} = 0
\quad\Rightarrow\quad
\Phi_{21} = -K_{22}^{-1} K_{21}\Phi_{11}
\tag{7-5}
\]

From the corresponding upper half of the said matrix equation, and (7-5), follows

\[
K_{11}\Phi_{11} - K_{12} K_{22}^{-1} K_{21}\Phi_{11} = M_{11}\Phi_{11}\Lambda_1
\quad\Rightarrow\quad
\hat{K}_{11}\Phi_{11} = M_{11}\Phi_{11}\Lambda_1
\tag{7-6}
\]

where

\[
\hat{K}_{11} = K_{11} - K_{12} K_{22}^{-1} K_{21}
\tag{7-7}
\]

(7-6) is a generalized eigenvalue problem of reduced dimension $n_1$, which is solved for
$\big(\Lambda_1, \Phi_{11}\big)$. Next, the remaining components of the first $n_1$ eigenmodes are
calculated from (7-5). The modal masses become

\[
m_1 = \begin{bmatrix} \Phi_{11} \\ \Phi_{21} \end{bmatrix}^T
\begin{bmatrix} M_{11} & 0 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} \Phi_{11} \\ \Phi_{21} \end{bmatrix}
= \Phi_{11}^T M_{11} \Phi_{11}
\tag{7-8}
\]

Hence, the total eigenmodes will be normalized to unit modal mass with respect to $M$, if the
sub-vectors in $\Phi_{11}$ are normalized to unit modal mass with respect to $M_{11}$.

Next, the solution for the last $n_2$ eigenmodes is considered. From the last matrix equation in
(7-2) follows

\[
\begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix}
\begin{bmatrix} \Phi_{12} \\ \Phi_{22} \end{bmatrix}\Lambda_2^{-1}
= \begin{bmatrix} M_{11}\Phi_{12} \\ 0 \end{bmatrix}
\tag{7-9}
\]

Obviously, (7-9) is fulfilled for $\Lambda_2^{-1} = 0 \wedge \Phi_{12} = 0$. $\Lambda_2^{-1} = 0$
implies that all $n_2$ eigenvalues are equal to infinity. Hence, the following eigensolutions are
obtained

\[
\Lambda_2^{-1} = \begin{bmatrix}
0 & \cdots & 0 \\
\vdots & & \vdots \\
0 & \cdots & 0
\end{bmatrix},\qquad
\begin{bmatrix} \Phi_{12} \\ \Phi_{22} \end{bmatrix}
= \begin{bmatrix} 0 \\ \Phi_{22} \end{bmatrix}
\tag{7-10}
\]

The matrix $\Phi_{22}$ is undetermined. Any matrix with linearly independent column vectors will do.
Then, this quadratic matrix may simply be chosen as an $n_2 \times n_2$ unit matrix

\[
\Phi_{22} = I
\tag{7-11}
\]

The modal masses become

\[
m_2 = \begin{bmatrix} 0 \\ \Phi_{22} \end{bmatrix}^T
\begin{bmatrix} M_{11} & 0 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} 0 \\ \Phi_{22} \end{bmatrix}
= 0
\tag{7-12}
\]

Generally, if the $i$th row and the $i$th column in $M$ are equal to zero, then
$\Phi^T = [0, \ldots, 0, 1, 0, \ldots, 0]$, with 1 in the $i$th position, is an eigenvector with the
eigenvalue $\lambda = \infty$. The modal mass is 0.

In praxis the calculation of $\hat{K}_{11}$ is handled by means of an initial Choleski decomposition
of $K_{22}$, cf. Box 6.3

\[
K_{22} = S S^T
\quad\Rightarrow\quad
K_{22}^{-1} = \big(S^{-1}\big)^T S^{-1}
\tag{7-13}
\]

where both $S$ and $S^{-1}$ are lower triangular matrices. Then $\hat{K}_{11}$ is determined from

\[
\hat{K}_{11} = K_{11} - R^T R
\tag{7-14}
\]

where the $n_2 \times n_1$ matrix $R$ is obtained as the solution to the matrix equation

\[
S R = K_{21}
\tag{7-15}
\]

In principle, (7-15) represents $n_2$ linear equations with $n_1$ right-hand sides. Given that $S$
is a lower triangular matrix, this is relatively easily solved.

Finally, it should be noticed that the static condensation approach is only of value if $n_1 \ll n$.
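The complete procedure, eqs. (7-5)-(7-7) together with the Choleski-based evaluation (7-13)-(7-15),
is summarized in the following MATLAB sketch (an illustration under the partitioning (7-1); the
columns returned by eig may still need rescaling to unit modal mass):

```matlab
% Static condensation for the partitioned matrices of (7-1).
function [Lambda1, Phi] = static_condensation(M11, K11, K12, K22)
    S = chol(K22, 'lower');               % K22 = S*S', cf. (7-13)
    R = S \ K12';                         % solves S*R = K21, cf. (7-15)
    Khat11 = K11 - R' * R;                % condensed stiffness, cf. (7-14)
    [Phi11, Lambda1] = eig(Khat11, M11);  % reduced GEVP, cf. (7-6)
    Phi21 = -(K22 \ (K12' * Phi11));      % remaining components, cf. (7-5)
    Phi = [Phi11; Phi21];                 % first n1 eigenmodes
end
```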
Example 7.1: Static condensation

Given the following mass and stiffness matrices

\[
M = \begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix},\qquad
K = \begin{bmatrix}
2 & -1 & 0 & 0 \\
-1 & 2 & -1 & 0 \\
0 & -1 & 2 & -1 \\
0 & 0 & -1 & 1
\end{bmatrix}
\tag{7-16}
\]

The rows and columns of the mass and stiffness matrices are interchanged to the order
$(x_2, x_4, x_3, x_1)$, so the massless degrees of freedom are placed last, and the following
eigenvalue problems are obtained

\[
\begin{aligned}
\begin{bmatrix}
2 & 0 & -1 & -1 \\
0 & 1 & -1 & 0 \\
-1 & -1 & 2 & 0 \\
-1 & 0 & 0 & 2
\end{bmatrix}
\begin{bmatrix} \Phi_{11} \\ \Phi_{21} \end{bmatrix}
&= \begin{bmatrix}
2 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} \Phi_{11} \\ \Phi_{21} \end{bmatrix}
\begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} \\[2mm]
\begin{bmatrix}
2 & 0 & -1 & -1 \\
0 & 1 & -1 & 0 \\
-1 & -1 & 2 & 0 \\
-1 & 0 & 0 & 2
\end{bmatrix}
\begin{bmatrix} \Phi_{12} \\ \Phi_{22} \end{bmatrix}
&= \begin{bmatrix}
2 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}
\begin{bmatrix} \Phi_{12} \\ \Phi_{22} \end{bmatrix}
\begin{bmatrix} \lambda_3 & 0 \\ 0 & \lambda_4 \end{bmatrix}
\end{aligned}
\tag{7-17}
\]

Then

\[
K_{11} = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix},\quad
K_{12} = K_{21}^T = \begin{bmatrix} -1 & -1 \\ -1 & 0 \end{bmatrix},\quad
K_{22} = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix},\quad
M_{11} = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}
\tag{7-18}
\]

From (7-7) follows

\[
\hat{K}_{11} = K_{11} - K_{12} K_{22}^{-1} K_{21}
= \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}
- \begin{bmatrix} -1 & -1 \\ -1 & 0 \end{bmatrix}
\begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}^{-1}
\begin{bmatrix} -1 & -1 \\ -1 & 0 \end{bmatrix}
= \begin{bmatrix} 1 & -\tfrac{1}{2} \\ -\tfrac{1}{2} & \tfrac{1}{2} \end{bmatrix}
\tag{7-19}
\]

The reduced eigenvalue problem (7-6) becomes

\[
\begin{bmatrix} 1 & -\tfrac{1}{2} \\ -\tfrac{1}{2} & \tfrac{1}{2} \end{bmatrix}\Phi_{11}
= \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}\Phi_{11}\Lambda_1
\tag{7-20}
\]

The eigensolutions with eigenmodes normalized to modal mass 1 become

\[
\Lambda_1 = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}
= \begin{bmatrix}
\tfrac{1}{2} - \tfrac{\sqrt{2}}{4} & 0 \\
0 & \tfrac{1}{2} + \tfrac{\sqrt{2}}{4}
\end{bmatrix},\qquad
\Phi_{11} = \begin{bmatrix}
\tfrac{1}{2} & -\tfrac{1}{2} \\
\tfrac{\sqrt{2}}{2} & \tfrac{\sqrt{2}}{2}
\end{bmatrix}
\tag{7-21}
\]

From (7-5) follows

\[
\Phi_{21} = -\begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}^{-1}
\begin{bmatrix} -1 & -1 \\ -1 & 0 \end{bmatrix}
\begin{bmatrix}
\tfrac{1}{2} & -\tfrac{1}{2} \\
\tfrac{\sqrt{2}}{2} & \tfrac{\sqrt{2}}{2}
\end{bmatrix}
= \begin{bmatrix}
\tfrac{1}{4} + \tfrac{\sqrt{2}}{4} & -\tfrac{1}{4} + \tfrac{\sqrt{2}}{4} \\
\tfrac{1}{4} & -\tfrac{1}{4}
\end{bmatrix}
\tag{7-22}
\]

From (7-10) and (7-11) follows

\[
\Lambda_2^{-1} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix},\qquad
\Phi_{12} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix},\qquad
\Phi_{22} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
\tag{7-23}
\]

After interchanging the degrees of freedom back to the original order (the 1st and 2nd components
of $\Phi_{11}$ and $\Phi_{12}$ are placed as the 2nd and 4th components of $\Phi^{(j)}$, and the 1st
and 2nd components of $\Phi_{21}$ and $\Phi_{22}$ are placed as the 3rd and 1st components of
$\Phi^{(j)}$), the following eigensolution is obtained

\[
\Lambda = \begin{bmatrix}
\tfrac{1}{2} - \tfrac{\sqrt{2}}{4} & 0 & 0 & 0 \\
0 & \tfrac{1}{2} + \tfrac{\sqrt{2}}{4} & 0 & 0 \\
0 & 0 & \infty & 0 \\
0 & 0 & 0 & \infty
\end{bmatrix},\qquad
\Phi = \big[\Phi^{(1)}\ \Phi^{(2)}\ \Phi^{(3)}\ \Phi^{(4)}\big] =
\begin{bmatrix}
\tfrac{1}{4} & -\tfrac{1}{4} & 1 & 0 \\
\tfrac{1}{2} & -\tfrac{1}{2} & 0 & 0 \\
\tfrac{1}{4} + \tfrac{\sqrt{2}}{4} & -\tfrac{1}{4} + \tfrac{\sqrt{2}}{4} & 0 & 1 \\
\tfrac{\sqrt{2}}{2} & \tfrac{\sqrt{2}}{2} & 0 & 0
\end{bmatrix}
\tag{7-24}
\]
7.2 Rayleigh-Ritz Analysis

Consider the generalized eigenvalue problem (6-5). If $M$ is positive definite, then
$v^T M v > 0$ for any $v \neq 0$, and the so-called Rayleigh quotient may be defined as

\[
\rho(v) = \frac{v^T K v}{v^T M v}
\tag{7-25}
\]

It can be proved that $\rho(v)$ fulfills the bounding, see Box 7.1

\[
\lambda_1 \leq \rho(v) \leq \lambda_n
\tag{7-26}
\]

where $\lambda_1$ and $\lambda_n$ denote the smallest and the largest eigenvalues of the generalized
eigenvalue problem.

Especially, if $v = \Phi^{(1)}$, where $\Phi^{(1)}$ has been normalized to unit modal mass, it
follows that $\Phi^{(1)\,T} M \Phi^{(1)} = 1$ and $\Phi^{(1)\,T} K \Phi^{(1)} = \lambda_1$, see
(7-34) and (7-35) below. Then $\rho(v) = \lambda_1$. This property is contained in the so-called
Rayleigh principle

\[
\lambda_1 = \min_{v \in R^n} \rho(v)
\tag{7-27}
\]

Next, assume that $v$ is M-orthogonal to $\Phi^{(1)}$, so $\Phi^{(1)\,T} M v = 0$. Then the following
bounding of the Rayleigh quotient may be proved, see Box 7.1

\[
\lambda_2 \leq \rho(v) \leq \lambda_n
\tag{7-28}
\]

Correspondingly, $\lambda_2$ may be evaluated by the following extension of the Rayleigh principle,
where the M-orthogonality of the test vector $v$ to the first eigenmode $\Phi^{(1)}$ has been
included as a restriction

\[
\lambda_2 = \min_{v \in R^n} \rho(v)\,,\qquad \Phi^{(1)\,T} M v = 0
\tag{7-29}
\]

The corresponding optimal vector will be $v = \Phi^{(2)}$.

Generally, if $v$ is M-orthogonal to the first $m-1$ eigenmodes
$\Phi^{(1)}, \Phi^{(2)}, \ldots, \Phi^{(m-1)}$, so $\Phi^{(j)\,T} M v = 0$, $j = 1, \ldots, m-1$, the
following bounding of the Rayleigh quotient may be proved, see Box 7.1

\[
\lambda_m \leq \rho(v) \leq \lambda_n\,,\quad m < n
\tag{7-30}
\]

Correspondingly, $\lambda_m$ may be evaluated by the following extension of the Rayleigh variational
principle, where the restrictions of M-orthogonality of the test vector $v$ to the eigenmodes
$\Phi^{(1)}, \ldots, \Phi^{(m-1)}$ are included

\[
\lambda_m = \min_{v \in R^n} \rho(v)\,,\qquad
\Phi^{(j)\,T} M v = 0\,,\quad j = 1, \ldots, m-1
\tag{7-31}
\]

The corresponding optimal vector will be $v = \Phi^{(m)}$.

The Rayleigh quotient may be used to calculate an upper bound for the lowest eigenvalue
$\lambda_1$. The quality of the estimate depends on the choice of $v$. The better the qualitative
and quantitative resemblance of $v$ to the shape of the lowest eigenmode, the sharper will be the
calculated upper bound.
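As a small illustration (added here; the matrices are those of Example 6.2, with
$\lambda_1 = 2$), a trial vector close to the lowest eigenmode already gives a tight upper bound:

```matlab
% Rayleigh quotient (7-25) as an upper bound on lambda_1.
M = diag([1/2 1 1/2]);
K = [2 -1 0; -1 4 -1; 0 -1 2];
v = [1; 0.8; 1];                      % rough guess at the lowest mode shape
rho = (v'*K*v) / (v'*M*v)             % approx 2.049 >= lambda_1 = 2
```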
Box 7.1: Proof of the boundings of the Rayleigh quotient

Given the linearly independent eigenmodes, normalized to unit modal mass,
$\Phi^{(1)}, \Phi^{(2)}, \ldots, \Phi^{(n)}$. Using the eigenmodes as a vector basis, any
$n$-dimensional vector may be written as

\[
v = q_1\Phi^{(1)} + q_2\Phi^{(2)} + \cdots + q_n\Phi^{(n)}
\tag{7-32}
\]

Insertion of (7-32) into (7-25) provides

\[
\rho(v) =
\frac{\displaystyle\sum_{i=1}^n \sum_{j=1}^n q_i q_j\,\Phi^{(i)\,T} K \Phi^{(j)}}
{\displaystyle\sum_{i=1}^n \sum_{j=1}^n q_i q_j\,\Phi^{(i)\,T} M \Phi^{(j)}}
= \frac{q_1^2\lambda_1 + q_2^2\lambda_2 + \cdots + q_n^2\lambda_n}
{q_1^2 + q_2^2 + \cdots + q_n^2}
\tag{7-33}
\]

where the orthonormality conditions of the eigenmodes have been used in the last statement, i.e.

\[
\Phi^{(i)\,T} M \Phi^{(j)} =
\begin{cases} 0 & ,\ i \neq j \\ 1 & ,\ i = j \end{cases}
\tag{7-34}
\]

\[
\Phi^{(i)\,T} K \Phi^{(j)} =
\begin{cases} 0 & ,\ i \neq j \\ \lambda_i & ,\ i = j \end{cases}
\tag{7-35}
\]

Given the following ordering of the eigenvalues

\[
0 \leq \lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_{n-1} \leq \lambda_n
\tag{7-36}
\]

it follows directly from (7-33) that

\[
\rho(v) \geq \frac{q_1^2\lambda_1 + q_2^2\lambda_1 + \cdots + q_n^2\lambda_1}
{q_1^2 + q_2^2 + \cdots + q_n^2} = \lambda_1
\,,\qquad
\rho(v) \leq \frac{q_1^2\lambda_n + q_2^2\lambda_n + \cdots + q_n^2\lambda_n}
{q_1^2 + q_2^2 + \cdots + q_n^2} = \lambda_n
\tag{7-37}
\]

which proves the bounding (7-26).

(7-32) is pre-multiplied with $\Phi^{(j)\,T} M$. Then, use of (7-34) provides the following
expression for the $j$th modal coordinate

\[
q_j = \Phi^{(j)\,T} M v
\tag{7-38}
\]

Hence, if $v$ is M-orthogonal to $\Phi^{(j)}$, $j = 1, \ldots, m-1$, it follows that
$q_1 = q_2 = \cdots = q_{m-1} = 0$. In this case (7-33) attains the form

\[
\rho(v) = \frac{q_m^2\lambda_m + q_{m+1}^2\lambda_{m+1} + \cdots + q_n^2\lambda_n}
{q_m^2 + q_{m+1}^2 + \cdots + q_n^2}
\tag{7-39}
\]

Proceeding as in (7-37), it then follows that

\[
\rho(v) \geq \frac{q_m^2\lambda_m + q_{m+1}^2\lambda_m + \cdots + q_n^2\lambda_m}
{q_m^2 + q_{m+1}^2 + \cdots + q_n^2} = \lambda_m
\,,\qquad
\rho(v) \leq \frac{q_m^2\lambda_n + q_{m+1}^2\lambda_n + \cdots + q_n^2\lambda_n}
{q_m^2 + q_{m+1}^2 + \cdots + q_n^2} = \lambda_n
\tag{7-40}
\]

which proves the bounding (7-30).
In the so-called Ritz analysis, $m$ linearly independent base vectors
$\Psi^{(1)}, \ldots, \Psi^{(m)}$ are defined, which span an $m$-dimensional subspace
$V_m \subset V_n$. Often the base vectors are determined as the static deflections from $m$ linearly
independent load vectors $f_1, \ldots, f_m$. This is preferred because it is often simpler to
specify a static load which will produce displacements qualitatively in agreement with the
eigenmodes to be determined by the analysis. The Ritz basis is determined from the equilibrium
equation

\[
K\Psi = f
\quad\Rightarrow\quad
\Psi = K^{-1} f
\tag{7-41}
\]

\[
\Psi = \big[\Psi^{(1)}\ \Psi^{(2)}\ \cdots\ \Psi^{(m)}\big]\,,\qquad
f = \big[f_1\ f_2\ \cdots\ f_m\big]
\tag{7-42}
\]

Then, any vector $v \in V_m$ can be written on the form

\[
v = q_1\Psi^{(1)} + q_2\Psi^{(2)} + \cdots + q_m\Psi^{(m)}
= \big[\Psi^{(1)}\ \Psi^{(2)}\ \cdots\ \Psi^{(m)}\big]
\begin{bmatrix} q_1 \\ q_2 \\ \vdots \\ q_m \end{bmatrix}
= \Psi q\,,\qquad
q = \begin{bmatrix} q_1 \\ q_2 \\ \vdots \\ q_m \end{bmatrix}
\tag{7-43}
\]

The idea in Ritz analysis is to insert (7-43) into the Rayleigh quotient (7-25), and determine the
modal coordinates $q_1, q_2, \ldots, q_m$ which minimize this quantity. Hence, the following
reformulation of the Rayleigh quotient is considered

\[
\rho(q) = \frac{\big(\Psi q\big)^T K\,\Psi q}{\big(\Psi q\big)^T M\,\Psi q}
= \frac{q^T \tilde{K} q}{q^T \tilde{M} q}
\tag{7-44}
\]

where

\[
\tilde{M} = \Psi^T M \Psi = \big[\tilde{M}_{ij}\big]\,,\quad
\tilde{M}_{ij} = \Psi^{(i)\,T} M \Psi^{(j)}
\,;\qquad
\tilde{K} = \Psi^T K \Psi = \big[\tilde{K}_{ij}\big]\,,\quad
\tilde{K}_{ij} = \Psi^{(i)\,T} K \Psi^{(j)}
\tag{7-45}
\]

$\tilde{M}$ and $\tilde{K}$ are denoted as the projected mass and stiffness matrices on the subspace
spanned by the Ritz basis $\Psi$.

The approximation to $\lambda_1$ then follows from (7-27)

\[
\lambda_1 \leq \rho_1 = \min_{v \in V_m} \rho(q)
= \min_{q_1, \ldots, q_m}
\frac{\displaystyle\sum_{i=1}^m \sum_{j=1}^m q_i\,\tilde{K}_{ij}\,q_j}
{\displaystyle\sum_{i=1}^m \sum_{j=1}^m q_i\,\tilde{M}_{ij}\,q_j}
\tag{7-46}
\]

Generally, $\rho_1$ is larger than $\lambda_1$, in agreement with (7-26). Only if
$\Phi^{(1)} \in V_m$, in which case modal coordinates $q_1, \ldots, q_m$ exist so that
$\Phi^{(1)} = q_1\Psi^{(1)} + \cdots + q_m\Psi^{(m)}$, will $\rho_1 = \lambda_1$.

The necessary condition for a minimum is that
\[
\frac{\partial}{\partial q_i}\left(\frac{q^T \tilde{K} q}{q^T \tilde{M} q}\right) = 0
\,,\quad i = 1, \ldots, m
\quad\Rightarrow\quad
\frac{q^T \tilde{M} q\,\dfrac{\partial}{\partial q_i}\big(q^T \tilde{K} q\big)
- q^T \tilde{K} q\,\dfrac{\partial}{\partial q_i}\big(q^T \tilde{M} q\big)}
{\big(q^T \tilde{M} q\big)^2} = 0
\quad\Rightarrow\quad
\frac{\partial}{\partial q_i}\big(q^T \tilde{K} q\big)
- \rho_1\,\frac{\partial}{\partial q_i}\big(q^T \tilde{M} q\big) = 0
\tag{7-47}
\]

Now, $\frac{\partial}{\partial q_i}\big(q^T \tilde{K} q\big)
= \frac{\partial}{\partial q_i}\sum_{j=1}^m \sum_{k=1}^m q_j \tilde{K}_{jk} q_k
= 2\sum_{k=1}^m \tilde{K}_{ik} q_k$, where the symmetry property
$\tilde{K}_{jk} = \tilde{K}_{kj}$ $\big(\tilde{K} = \tilde{K}^T\big)$ has been applied. Similarly,
$\frac{\partial}{\partial q_i}\big(q^T \tilde{M} q\big) = 2\sum_{k=1}^m \tilde{M}_{ik} q_k$. Then
the minimum condition (7-47) reduces to

\[
\sum_{j=1}^m \tilde{K}_{ij} q_j - \rho_1 \sum_{j=1}^m \tilde{M}_{ij} q_j = 0
\tag{7-48}
\]

From (7-48) it follows that $\rho_1$ is determined as the lowest eigenvalue of the following
generalized eigenvalue problem of dimension $m$

\[
\tilde{K} q - \rho\,\tilde{M} q = 0
\tag{7-49}
\]

(7-49) has $m$ eigensolutions $\big(\rho_i, q^{(i)}\big)$, $i = 1, \ldots, m$. $\rho_i$ becomes an
approximation to the $i$th eigenvalue $\lambda_i$. The corresponding approximation to the $i$th
eigenmode is calculated from

\[
\tilde{\Phi}^{(i)} = q_{1,i}\Psi^{(1)} + \cdots + q_{m,i}\Psi^{(m)} = \Psi q^{(i)}
\,,\quad i = 1, \ldots, m
\tag{7-50}
\]

where $q_{1,i}, \ldots, q_{m,i}$ denote the components of $q^{(i)}$.

The relations (7-50) can be assembled into the matrix equation

\[
\tilde{\Phi} = \Psi Q
\tag{7-51}
\]

\[
\tilde{\Phi} = \big[\tilde{\Phi}^{(1)}\ \tilde{\Phi}^{(2)}\ \cdots\ \tilde{\Phi}^{(m)}\big]\,,\qquad
Q = \big[q^{(1)}\ q^{(2)}\ \cdots\ q^{(m)}\big]
\tag{7-52}
\]

We shall assume that the eigenvectors $q^{(i)}$ are normalized to unit modal mass with respect to
the projected mass matrix, i.e. the following orthonormality properties are fulfilled

\[
q^{(i)\,T}\tilde{M} q^{(j)} =
\begin{cases} 0 & ,\ i \neq j \\ 1 & ,\ i = j \end{cases}
\tag{7-53}
\]
\[
q^{(i)\,T}\tilde{K} q^{(j)} =
\begin{cases} 0 & ,\ i \neq j \\ \rho_i & ,\ i = j \end{cases}
\tag{7-54}
\]

Then the modal masses of the eigenmodes $\tilde{\Phi}$ become

\[
\tilde{\Phi}^T M \tilde{\Phi} = \big(\Psi Q\big)^T M\,\Psi Q
= Q^T \tilde{M} Q = I
\tag{7-55}
\]

Hence, the approximate eigenmodes $\tilde{\Phi}^{(i)}$ will be normalized to unit modal mass, if
this is the case for the eigenvectors $q^{(i)}$ with respect to the projected mass matrix.
$\tilde{\Phi}$ forms an alternative Ritz basis in $V_m$, which in addition is M-orthonormal.
Similarly, the approximate eigenmodes are K-orthogonal as follows

\[
\tilde{\Phi}^T K \tilde{\Phi} = \big(\Psi Q\big)^T K\,\Psi Q
= Q^T \tilde{K} Q = R
\tag{7-56}
\]

where $R$ is an $m$-dimensional diagonal matrix with the eigenvalues $\rho_1, \ldots, \rho_m$ in
the main diagonal.

Obviously, the Rayleigh quotient approach corresponds to $m = 1$. Hence, Ritz analysis is merely a
multi-dimensional generalization, for which reason the name Rayleigh-Ritz analysis has been coined
for the method.

As a generalization of (7-26), the following boundings can be proved [2]

\[
\lambda_1 \leq \rho_1\,,\quad \lambda_2 \leq \rho_2\,,\ \ldots\,,\quad
\lambda_m \leq \rho_m \leq \lambda_n
\tag{7-57}
\]

[2] K.-J. Bathe: Finite Element Procedures. Prentice Hall, Inc., 1996.

Box 7.2: Rayleigh-Ritz algorithm

1. Estimate $m$ linearly independent static load vectors $f_1, \ldots, f_m$, assembled column-wise
   in the $n \times m$ matrix $f = [f_1\ f_2\ \cdots\ f_m]$.

2. Calculate the Ritz basis from $\Psi = K^{-1} f$,
   $\Psi = \big[\Psi^{(1)}\ \Psi^{(2)}\ \cdots\ \Psi^{(m)}\big]$.

3. Calculate the projected mass and stiffness matrices in the $m$-dimensional subspace spanned by
   the Ritz basis: $\tilde{M} = \Psi^T M \Psi$, $\tilde{K} = \Psi^T K \Psi$.

4. Solve the generalized eigenvalue problem of dimension $m$: $\tilde{K} Q = \tilde{M} Q R$.

5. Determine approximations to the lowest $m$ eigenvectors from the transformation
   $\tilde{\Phi} = \Psi Q$,
   $\tilde{\Phi} = \big[\tilde{\Phi}^{(1)}\ \tilde{\Phi}^{(2)}\ \cdots\ \tilde{\Phi}^{(m)}\big]$.
   The corresponding approximate eigenvalues are contained in the main diagonal of $R$.
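The five steps of Box 7.2 amount to only a few lines of MATLAB. The sketch below is a direct
transcription, added for illustration (the normalization loop enforces (7-53), since the scaling
returned by eig is arbitrary):

```matlab
% Rayleigh-Ritz analysis, cf. Box 7.2. f holds the m static load vectors.
function [R, Phitilde] = rayleigh_ritz(M, K, f)
    Psi = K \ f;                          % Ritz basis, step 2
    Mtilde = Psi' * M * Psi;              % projected mass matrix, step 3
    Ktilde = Psi' * K * Psi;              % projected stiffness matrix
    [Q, R] = eig(Ktilde, Mtilde);         % reduced GEVP, step 4
    for i = 1:size(Q, 2)                  % normalize to unit modal mass, cf. (7-53)
        Q(:,i) = Q(:,i) / sqrt(Q(:,i)' * Mtilde * Q(:,i));
    end
    Phitilde = Psi * Q;                   % approximate eigenmodes, step 5
end
```

Applied to the data of Example 7.2 below, the function reproduces (7-71)-(7-72) up to the signs of
the eigenvectors.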
Returning to the static condensation problem in Section 7.1, let us define a Ritz basis of the
dimension $m = n_1$ as

\[
\Psi_1 = \begin{bmatrix} I \\ -K_{22}^{-1} K_{21} \end{bmatrix}
\tag{7-58}
\]

where $I$ is a unit matrix of dimension $n_1 \times n_1$. Given the structure of the mass and
stiffness matrices in (7-1), we may then evaluate the following projected matrices

\[
\tilde{M} = \Psi_1^T M \Psi_1
= \begin{bmatrix} I \\ -K_{22}^{-1} K_{21} \end{bmatrix}^T
\begin{bmatrix} M_{11} & 0 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} I \\ -K_{22}^{-1} K_{21} \end{bmatrix}
= M_{11}
\tag{7-59}
\]

\[
\tilde{K} = \Psi_1^T K \Psi_1
= \begin{bmatrix} I \\ -K_{22}^{-1} K_{21} \end{bmatrix}^T
\begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix}
\begin{bmatrix} I \\ -K_{22}^{-1} K_{21} \end{bmatrix}
= K_{11} - K_{12} K_{22}^{-1} K_{21} = \hat{K}_{11}
\tag{7-60}
\]

Hence, (7-49) reduces to the generalized eigenvalue problem (7-6), with $Q = \Phi_{11}$ and
$R = \Lambda_1$. Consequently, static condensation may be interpreted as merely a Rayleigh-Ritz
analysis with the Ritz basis (7-58).

The following identity may be proved by insertion

\[
\begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix}
\begin{bmatrix} I \\ -K_{22}^{-1} K_{21} \end{bmatrix}
\Big(K_{11} - K_{12} K_{22}^{-1} K_{21}\Big)^{-1}
= \begin{bmatrix} I \\ 0 \end{bmatrix}
\tag{7-61}
\]

Then we may construct an alternative Ritz basis from (7-41), with the static load given as the
right-hand side of (7-61), i.e.

\[
\Psi_2 = K^{-1} f
= \begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix}^{-1}
\begin{bmatrix} I \\ 0 \end{bmatrix}
= \begin{bmatrix} I \\ -K_{22}^{-1} K_{21} \end{bmatrix}
\Big(K_{11} - K_{12} K_{22}^{-1} K_{21}\Big)^{-1}
= \Psi_1\,\hat{K}_{11}^{-1}
\tag{7-62}
\]

Hence, the base vectors in $\Psi_2$ are linear combinations of the base vectors in $\Psi_1$. Then
$\Psi_1$ and $\Psi_2$ span the same subspace $V_{n_1}$, for which reason both bases will determine
the same eigenvalues and eigenvectors.

The projected mass and stiffness matrices become

\[
\tilde{M} = \Psi_2^T M \Psi_2
= \hat{K}_{11}^{-1}\,\Psi_1^T M \Psi_1\,\hat{K}_{11}^{-1}
= \hat{K}_{11}^{-1} M_{11} \hat{K}_{11}^{-1}
\tag{7-63}
\]

\[
\tilde{K} = \Psi_2^T K \Psi_2
= \hat{K}_{11}^{-1}\,\Psi_1^T K \Psi_1\,\hat{K}_{11}^{-1}
= \hat{K}_{11}^{-1}
\tag{7-64}
\]

Then the modal matrices $Q_1$ and $Q_2$, obtained as solutions to (7-49) for the respective Ritz
bases, are seen to be related as

\[
Q_1 = \hat{K}_{11}^{-1}\,Q_2
\tag{7-65}
\]

(7-65) implies that $\tilde{\Phi} = \Psi_1 Q_1 = \Psi_2 Q_2$.
Example 7.2: Rayleigh-Ritz analysis
Given the following mass- and stiffness matrices
M=
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_ , K =
_

_
2 1 0
1 4 1
0 1 2
_

_ (766)
which have the exact eigensolutions, cf. Example 6.3
=
_

1
0 0
0
2
0
0 0
3
_

_ =
_

_
2 0 0
0 4 0
0 0 6
_

_ , =
_

(1)

(2)

(3)
_
=
_

2
2
1

2
2

2
2
0

2
2

2
2
1

2
2
_

_ (767)
A two dimensional Rayleigh-Ritz analysis is performed, where the static load vectors are estimated as
f =
_

_
1 0
0 0
0 1
_

_ (768)
46 Chapter 7 APPROXIMATE SOLUTION METHODS
The Ritz basis becomes
=
_

_
2 1 0
1 4 1
0 1 2
_

_
1
_

_
1 0
0 0
0 1
_

_ =
1
12
_

_
7 1
2 2
1 7
_

_ (769)
The projected mass and stiffness matrices become

M=
1
144
_
29 11
11 29
_
,

K =
1
12
_
7 1
1 7
_
(770)
The eigensolutions with modal masses normalized to 1 become
R =
_

1
0
0
2
_
=
_
2.4 0
0 4
_
, Q =
_
q
(1)
q
(2)
_
=
_
3

5
2
3

5
2
_
(771)
The solutions for the eigenvectors become

=
_

(1)

(2)
_
=
1
12
_

_
7 1
2 2
1 7
_

_
_
3

5
2
3

5
2
_
=
_

_
2

5
1
1

5
0
2

5
1
_

_
(772)
As seen from (7-71) and (7-72)
2
= 4 and

(2)
are calculated exact. This is so, because
(2)
is placed in the
subspace spanned by the selected Ritz basis as seen from the expansion

(2)
= 2
(1)
2
(2)
=
2
12
_

_
7
2
1
_

_
2
12
_

_
1
2
7
_

_ =
_

_
1
0
1
_

_ (773)
Next, a new analysis is performed, where the static load vectors are estimated as
f =
_

_
1 0
1 1
1 0
_

_ (774)
The Ritz basis becomes
=
_

_
2 1 0
1 4 1
0 1 2
_

_
1
_

_
1 0
1 1
1 0
_

_ =
1
6
_

_
5 1
4 2
5 1
_

_ (775)
The projected mass and stiffness matrices become
7.3 Error Analysis 47

M=
1
36
_
41 13
13 5
_
,

K =
1
3
_
7 2
2 1
_
(776)
The eigensolutions with modal masses normalized to 1 become
R =
_

1
0
0
2
_
=
_
2 0
0 6
_
, Q =
_
q
(1)
q
(2)
_
=
_
2
2

3

2
2

2
2
9

2
2
_
(777)
The solutions for the eigenvectors become

=
_

(1)

(2)
_
=
1
6
_

_
5 1
4 2
5 1
_

_
_
2
2

3

2
2

2
2
9

2
2
_
=
_

2
2

2
2

2
2

2
2

2
2

2
2
_

_ (778)
In this case
_

1
,

(1)
_
=
_

1
,
(1)
_
and
_

2
,

(2)
_
=
_

3
,
(3)
_
. This is so, because
(1)
and
(3)
are placed
in the sub-space spanned by the selected Ritz basis.
7.3 Error Analysis
Given a certain approximation to the jth eigen-pair (

j
,

(j)
), the error vector is dened as

j
=
_
K

j
M
_

(j)
(779)
Presuming that the eigenvalues have been normalized to unit modal mass, it follows from (6-15)
and (6-17) that
M=
_

1
_
T
I
1
, K =
_

1
_
T

1
(780)
Insertion of (7-80) into (7-79) provides

j
=
_

1
_
T
_

j
I
_

(j)

(j)
=
_

j
I
_
1

j
(781)
We shall use the Euclidean vector norm
E
and the Hilbert matrix norm
H
in the following.
For a denition of these quantities, see Box 7.3. The mentioned norms are compatible, so
_
_

(j)
_
_
E

_
_
_
_

j
I
_
1

T
_
_
_
H

E

_
_

_
_
H
_
_
_
_

j
I
_
1
_
_
_
H
_
_

T
_
_
H

E
(782)
48 Chapter 7 APPROXIMATE SOLUTION METHODS
The last statement of (7-82) follows from the dening properties of matrix norms, see Box 7.3.
(

j
I) is a diagonal matrix. Then, (

j
I)
1
is also a diagonal matrix with the components
(
k

j
)
1
, k = 1, . . . , n in the main diagonal. The eigenvalues of a diagonal matrix is equal
to the components in the main diagonal. Since, the Hilbert norm of a symmetric matrix is equal
to the numerical largest eigenvalue, it follows that
_
_
_
_

j
I
_
1
_
_
_
H
= max
k=1,...,n
_
1
|
k

j
|
_
=
1
min
k=1,...,n
|
k

j
|
(783)
The Hilbert norms of and
T
are identical as stated in Box 7.3. Further, it can be shown that,
see Box 7.4

2
H
=
1

1
(784)
where
1
is the lowest eigenvalue of M.
Then, (7-82), (7-83) and (7-84) provides the following bounding of the calculated eigenvalue

j
min
k=1,...,n
|
k

j
|
1

(j)

E
=
1

1
|
j
|
|

(j)
|
(785)
(7-85) is only of value, if
1
can be calculated relatively easily. This is the case for the special
eigenvalue problem, where M= I, which means that
1
= =
n
= 1, so (7-85) reduces to
min
k=1,...,n
|
k

j
|

j

(j)

E
=
|
j
|
|

(j)
|
(786)
7.3 Error Analysis 49
Box 7.3: Vector and matrix norms
A vector norm is a real number v associated to any n-dimensional vector v, which
fullls the following conditions
1. v > 0 for v = 0 and 0 = 0.
2. cv = |c| v for any complex or real number c.
3. u +v u +v (triangle inequality).
The most common vector norms are
1. p-norm (p ]0, [): v
p
=
_
n

i=1
|v
i
|
p
_
1/p
.
2. One norm (p = 1): v
1
=
n

i=1
|v
i
|.
3. Two norm (p = 2, Euclidean norm): v
2
= |v| =
_
n

i=1
|v
i
|
2
_
1/2
.
4. Innity norm (p = ): v

= max
i=1,...,n
|v
i
|.
where v
i
denotes the components of v. Given
v =
_

_
1
3
2
_

_

_

_
v
1/2
=
_
1 +

3 +

2
_
2
= 17.19
v
1
=
_
1 + 3 + 2
_
= 6
v
2
=
_
1
2
+ 3
2
+ 2
2
_
1/2
= 3.74
v

= max(1, 3, 2) = 3
(787)
A matrix norm is a real number A associated to any n n matrix A, which fullls the
following conditions
1. A > 0 for A = 0 and 0 = 0.
2. cA = |c| A for any complex or real number c.
3. A+B A +B (triangle inequality).
4. AB AB.
50 Chapter 7 APPROXIMATE SOLUTION METHODS
The most common matrix norms are
1. One norm: A
1
= max
j=1,...,n
n

i=1
|a
ij
|.
2. Innity norm: A

= max
i=1,...,n
n

j=1
|a
ij
|.
3. Euclidean norm: A
E
=
_
n

i=1
n

j=1
a
2
ij
_
1/2
.
4. Hilbert norm (spectral norm): A
H
=
_
max
i=1,...,n

i
_
1/2
, where
i
is the ith eigen-
value of AA
T
identical to the eigenvalues of A
T
A, so A
H
= A
T

H
.
a
ij
denotes the components of A. Notice, if A = A
T
the eigenvalues of AA
T
= A
2
becomes equal to the square of the eigenvalues of A. Given
A =
_
2 5
3 1
_
AA
T
=
_
29 11
11 10
_

_

_
A
1
= max(2 + 3, 5 + 1) = 6
A

= max(2 + 5, 3 + 1) = 7
A
E
=
_
4 + 25 + 9 + 1
_
1/2
= 6.24
A
H
=
_
13
2
_
3 +

5
_
= 5.83
(788)
A matrix norm
m
is said to be compatible to a given vector norm
v
, if
Av
v
A
m
v
v
(789)
It can be shown that the Hilbert matrix norm is compatible to the Euclidean vector norm,
that the one matrix norm is compatible to the one vector norm, and that the innity matrix
norm is compatible to the innity vector norm. However, the Euclidean matrix norm is
not compatible to the Euclidean vector norm.
7.3 Error Analysis 51
Box 7.4: Hilbert norm of modal matrix
Presuming that the columns of the modal matrix have been normalized to unit modal
mass, so m= I, it follows from (6-15) that
M=
_

T
_
1

1
M
1
=
T
(790)
From the denition of the Hilbert norm in Box 7.3 and (7-90) follows that
2
H
becomes
equal to the maximum eigenvalue of M
1
. If
1
,
2
, . . . ,
n
denote the eigenvalues
of M in ascending order, then the eigenvalues in ascending order of M
1
become
1

n
, . . . ,
1

2
,
1

1
, so the maximum eigenvalue of M
1
is equal to
1

1
. This proves (7-84).
Example 7.3: Bound on calculated eigenvalue
Given the mass- and stiffness matrices for the following special eigenvalue problem
M=
_

_
1 0 0
0 1 0
0 0 1
_

_ , K =
_

_
3 1 0
1 2 1
0 1 3
_

_ (791)
The eigensolutions with modal masses normalized to 1 are given as
=
_

1
0 0
0
2
0
0 0
3
_

_ =
_

_
1 0 0
0 3 0
0 0 4
_

_ , =
_

(1)

(2)

(3)
_
=
_

6
6

2
2

3
3
2

6
6
0

3
3

6
6

2
2

3
3
_

_ (792)
Assume that the following approximate solution , (

2
,

(2)
), has been calculated to the 2nd eigen-pair (
2
,
(2)
)

2
= 3.1 ,

(2)
=
_

_
1.0
0.2
1.0
_

(2)

= 1.4283 (793)
Then, the error vector becomes, cf. (7-79)

2
=
_
_
_
_

_
3 1 0
1 2 1
0 1 3
_

_ 3.1
_

_
1 0 0
0 1 0
0 0 1
_

_
_
_
_
_

_
1.0
0.2
1.0
_

_ =
_

_
0.30
0.22
0.10
_

= 0.3852 (794)
Since, M= I we may use the simplied result (7-86), which provides
|
2

2
|
0.3852
1.4283
= 0.26971 (795)
Actually, |
2

2
| = 0.1.
52 Chapter 7 APPROXIMATE SOLUTION METHODS
7.4 Exercises
7.1 Given the following mass- and stiffness matrices
M=
_

_
0 0 0
0 2 1
0 1 1
_

_
, K =
_

_
6 1 0
1 4 1
0 1 2
_

_
(a.) Perform a static condensation by the conventional procedure based on (7-5), (7-6),
and next by a Rayleigh-Ritz analysis with the Ritz basis given by (7-62).
7.2 Given the following mass- and stiffness matrices
M=
_

_
2 0 0
0 2 1
0 1 1
_

_
, K =
_

_
6 1 0
1 4 1
0 1 2
_

_
(a.) Calculate approximate eigenvalues and eigenmodes by a Rayleigh-Ritz analysis us-
ing the following Ritz basis
= [
(1)

(2)
] =
_

_
1 1
1 1
1 1
_

_
7.3 Consider the mass- and stiffness matrices in Exercise 7.2, and let
v =
_

_
1
1
1
_

_
(a.) Calculate the vector

(1)
= K
1
Mv, and next

1
=
_

(1)
_
, as approximate solu-
tions to the lowest eigenmode and eigenvalue.
(b.) Establish the error bound for the obtained approximation to the lowest eigenvalue.
CHAPTER 8
VECTOR ITERATION METHODS
8.1 Introduction
In structural dynamics only a small number n
1
of the lowest eigen-pairs,
_

1
,
(1)
_
,
_

2
,
(2)
_
,
. . . ,
_

n
1
,
(n
1
)
_
, where n
1
n, are of structural signicance. Hence, there is a need for meth-
ods, which concentrate on the determination of the low-order modes. This is the underlying
motivation for most of the methods described in the following chapters.
It should be noticed that if
j
is known, then
(j)
can be determined from the linear, homoge-
neous equations, cf. (6-5)
_
K
j
M
_

(j)
= 0 (81)
If
j
is an eigenvalue, the coefcient matrix K
j
Mis singular. Then,
(j)
can be determined
within a common factor by solving a linear system of n 1 equations as illustrated in Example
6.3.
On the other hand, if
(j)
is known, the eigenvalue
j
can be determined from the Rayleigh
quotient, cf. (7-25)

j
=

(j) T
K
(j)

(j) T
M
(j)
(82)
Since, the eigenvalues are determined as solutions to the characteristic equation (6-7), which
can only be solved analytically for n 4, all solution methods for practical problems relies
implicitly or explicitly on iterative numerical schemes.
Iterative numerical solution methods may be classied in the following categories
Vector iteration methods operate directly on the generalized eigenvalue problem (8-1), so that
a certain eigenvalue and associated eigenmode are determined iteratively with increasing accu-
racy. Vector iteration methods are considered both in Chapters 8 and 10.
53
54 Chapter 8 VECTOR ITERATION METHODS
Similarity transformation methods transform the generalized eigenvalue problem via a sequence
of similarity transformations, so the transformed mass and stiffness matrices eventually attain a
diagonal form. These methods are considered in Chapter 9.
Characteristic polynomial iteration methods operates directly or indirectly on the characteristic
equation (6-7). These methods are dealt with in Section 10.4.
8.2 Inverse and Forward Vector Iteration
The principle in inverse vector iteration may be explained in the following way. Given a start
vector,
0
. Based on the generalized eigenvalue problem (8-1), we may then calculate a new
vector
1
as follows
K
1
= M
0

1
= K
1
M
0
= A
0
(83)
where
A = K
1
M (84)
Clearly, if
0
=
(j)
is an eigenmode, then
1
=
1

0
. If not so, we may consider
1
as another, and hopefully better approximation to the eigenmode. Next, based on
1
we may
proceed to calculate a still better approximation
2
from
K
2
= M
1

2
= A
1
(85)
This proceed may be continued until the convergence criteria
k+1
=
1

k
is fullled with
sufcient accuracy.
The inverse vector iteration algorithm may then be summarized as follows
Box 8.1: Inverse vector iteration algorithm
Given start vector
0
, which needs not be normalized to unit modal mass. Repeat the
following items for k = 0, 1, . . .
1. Calculate

k+1
= A
k
.
2. Normalize solution vector to unit modal mass, so
T
k+1
M
k+1
= 1:

k+1
=

k+1
_

T
k+1
M

k+1
8.2 Inverse and Forward Vector Iteration 55
Obviously, the algorithm requires that the stiffness matrix is non-singular, so the inverse K
1
exists. By contrast the mass matrix needs not be non-singular as is the case in Example 8.1 be-
low. After convergence the lowest eigenvalue is most accurately calculated from the Rayleigh
quotient (7-25).
In case the lowest eigenvalue is simple, i.e. that
1
<
2
, the inverse iteration algorithm con-
verges towards the lowest eigenpair (
1
,
(1)
). The solution vector obtained after the kth itera-
tion step,
k
, is an n-dimensional vector, which may be expanded in the basis formed by the n
undamped eigenmodes as follows

k
= q
1,k

(1)
+ q
2,k

(2)
+ + q
n,k

(n)
= q
k
= [
(1)

(2)

(n)
] , q
k
=
_

_
q
1,k
q
2,k
.
.
.
q
n,k
_

_
_

_
(86)
The components of the vector q
k
are denoted the modal coordinates of the vector
k
. The
expansion (8-6) should be considered as formal, since the base vectors
(1)
,
(2)
, . . . ,
(n)
are unknown. Actually, the whole analysis deals with the determination of these quantities.
Similarly, the expansion for

k+1
reads

k+1
= q
k+1
(87)
where q
k+1
denotes a vector of modal coordinates of

k+1
. Insertion of (8-6) and (8-7) into the
iteration algorithm provides
K q
k+1
= Mq
k

T
K q
k+1
=
T
Mq
k

q
k+1
= q
k
(88)
where the orthogonality properties (6-15) and (6-17) have been used, and the eigenmodes are
assumed to be normalized to unit modal mass. The diagonal matrix is given by (6-11). As
k convergence implies that
1

j
q
k+1
= q
k
=
(j)
, where
(j)
signies the eigenmode in
the modal space. This means that
56 Chapter 8 VECTOR ITERATION METHODS

(j)
=
j

(j)

1
0 0
0
2
0
.
.
.
.
.
.
.
.
.
.
.
.
0 0
n
_

_
_

2
.
.
.

n
_

_
=
j
_

2
.
.
.

n
_

(j)
=
_

1
.
.
.

j1

j+1
.
.
.

n
_

_
=
_

_
0
.
.
.
0
1
0
.
.
.
0
_

_
(89)
The jth component of
(j)
is equal to 1, and the remaining components are zero.
Let the start vector be given as q
0
= [1, . . . , 1]
T
. Then, the following sequence of results may
be calculated from (8-8)
q
1
=
1
q
0
=
_

_
1

1
0 0
0
1

2
0
.
.
.
.
.
.
.
.
.
.
.
.
0 0
1

n
_

_
_

_
1
1
.
.
.
1
_

_
=
_

_
1

1
1

2
.
.
.
1

n
_

q
2
=
1
q
1
=
_

_
1

1
0 0
0
1

2
0
.
.
.
.
.
.
.
.
.
.
.
.
0 0
1

n
_

_
_

_
1

1
1

2
.
.
.
1

n
_

_
=
_

_
1

2
1
1

2
2
.
.
.
1

2
n
_

_

q
k
=
1
q
k1
=
_

_
1

1
0 0
0
1

2
0
.
.
.
.
.
.
.
.
.
.
.
.
0 0
1

n
_

_
_

_
1

k1
1
1

k1
2
.
.
.
1

k1
n
_

_
=
_

_
1

k
1
1

k
2
.
.
.
1

k
n
_

_
=
1

k
1
_

_
1
_

2
_
k
.
.
.
_

n
_
k
_

_
_

_
(810)
If
1
<
2

n
it follows from (8-10) that
8.2 Inverse and Forward Vector Iteration 57
lim
k

k
1
q
k
=
_

_
1
0
.
.
.
0
_

_
=
(1)
(811)
Hence, the algorithm converge to
(1)
in the modal space. The corresponding convergence to

(1)
then takes place in the physical space.
As seen from (8-11), |q
k
| 0 if
1
> 1, and |q
k
| if
1
< 1. This is the rationale behind
the normalization to unit modal mass of the iteration vector, performed at each iteration step in
the algorithm in Box 8.1.
The relative error of the iteration vector after the kth iteration step is dened from

1,k
=

k
1
q
k

(1)

(1)

k
1
q
k

(1)

2
_
2k
+
_

3
_
2k
+ +
_

n
_
2k
=
_

2
_
k

1 +
_

3
_
2k
+ +
_

n
_
2k
(812)
From (8-12) follows, that the relative error at large values of k has the magnitude
1,k

_

2
_
k
.
Based on the asymptotic behavior of the relative error, the convergence rate is dened from
r
1
= lim
k

1,k+1

1,k
= lim
k

k+1
1
q
k+1

(1)

k
1
q
k

(1)

=
lim
k

2
_
1 +
_

3
_
2k+2
+ +
_

n
_
2k+2
_
1 +
_

3
_
2k
+ +
_

n
_
2k
=

1

2
(813)
The last statement of (8-13) presumes that the eigenvalue
2
is simple, i.e. that
2
<
3
. It
follows from (8-12) that the smaller is the fraction

1

2
the faster will the convergence to the rst
eigenmode be. Hence, the convergence rate as dened by (8-13) should be small (despite lin-
guistic logics suggests the opposite). An vector iteration scheme, where the convergence rate is
proportional to

1

2
is said to have linear convergence. Hence, inverse vector iteration has linear
convergence.
The Rayleigh quotient based on
k
= q
k
becomes
(q
k
) =

T
k
K
k

T
k
M
k
=
q
T
k

T
Kq
k
q
T
k

T
Mq
k
=
q
T
k
q
k
q
T
k
q
k
(814)
58 Chapter 8 VECTOR ITERATION METHODS
The relative error of the Rayleigh quotient after the kth iteration step is dened from

2,k
=
(q
k
)
1

1
(815)
From (8-10) follows that
q
T
k
q
k
=
_

_
1

k
1
1

k
2
.
.
.
1

k
n
_

_
T _

1
0 0
0
2
0
.
.
.
.
.
.
.
.
.
.
.
.
0 0
n
_

_
_

_
1

k
1
1

k
2
.
.
.
1

k
n
_

_
=
1

2k1
1
+
1

2k1
2
+
1

2k1
3
+ +
1

2k1
n
q
T
k
q
k
=
_

_
1

k
1
1

k
2
.
.
.
1

k
n
_

_
T
_

_
1

k
1
1

k
2
.
.
.
1

k
n
_

_
=
1

2k
1
+
1

2k
2
+
1

2k
3
+ +
1

2k
n
_

_
(816)
Then, (8-15) may be written as

2,k
=
1

1
1

2k1
1
+
1

2k1
2
+
1

2k1
3
+ +
1

2k1
n
1

2k
1
+
1

2k
2
+
1

2k
3
+ +
1

2k
n
1 =
1 +
_

2
_
2k1
+
_

3
_
2k1
+ +
_

n
_
2k1
1 +
_

2
_
2k
+
_

3
_
2k
+ +
_

n
_
2k
1 =
_

2
_
2k1
1

1

2
+
_

3
_
2k1
_
1

1

3
_
+ +
_

n
_
2k1
_
1

1

n
_
1 +
_

2
_
2k
+
_

3
_
2k
+ +
_

n
_
2k
=
_

2
_
2k1
_
1

1

2
+
_
(817)
where the dots denote terms, which converge to zero as k . (8-17) shows that the relative
error of the Rayleigh quotient at large values of k has the magnitude
2,k

_

2
_
2k1
. Hence,
the relative error on the components of the eigenmode at a certain iteration step, as measured
by
2,k
, is signicantly larger than the relative error on the eigenvalue estimate, as determined
by the Rayleigh quotient.
The convergence rate of the Rayleigh quotient is dened from
8.2 Inverse and Forward Vector Iteration 59
r
2
= lim
k

2,k+1

2,k
= lim
k
_

2
_
2k+1
_
1

1

2
+
_
_

2
_
2k1
_
1

1

2
+
_
=
_

2
_
2
(818)
Hence, the Rayleigh quotient has quadratic convergence in inverse vector iteration.
Example 8.1: Inverse vector iteration
Consider the generalized eigenvalue problem dened by the mass and stiffness matrices in Example 7.1. Calculate
the lowest eigenvalue and eigenvector by inverse vector iteration using the inverse iteration algorithm described in
Box 8.1 with the start vector

0
=
_

_
1
1
1
1
_

_
(819)
The matrix A becomes, cf. (8-5), (7-16)
A =
_

_
2 1 0 0
1 2 1 0
0 1 2 1
0 0 1 1
_

_
1
_

_
0 0 0 0
0 2 0 0
0 0 0 0
0 0 0 1
_

_
=
_

_
0 2 0 1
0 4 0 2
0 4 0 3
0 4 0 4
_

_
(820)
At the 1st and 2nd iteration step the following calculations are performed
_

1
=
_

_
0 2 0 1
0 4 0 2
0 4 0 3
0 4 0 4
_

_
_

_
1
1
1
1
_

_
=
_

_
3
6
7
8
_

T
1
M

1
= 136

1
=
1

136
_

_
3
6
7
8
_

_
=
_

_
0.25725
0.51450
0.60025
0.68599
_

_
(821)
_

2
=
_

_
0 2 0 1
0 4 0 2
0 4 0 3
0 4 0 4
_

_
_

_
0.25725
0.51450
0.60025
0.68599
_

_
=
_

_
1.7150
3.4300
4.1160
4.8020
_

T
2
M

2
= 46.588

2
=
1

46.588
_

_
1.7150
3.4300
4.1160
4.8020
_

_
=
_

_
0.25126
0.50252
0.60302
0.70353
_

_
(822)
60 Chapter 8 VECTOR ITERATION METHODS
The Rayleigh quotient based on
2
provides the following estimate for
1
, cf. (7-25)
(
2
) =
_

_
0.25126
0.50252
0.60302
0.70353
_

_
T
_

_
2 1 0 0
1 2 1 0
0 1 2 1
0 0 1 1
_

_
_

_
0.25126
0.50252
0.60302
0.70353
_

_
_

_
0.25126
0.50252
0.60302
0.70353
_

_
T
_

_
0 0 0 0
0 2 0 0
0 0 0 0
0 0 0 1
_

_
_

_
0.25126
0.50252
0.60302
0.70353
_

_
= 0.1464646 (823)
The exact solutions are given as, cf. (7-24)

1
=
1
2

2
4
= 0.1464466 ,
(1)
=
_

_
1
4
1
2
1
4
+

2
4

2
2
_

_
=
_

_
0.25000
0.50000
0.60355
0.70711
_

_
(824)
The relative errors,
1
and
2
, on the calculation of the eigenvalue and the 1st component of
(1)
becomes

1,2
=
|
2

(1)
|
|
(1)
|
=
0.00458
1.0848
= 4.22 10
3

2,2
=
(
2
)
1

1
=
0.1464646 0.1464466
0.1464466
= 1.23 10
4
_

_
(825)
As seen the relative error on the components of the eigenmode is signicantly larger than the error on the the
Rayleigh quotient.
The generalized eigenvalue problem (6-5) may be reformulated on the form
M
(1)
=
1
MK
1
M
(1)

(1)
=
1
MK
1

(1)

K
1

(1)
=
1
K
1
MK
1

(1)
,
(1)
= M
(1)
(826)
From (8-26) the following Rayleigh quotient may be dened
(v) =
v
T
K
1
v
v
T
K
1
MK
1
v
(827)
If v =
(1)
= M
(1)
then (8-4) provides the limit
1
. An inverse vector iteration procedure
based on the formulation (8-26), (8-27) has been indicated in Box 8.2. The lowest eigenmode
8.2 Inverse and Forward Vector Iteration 61

(1)
can only be retrieved after convergence, if M
1
exists.
Box 8.2: Alternative inverse vector iteration algorithm
Given start vector
0
. Repeat the following items for k = 0, 1, . . .
1. Calculate v
k+1
= K
1

k
.
2. Calculate

k+1
= Mv
k+1
.
3. Calculate the Rayleigh quotient (8-29) for the test vector
k
by

k
_
=
v
T
k+1

k
v
T
k+1

k+1
_
=

T
k
K
1

T
k
K
1
MK
1

k
_
4. Normalize the new solution vector, so
T
k+1
K
1
MK
1

k+1
= 1

k+1
=

k+1
_
v
T
k+1

k+1
_
=

k+1
_

T
k
K
1
MK
1

k
_
5. After convergence the lowest eigenmode at the same iteration step is calculated from

k+1
= M
1

k+1
.
Example 8.2: Alternative inverse vector iteration
Consider the generalized eigenvalue problem dened by the mass and stiffness matrices in Example 6.2. Calculate
the lowest eigenvalue and eigenvector by inverse vector iteration using the alternative inverse vector iteration
algorithm in Box 8.2 with the start vector

0
=
_

_
1
1
1
_

_ (828)
The inverse stiffness matrix becomes, cf. (6-44)
K
1
=
_

_
2 1 0
1 4 1
0 1 2
_

_
1
=
1
12
_

_
7 2 1
2 4 2
1 2 7
_

_ (829)
At the 1st and 2nd iteration steps the following calculations are performed
62 Chapter 8 VECTOR ITERATION METHODS
_

_
v
1
=
1
12
_

_
7 2 1
2 4 2
1 2 7
_

_
_

_
1
1
1
_

_ =
1
6
_

_
5
4
5
_

1
=
1
6
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_
_

_
5
4
5
_

_ =
1
12
_

_
5
8
5
_

_ , v
T
1

1
=
1
6 12
_

_
5
4
5
_

_
T
_

_
5
8
5
_

_ =
41
36

0
_
=
v
T
1

0
v
T
1

1
=
36
6 41
_

_
5
4
5
_

_
T
_

_
1
1
1
_

_ =
84
41
= 2.0488

1
=

1
_
v
T
1

1
=
1
12
_
41
36
_

_
5
8
5
_

_ =
_

_
0.3904
0.6247
0.3904
_

_
(830)
_

_
v
2
=
1
12
_

_
7 2 1
2 4 2
1 2 7
_

_
_

_
0.3904
0.6247
0.3904
_

_ =
_

_
0.3644
0.3384
0.3644
_

2
=
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_
_

_
0.3644
0.3384
0.3644
_

_ =
_

_
0.1822
0.3384
0.1822
_

_ , v
T
2

2
=
_

_
0.3644
0.3384
0.3644
_

_
T
_

_
0.1822
0.3384
0.1822
_

_ = 0.2473

1
_
=
v
T
2

1
v
T
2

2
=
1
0.2473
_

_
0.3644
0.3384
0.3644
_

_
T
_

_
0.3904
0.6247
0.3904
_

_ = 2.0055

2
=

2
_
v
T
2

2
=
1

0.2473
_

_
0.1822
0.3384
0.1822
_

_ =
_

_
0.3664
0.6805
0.3664
_

_
(831)
The lowest eigenvector at the end of the 2nd iteration step becomes

2
= M
1

2
=
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_
1
_

_
0.3664
0.6805
0.3664
_

_ =
_

_
0.7328
0.6805
0.7328
_

_ (832)
The exact solutions are given as, cf. (6-54)

(1)
=
_

2
2

2
2

2
2
_

_ =
_

_
0.7071
0.7071
0.7071
_

_ ,
1
= 2 (833)
As for the simple formulation of the inverse vector iteration algorithm the convergence towards the exact eigen-
value takes place as a monotonously decreasing sequence of upper values, (
0
), (
1
), . . ..
8.2 Inverse and Forward Vector Iteration 63
The principle in forward vector iteration may also be explained based on the eigenvalue problem
(8-1). Given a start vector,
0
, a new vector
1
may be calculated as follows
K
0
= M
1

1
= M
1
K
0
= B
0
(834)
where
B = M
1
K (835)
Clearly, if
0
=
(j)
is an eigenmode, then
1
=
j

0
. If not so, a new and better approxi-
mation
2
may be calculated based on
1
as follows

2
= B
1
(836)
The process may be continued until converge is obtained. The forward vector iteration algo-
rithm may be summarized as follows
Box 8.3: Forward vector iteration algorithm
Given start vector
0
, which needs not be normalized to unit modal mass. Repeat the
following items for k = 0, 1, . . .
1. Calculate

k+1
= B
k
.
2. Normalize solution vector to unit modal mass, so
T
k+1
M
k+1
= 1

k+1
=

k+1
_

T
k+1
M

k+1
Obviously, the algorithm requires that the mass matrix is non-singular, so the inverse M
1
exists. By contrast the stiffness matrix needs not be non-singular. After convergence the eigen-
value is calculated from the Rayleigh quotient.
In case the largest eigenvalue is simple, i.e. that
n1
<
n
, the forward iteration algorithm
converges towards the largest eigenpair
_

n
,
(n)
_
. The convergence rate of the eigenmode
estimate is linear, and the convergence rate of the Rayleigh quotient is quadratic in the fraction

n1

n
. A proof of this has been given in Section 8.3.
64 Chapter 8 VECTOR ITERATION METHODS
Example 8.3: Forward vector iteration
Consider the generalized eigenvalue problem dened by the mass and stiffness matrices in Example 6.2. Calculate
the largest eigenvalue and eigenvector by forward vector iteration using the forward vector iteration algorithm in
Box 8.3 with the start vector

0
=
_

_
1
0
0
_

_ (837)
The matrix B becomes, cf. (8-35), (6-44)
B =
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_
1
_

_
2 1 0
1 4 1
0 1 2
_

_ =
_

_
4 2 0
1 4 1
0 2 4
_

_ (838)
At the 1st and 2nd iteration step the following calculations are performed
_

1
=
_

_
4 2 0
1 4 1
0 2 4
_

_
_

_
1
0
0
_

_ =
_

_
4
1
0
_

T
1
M

1
= 9

1
=
1

9
_

_
4
1
0
_

_ =
_

_
1.3333
0.3333
0
_

_
(839)
_

2
=
_

_
4 2 0
1 4 1
0 2 4
_

_
_

_
1.3333
0.3333
0
_

_ =
_

_
6.0000
2.6667
0.6667
_

T
2
M

2
= 25.333

2
=
1

25.333
_

_
6.0000
2.6667
0.6667
_

_ =
_

_
1.1921
0.5298
0.1325
_

_
(840)
The Rayleigh quotient based on
2
becomes
(
2
) =
_

_
1.1921
0.5298
0.1325
_

_
T
_

_
2 1 0
1 4 1
0 1 2
_

_
_

_
1.1921
0.5298
0.1325
_

_
_

_
1.1921
0.5298
0.1325
_

_
T
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_
_

_
1.1921
0.5298
0.1325
_

_
= 5.404 (841)
The results for the iteration vector and the Rayleigh quotient in the succeeding iteration steps become
8.3 Shift in Vector Iteration 65

3
=
_

_
1.0622
0.6276
0.2897
_

_ , (
3
) = 5.697

4
=
_

_
0.9584
0.6726
0.4204
_

_ , (
4
) = 5.855

5
=
_

_
0.8811
0.6923
0.5149
_

_ , (
5
) = 5.933
_

_
(842)
The exact solutions becomes, cf. (6-54)

(3)
=
_

2
2

2
2

2
2
_

_ =
_

_
0.7071
0.7071
0.7071
_

_ ,
3
= 6 (843)
The relative slow convergence of the algorithm to the exact solution is because the fraction

2

3
=
4
6
is relatively
high. Theoretically the relatively errors of the Rayleigh quotient after 5 iterations should be of the magnitude, cf.
(8-17)

2,5

_
4
6
_
251
_
1
4
6
_
= 0.0087 (844)
Actually, the error is slightly larger, namely

2,5
=
6 5.933
6
= 0.0112 (845)
8.3 Shift in Vector Iteration
Shift on the stiffness matrix in the eigenvalue problem (8-1) as indicated by (6-102) may be
appropriate both in relation to inverse and forward vector iteration, either in order to obtain
convergence to other eigen-pairs than (
1
,
(1)
) or (
n
,
(n)
), or to improve the convergence
rate of the iteration process.
Reference til formel (6-102). K is replaced by the matrix

K = K M. Algorithms in Box
8.1 and 8.3 are unchanged, if the matrices Aand B in (8-6) and (8-35) are redened as follows
A =

K
1
M (846)
66 Chapter 8 VECTOR ITERATION METHODS
B = M
1

K (847)
where

Kis dened by (6-103).
The Rayleigh quotient estimate of the eigenvalue
j
after the kth iteration step becomes

j
= (
k
) + =

T
k

K
k

T
k
M
k
+ (848)
In the modal space the inverse vector iteration with shift on the stiffness matrix can be written
as, cf. (8-8)

K q
k+1
= Mq
k

T
_
KM
_
q
k+1
=
T
Mq
k

_
I
_
q
k+1
= q
k
(849)
(8-49) is identical to (8-8), if
j
is replaced with
j
. With the same start vector q
0
=
[1, . . . , 1]
T
as in (8-10), the solution vector after the kth iteration step becomes, cf. (8-10)
q
k
=
_

_
1
(
1
)
k
.
.
.
1
(
j1
)
k
1
(
j
)
k
1
(
j+1
)
k
.
.
.
1
(
n
)
k
_

_
=
1
(
j
)
k
_

_
_

_
k
.
.
.
_

j

j1

_
k
1
_

j

j+1

_
k
.
.
.
_

_
k
_

_
(850)
where the jth eigenvalue fullls
|
j
| = min
i=1,...,n
|
i
| (851)
It then follows from (8-50) that
8.3 Shift in Vector Iteration 67
lim
k
_

_
k
q
k
=
_

_
0
.
.
.
0
1
0
.
.
.
0
_

_
=
(j)
(852)
Hence, for a value of fullling (8-51) the algorithm converge to
(j)
in the modal space. In
physical space the algorithm then converge to
(j)
. The convergence rate of the eigenmode
becomes, cf. (8-13)
r
1
= max
_

j1

j+1

_
(853)
Then, the corresponding convergence rate of the Rayleigh quotient is given as r
2
= r
2
1
.
0
0
0

j1

j1

j1

j+1

j+1

j+1

n1

n1

n1

n
a)
b)
c)

Fig. 81 Optimal position of shift parameter at inverse vector iteration. a) Convergence towards
j
. b) Conver-
gence towards
1
. c) Convergence towards
n
.
In case inverse vector iteration towards the jth eigenmode is attempted, the shift parameter
should be place in the vicinity of
j
as shown on Fig. 8.1a in order to obtain a small con-
vergence rate. It should be emphasized that any inverse vector iteration with shift should be
accompanied with a Sturm sequence check to insure that the calculated eigenvalue is indeed the

j
.
At inverse vector iteration towards the lowest eigenmode the convergence rate r
1
= |
1

|/|
2
| should be minimized. Hence, should be placed close to but below
1
, as shown
on Fig. 8.1b.
At inverse vector iteration towards the highest eigenmode the convergence rate r
1
= |
n1

|/|
n
| should be minimized. Hence, should be placed close to but above
n
, as shown
68 Chapter 8 VECTOR ITERATION METHODS
on Fig. 8.1c.
In case of forward with iteration with shift, (8-49) provides the solution after k iterations
q
k
=
_

_
(
1
)
k
.
.
.
(
j1
)
k
(
j
)
k
(
j+1
)
k
.
.
.
(
n
)
k
_

_
= (
j
)
k
_

_
_

_
k
.
.
.
_

j1

_
k
1
_

j+1

_
k
.
.
.
_

_
k
_

_
(854)
where the jth eigenvalue fullls
|
j
| = max
i=1,...,n
|
i
| (855)
Clearly, (8-55) has the solutions
j
=
1
or
j
=
n
. The former occurs, if is closest to
n
,
and the latter if is closest to
1
. Then, it follows form (8-54) that
lim
k
1
_

_
k
q
k
=
(j)
, j = 1, n (856)
For a value of fullling (8-55) the algorithm converge to
(j)
in the modal space, or to
(j)
in the physical space. Forward iteration with shift always converge to either the lowest or the
highest eigenmode depending on the magnitude of the shift parameter. The convergence rate of
the iteration vector becomes
r
1
= max
_

, . . . ,

j1

j+1

, . . . ,

_
(857)
Shift in forward vector iteration is not as useful as in inverse vector iteration, because the optimal
choice of the shift parameter is more difcult to specify. At forward vector iteration towards the
highest eigenmode the optimal shift parameter is typically placed somewhere in the middle of
the eigenvalue spectrum. Especially for = 0, (8-57) becomes
r
1
=

n1

n
(858)
as stated in Section 8.2 on forward iteration without shift.
8.3 Shift in Vector Iteration 69
Example 8.4: Forward vector iteration with shift
The problem in Example 8.3 is considered again. However, now a shift with = 3 is performed on the stiffness
matrix.
The matrix

Kbecomes, cf. (6-103), (6-44)

K =
_

_
2 1 0
1 4 1
0 1 2
_

_ 3
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_ =
_

_
1
2
1 0
1 1 1
0 1
1
2
_

_ (859)
The matrix B becomes, cf. (8-47), (6-44)
B =
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_
1
_

_
1
2
1 0
1 1 1
0 1
1
2
_

_ =
_

_
1 2 0
1 1 1
0 2 1
_

_ (860)
At the 1st and 2nd iteration step the following calculations are performed
_

1
=
_

_
1 2 0
1 1 1
0 2 1
_

_
_

_
1
0
0
_

_ =
_

_
1
1
0
_

T
1
M

1
= 1.5

1
=
1

1.5
_

_
1
1
0
_

_ =
_

_
0.8165
0.8165
0
_

_
(861)
_

2
=
_

_
1 2 0
1 1 1
0 2 1
_

_
_

_
0.8165
0.8165
0
_

_ =
_

_
2.4495
1.6330
1.6330
_

T
1
M

1
= 7

2
=
1

7
_

_
2.4495
1.6330
1.6330
_

_ =
_

_
0.9258
0.6172
0.6172
_

_
(862)
The Rayleigh quotient estimate of
3
based on
2
becomes, cf. (8-48)

3
= (
2
) + 3 =
_

_
0.9258
0.6172
0.6172
_

_
T
_

_
1
2
1 0
1 1 1
0 1
1
2
_

_
_

_
0.9258
0.6172
0.6172
_

_
_

_
0.9258
0.6172
0.6172
_

_
T
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_
_

_
0.9258
0.6172
0.6172
_

_
+ 3 = 2.9048 + 3 = 5.9048 (863)
The results for the iteration vector and the eigenvalue estimate in the succeeding iteration steps become
70 Chapter 8 VECTOR ITERATION METHODS

3
=
_

_
0.7318
0.7318
0.6273
_

_ ,

3
= 5.9891

4
=
_

_
0.7331
0.6982
0.6982
_

_ ,

3
= 5.9988

5
=
_

_
0.7100
0.7100
0.6983
_

_ ,

3
= 5.9999
_

_
(864)
The results in (8-64) should be compared to those in (8-42). As seen the convergence of the shifted problem is
much faster.
8.4 Inverse Vector Iteration with Rayleigh Quotient Shift
As demonstrated in Section 8.3 the convergence properties are improved if inverse vector a shift
on the stiffness matrix is applied, where the shift parameter
1
. The idea in the present
section is to update the shift parameter at each iteration step with the most recent estimate of
the lowest eigenvalue. Assume, that an estimate of the eigenvalue

1
is known after the kth
iteration step. Then, a shift with the parameter
k
=

1
is performed, so a new un-normalized
eigenmode estimate is calculated at the (k + 1)th iteration step from

k+1
=
_
K
k
M
_
1
M
k
(865)
where
k
is the normalized estimate of the eigenmode after the kth iteration step.
A new estimate of the eigenvalue, and hence the shift parameter, then follows from (8-48)

k+1
=

T
k+1
_
K
k
M
_

k+1

T
k+1
M

k+1
+
k
(866)
The convergnce towards
_

1
,
(1)
_
is not safe, since the rst shift determined by
1
may cause
convergence towards other eigen-pairs, especially if the rst and second eigenvalue are close.
For this reason the rst couples of iteration steps are often performed without shift. When the
convergence towards the rst eigen-pair takes place, the convergence rate of the Rayleigh quo-
tient estimate of the eigenvalue will be cubic, i.e. r
2
= (

2
)
3
. Additionally, the length of the
converge process is very much dependent on the start vector, as demonstrated in the succeeding
Example 8.5. Even though the convergence may be fast it should be realized that the process
requires inversion of the matrix K
k
M at each iteration step, which may be expensive for
8.4 Inverse Vector Iteration with Rayleigh Quotient Shift 71
large systems.
Box 8.4: Algorithm for inverse vector iteration with Rayleigh quotient shift
Given start vector
0
, which needs not be normalized to unit modal mass, and set the
initial shift to
0
= 0. Repeat the following items for k = 0, 1, . . .
1. Calculate

k+1
=
_
K
k
M
_
1
M
k
.
2. Calculate new shift parameter (new estimate on the eigenvalue) from the Rayleigh
quotient estimate based on

k+1
by

k+1
=

T
k+1
_
K
k
M
_

k+1

T
k+1
M

k+1
+
k
_
estimate on
1
_
3. Normalize the new solution vector to unit modal mass

k+1
=

k+1
_

T
k+1
M

k+1
Example 8.5: Inverse vector iteration with Rayleigh quotient shift
Consider the generalized eigenvalue problem dened by the mass and stiffness matrices in Example 6.2. Calculate
the lowest eigenvalue and eigenvector by inverse vector iteration with Rayleigh quotient shift with the start vector

0
=
_

_
1
0
0
_

_ (867)
At the 1st and 2nd iteration step the following calculations are performed
72 Chapter 8 VECTOR ITERATION METHODS
_

K =
_

_
2 1 0
1 4 1
0 1 2
_

_ 0
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_ =
_

_
2 1 0
1 4 1
0 1 2
_

1
=
_

_
2 1 0
1 4 1
0 1 2
_

_
1
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_
_

_
1
0
0
_

_ =
_

_
0.2917
0.0833
0.0417
_

T
1
M

1
= 0.05035

1
=
1
0.05035
_

_
0.2917
0.0833
0.0417
_

_
T
_

_
2 1 0
1 4 1
0 1 2
_

_
_

_
0.2917
0.0833
0.0417
_

_ + 0 = 2.8966

1
=
1

0.05035
_

_
0.2917
0.0833
0.0417
_

_ =
_

_
1.2999
0.3714
0.1857
_

_
(868)
_

K =
_

_
2 1 0
1 4 1
0 1 2
_

_ 2.8966
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_ =
_

_
0.5517 1.0000 0.0000
1.0000 1.1034 1.0000
0.0000 1.0000 0.5517
_

2
=
_

_
0.5517 1.0000 0.0000
1.0000 1.1034 1.0000
0.0000 1.0000 0.5517
_

_
1
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_
_

_
1.2999
0.3714
0.1857
_

_ =
_

_
0.0567
0.6812
1.0664
_

T
2
M

2
= 1.0342

2
=
1
1.0342
_

_
0.0567
0.6812
1.0664
_

_
T
_

_
0.5517 1.0000 0.0000
1.0000 1.1034 1.0000
0.0000 1.0000 0.5517
_

_
_

_
0.0567
0.6812
1.0664
_

_ + 2.8966 = 2.5206

2
=
1

1.0342
_

_
0.0567
0.6812
1.0664
_

_ =
_

_
0.0557
0.6698
1.0486
_

_
(869)
The results for the iteration vector and the eigenvalue estimate in the succeeding iteration steps become

3
=
_

_
0.9011
0.6830
0.5049
_

_ ,
3
= 2.0793

4
=
_

_
0.6985
0.7073
0.7152
_

_ ,
4
= 2.0001
_

_
(870)
Despite the shifts the convergence is very slow during the 1st and 2nd iteration step. Not until the 3rd and 4th step
a fast speed-up of the convergence takes place. This is due to the poor guess of the start vector.
8.5 Vector Iteration with Gram-Schmidt Orthogonalization 73
8.5 Vector Iteration with Gram-Schmidt Orthogonaliza-
tion
Inverse vector iteration or forward vector iteration with Gram-Schmidt orthogonalization is
used, when more than the eigen-pairs
_

1
,
(1)
_
or
_

n
,
(n)
_
are wanted.
Assume, that the eigenmodes
(1)
,
(2)
, . . . ,
(m)
, m < n, have been determined. Next, the
eigenmode
(m+1)
is wanted using inverse vector iteration by means of the algorithm in Box
8.1. In order to prevent the algorithm to converge toward
(1)
a cleansing of the vector
k+1
for information about the rst m eigenmodes is performed by a so-called Gram-Schmidt or-
thogonalization . In this respect the following modied iteration vector iteration algorithm is
considered

k+1
=

k+1

j=1
c
j

(j)
(871)
Inspired by the variational problem (7-31), where the test vector v is chosen to be M-orthogonal
to the previous determined eigenmodes, the modied iteration vector

k+1
is chosen to be M-
orthogonal on
(1)
,
(2)
, . . . ,
(m)
, i.e.

(i) T
M

k+1
= 0 , i = 1, . . . , m (872)
(8-71) is premultiplied with
(i) T
M. Assuming that the calculated eigenmodes have been
normalized to unit modal mass, it follows from (6-13), (8-71) and (8-72) that the expansion
coefcients c
1
, c
2
, . . . , c
m
are determined from
0 =
(i) T
M

k+1

j=1
c
j

(i) T
M
(j)
=
(i) T
M

k+1
c
i

c
i
=
(i) T
M

k+1
(873)
After insertion of the calculated expansion coefcients into (8-71),

k+1
is considered as the
estimate to
(m+1)
at the (k + 1)th iteration step. The convergence takes place with the linear
convergence rate r
1
=

m+1

m+2
.
In principle the orthogonalization process need only to be performed after the rst iteration
step, since all succeeding iteration vectors then will be orthogonal to the subspace spanned by

(1)
,
(2)
, . . . ,
(m)
. However, round-off errors inevitable introduce information about the rst
eigenmode. Obviously, the use of this so-called vector deation method becomes increasingly
cumbersome as m increases.
A similar orthogonalization process can be performed in relation to forward vector iteration to
ensure convergence to eigenmodes somewhat lower than the highest.
74 Chapter 8 VECTOR ITERATION METHODS
Box 8.5: Algorithm for inverse vector iteration with Gram-Schmidt orthogonalization
Given start vector
0
, which needs not be normalized to unit modal mass. Repeat the
following items for k = 0, 1, . . .
1. Calculate

k+1
= K
1
M
k
.
2. Orthogonalize iteration vector to previous calculated eigenmodes
(j)
, j = 1, . . . , m

k+1
=

k+1

j=1
c
j

(j)
, c
j
=
(j) T
M

k+1
3. Normalize the orthogonalized iteration vector to unit modal mass

k+1
=

k+1
_

T
k+1
M

k+1
Example 8.6: Inverse and forward vector iteration with Gram-Schmidt orthogonalization
Given the following mass- and stiffness matrices
M=
_

_
2 0 0 0
0 2 0 0
0 0 1 0
0 0 0 1
_

_
, K =
_

_
5 4 1 0
4 6 4 1
1 4 6 4
0 1 4 5
_

_
(874)
Further, assume that the lowest and highest eigenmodes have been determined by inverse and forward vector
iteration

(1)
=
_

_
0.31263
0.49548
0.47912
0.28979
_

_
,
(4)
=
_

_
0.10756
0.25563
0.72825
0.56197
_

_
(875)
Calculate
(2)
by inverse vector iteration with deation, and
(3)
by forward vector iteration with deation. In
both cases the following start vector is used

0
=
_

_
1
1
1
1
_

_
(876)
The matrices Aand B become
8.5 Vector Iteration with Gram-Schmidt Orthogonalization 75
A = K
1
M=
_

_
2.4 3.2 1.4 0.8
3.2 5.2 2.4 1.4
2.8 4.8 2.6 1.6
1.6 2.8 1.6 1.2
_

_
B = M
1
K =
_

_
2.5 2.0 0.5 0.0
2.0 3.0 2.0 0.5
1.0 4.0 6.0 4.0
0.0 1.0 4.0 5.0
_

_
_

_
(877)
At the 1st iteration step in the inverse iteration process towards
(2)
the following calculations are performed
_

1
=
_

_
2.4 3.2 1.4 0.8
3.2 5.2 2.4 1.4
2.8 4.8 2.6 1.6
1.6 2.8 1.6 1.2
_

_
_

_
1
1
1
1
_

_
=
_

_
7.8
11.2
11.8
7.2
_

_
c
1
=
(1) T
M

1
= 24.7067

1
=
_

_
7.8
11.2
11.8
7.2
_

_
24.7067
_

_
0.31263
0.49548
0.47912
0.28979
_

_
=
_

_
0.07595
0.04158
0.03740
0.04016
_

T
1
M

1
= 0.01801

1
=
1

0.01801
_

_
0.07595
0.04158
0.03740
0.04016
_

_
=
_

_
0.56599
0.30989
0.27871
0.29927
_

_
(878)
The results for the iteration vector in the succeeding iteration steps become

2
=
_

_
0.61639
0.14318
0.42383
0.13960
_

3
=
_

_
0.53412
0.02582
0.48439
0.43985
_

_
.
.
.

13
=
_

_
0.44527
0.12443
0.48944
0.57702
_

_
_

_
(879)
The process converged with the indicated digit after 13 iterations.
76 Chapter 8 VECTOR ITERATION METHODS
At the 1st iteration step in the forward iteration process towards
(3)
the following calculations are performed
_

1
=
_

_
2.5 2.0 0.5 0.0
2.0 3.0 2.0 0.5
1.0 4.0 6.0 4.0
0.0 1.0 4.0 5.0
_

_
_

_
1
1
1
1
_

_
=
_

_
1.0
0.5
1.0
2.0
_

_
c
4
=
(4) T
M

1
= 1.38144

1
=

1
c
4

(4)
=
_

_
1.0
0.5
1.0
2.0
_

_
1.38144
_

_
0.10756
0.25563
0.72825
0.56197
_

_
=
_

_
1.14859
0.85314
0.00604
1.22367
_

T
1
M

1
= 5.59161

1
=
1

5.59161
_

_
1.14859
0.85314
0.00604
1.22367
_

_
=
_

_
0.48573
0.36079
0.00256
0.51748
_

_
(880)
The results for the iteration vector in the succeeding iteration steps become

2
=
_

_
0.44542
0.41392
0.02891
0.50962
_

3
=
_

_
0.44063
0.41617
0.02534
0.51445
_

_
.
.
.

9
=
_

_
0.43867
0.41674
0.02322
0.51696
_

_
_

_
(881)
The process converged with the indicated digit after 9 iterations.
Based on the Rayleigh quotient estimates of the obtained eigenmodes the following eigenvalues may be calculated,
cf. (8-2)
=
_

1
0 0 0
0
2
0 0
0 0
3
0
= 0 0
4
_

_
=
_

_
0.09654 0 0 0
0 1.39147 0 0
0 0 4.37355 0
0 0 0 10.6384
_

_
(882)
8.6 Exercises 77
8.6 Exercises
8.1 Given the following mass- and stiffness matrices
M=
_

_
2 0 0
0 2 1
0 1 1
_

_
, K =
_

_
6 1 0
1 4 1
0 1 2
_

_
(a.) Perform two inverse iterations, and then calculate an approximation to
1
.
(b.) Perform two forward iterations, and then calculate an approximation to
3
.
8.2 Given the following mass- and stiffness matrices
M=
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_
, K =
_

_
2 1 0
1 4 1
0 1 2
_

_
The eigenmodes
(1)
are
(3)
are known to be, cf. (6-54)

(1)
=
_

2
2

2
2

2
2
_

_
,
(3)
=
_

2
2

2
2

2
2
_

_
(a.) Calculate
(2)
by means of Gram-Schmidt orthogonalization, and calculate all eigen-
values.
78 Chapter 8 VECTOR ITERATION METHODS
CHAPTER 9
SIMILARITY TRANSFORMATION
METHODS
9.1 Introduction
Iterative similarity transformation methods are based on a sequence of similarity transforma-
tions of the original generalized eigenvalue problem in order to reduce this to a simpler form.
The general form of a similarity transformation is dened by the following coordinate transfor-
mation of the eigenmodes

(j)
= P
(j)
(91)
where Pis the transformation matrix, and
(j)
and
(j)
signify the old and the new coordinates
of the eigenmode. Then, the eigenvalue problem (6-5) may be written

K
(j)
=
j

M
(j)

K = P
T
KP ,

M= P
T
MP
_
_
_
(92)
The eigenvalues
j
are unchanged under a similarity transformation, whereas the eigenmodes
are related by (9-1). In the iteration process the transformation matrix P is determined, so
this matrix converge toward the modal matrix = [
(1)

(2)

(n)
]. On condition that the
eigenmodes have been normalized to unit modal mass, it follows from (6-15) and (6-17) that

K =
T
K = , and

M =
T
M = I. Hence, after convergence of the iteration process
the eigenmodes are stored column-wise in P = , and the eigenvalues are stored in the main
diagonal of the diagonal matrix

K = . By contrast to vector iteration methods similarity
transformation methods determine all eigen-pairs
_

j
,
(j)
_
, j = 1, . . . , n.
The general format of the similarity iteration algorithm has been summarized in Box 9.1.
79
80 Chapter 9 SIMILARITY TRANSFORMATION METHODS
Box 9.1: Iterative similarity transformation algorithm
Let M
0
= M, K
0
= Kand
0
= I. Repeat the following items for k = 0, 1, . . .
1. Calculate appropriate transformation matrix P
k
at the kth iteration step.
2. Calculate updated transformation matrix and transformed mass and stiffness matrices

k+1
=
k
P
k
, M
k+1
= P
T
k
M
k
P
k
, K
k+1
= P
T
k
K
k
P
k
After convergence:
k = K

, m= M

=
_

j
1
0 0
0
j
2
0
.
.
.
.
.
.
.
.
.
.
.
.
0 0
j
n
_

_
= m
1
k , =
_

(j
1
)

(j
2
)

(j
n
)

1
2
Orthonormal transformation matrices fulll, cf. (6-19)
P
1
k
= P
T
k
(93)
For transformation methods operating on the generalized eigenvalue problem, such as the gen-
eral Jacobi iteration method considered in Section 9.2, the transformation matrices P
k
are not
orthonormal, in which case M
k
and K
k
converge towards the diagonal matrices m and k as
given by (6-16) and (6-18). The eigenvalue matrix and the normalized modal matrix are
retrieved as indicated in Box 9.1, where m

1
2
denotes a diagonal matrix with the components
1/
_
M
j
in the main diagonal.
Some similarity transformation algorithms are devised for the special eigenvalue problem, as
is the case for the special Jacobi iteration method in Section 9.1, and the Householder-QR
iteration method in Section 9.3. Hence, application of these methods require an initial similarity
transformation from a GEVP to a SEVP as explained in Section 6.5. This may be achieved by
specifying the transformation matrix of the transformation k = 0 in Box 9.1 as
P
0
= S
1
(94)
where S fullls (6-109). Then, M
1
= I. If the succeeding similarity transformation matrices
are orthonormal, then all transformed mass matrices become identity matrices as seen by induc-
tion from M
k+1
= P
T
k
M
k
P
k
= P
T
k
IP
k
= I. Moreover,
k+1
is orthonormal at each iteration
step, as seen by induction from
T
k+1

k+1
= P
T
k

T
k

k
P
k
= P
T
k
IP
k
= I.
9.2 Special Jacobi Iteration 81
Finally, it should be noticed that after convergence the sequence of eigenvalues in the main diag-
onal of and the eigenmodes in is not ordered in ascending magnitude of the corresponding
eigenvalues as indicated in Box 9.1, where the set of indices (j
1
, j
2
, . . . , j
n
) denotes an arbitrary
permutation of the numbers (1, 2, . . . , n).
9.2 Special Jacobi Iteration
The special Jacobi iteration algorithm operates on the special eigenvalue problem, so M = I
at the outset. The idea is to ensure during the kth transformation that the off-diagonal compo-
nent K
ij,k
, entering the ith and jth row and column of K
k
, becomes zero after the similarity
transformation. The transformation matrix is given as
i j
P
k
=
_

_
1 0 0 0 0 0 0
0 1 0 0 0 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 cos 0 sin 0 0
0 0 0 1 0 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 sin 0 cos 0 0
0 0 0 0 0 1 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 0 0 0 1
_

_
i
j
(95)
Basically, (9-5) is a identity matrix, where only the components P
ii
, P
ij
, P
ji
and P
jj
are differ-
ing. Obviously, (9-5) is orthonormal. The components of the updated similarity transformation
matrix, and the transformed stiffness matrix become
_

li,k+1
=
li,k
cos +
lj,k
sin , l = 1, . . . , n

lj,k+1
=
lj,k
cos
li,k
sin , l = 1, . . . , n
(96)
_

_
K
ii,k+1
= K
ii,k
cos
2
+ K
jj,k
sin
2
+ 2K
ij,k
cos sin
K
jj,k+1
= K
jj,k
cos
2
+ K
ii,k
sin
2
2K
ij,k
cos sin
K
ij,k+1
=
_
K
jj,k
K
ii,k
_
cos sin + K
ij,k
_
cos
2
sin
2

_
K
li,k+1
= K
il,k+1
= K
li,k
cos + K
lj,k
sin , l = i, j
K
lj,k+1
= K
jl,k+1
= K
lj,k
cos K
li,k
sin , l = i, j
(97)
The remaining components of
k+1
and K
k+1
are identical to those of
k
and K
k
. Hence, only
the ith and jth row and column of K
k
are affected by the transformation.
82 Chapter 9 SIMILARITY TRANSFORMATION METHODS
Box 9.2: Special Jacobi iteration algorithm
Let M
0
= I, K
0
= K and
0
= I. Repeat the following items for the sweeps m =
1, 2, . . .
1. Specify omission criteria
m
in the mth sweep.
2. Check, if the component K
ij,k
in the ith row and jth column of K
k
fullls the criteria

K
2
ij,k
K
ii,k
K
jj,k
<
m
3. If the criteria is fullled, then skip to the next component in the sweep. Else perform
the following calculations
(a.) Calculate the transformation angle from (9-8), and then the transformation
matrix P
k
as given by (9-5).
(b.) Calculate the components of the updated similarity transformation matrix

k+1
=
k
P
k
, and the transformed stiffness matrix K
k+1
= P
T
k
K
k
P
k
from (9-6) and (9-7). Notice that k after the mth sweep is of the magnitude
1
2
(n 1)n m.
After convergence:

= = [
(j
1
)

(j
2
)

(j
n
)
] , K

= =
_

j
1
0 0
0
j
2
0
.
.
.
.
.
.
.
.
.
.
.
.
0 0
j
n
_

_
Next, the angle is determined, so the off-diagonal component K
ij,k+1
becomes equal to zero
K
ij,k+1
=
1
2
_
K
ii,k
K
jj,k
_
sin 2 + K
ij,k
cos 2 = 0
_

_
=
1
2
arctan
_
2K
ij,k
K
ii,k
K
jj,k
_
, K
ii,k
= K
jj,k
=

4
, K
ii,k
= K
jj,k
(98)
Notice, that even though K
ij,k+1
= 0 after the transformation, a subsequent transformation
involving either the ith or jth row or column may reintroduce a non-zero value at this position.
Optimally, K
ij,k
should be selected as the numerically largest off-diagonal component in K
k
.
However, in practice the iteration process is often performed in so-called sweeps, where all
1
2
(n 1)n components above the main diagonal in turn are selected as the critical element to
become zero after the transformation. In this case the method is combined with a criteria for
9.2 Special Jacobi Iteration 83
omission of the similarity transformation, in case the component is numerically small. The
transformation is omitted, if

K
2
ij,k
K
ii,k
K
jj,k
<
m
(99)
where
m
is the omission value in the mth sweep.
Finally, it should be noticed that if K
0
has a banded structure, so non-zero components are
grouped in a band around the main diagonal, the banded structure is not preserved during the
transformation process as seen from Example 9.1, where the initial matrix K
0
is on a three di-
agonal form, whereas the transformed matrix K
1
is full, see (9-11) below.
The special Jacobi iteration algorithm can be summarized as indicated in Box 9.2.
Example 9.1: Special Jacobi iteration
Given a special eigenvalue problem with the stiffness matrix
K = K
0
=
_

_
2 1 0
1 4 1
0 1 2
_

_ ,
0
=
_

_
1 0 0
0 1 0
0 0 1
_

_ (910)
In the 1st sweep the following calculations are performed for (i, j) = (1, 2) :
_

_
=
1
2
arctan
_
2 (1)
2 4
_
= 0.3927
_
cos = 0.9239
sin = 0.3827
P
0
=
_

_
0.9239 0.3827 0
0.3827 0.9239 0
0 0 1
_

1
=
0
P
0
=
_

_
0.9239 0.3827 0
0.3827 0.9239 0
0 0 1
_

_ , K
1
= P
T
0
K
0
P
0
=
_

_
1.5858 0 0.3827
0 4.4142 0.9239
0.3827 0.9239 2
_

_
(911)
84 Chapter 9 SIMILARITY TRANSFORMATION METHODS
Next, the calculations are performed for (i, j) = (1, 3) :
_

_
=
1
2
arctan
_
2 (0.3827)
1.5858 2
_
= 0.5374
_
cos = 0.8591
sin = 0.5119
P
1
=
_

_
0.8591 0 0.5119
0 1 0
0.5119 0 0.8591
_

2
=
1
P
1
=
_

_
0.7937 0.3827 0.4729
0.3287 0.9238 0.1959
0.5119 0 0.8591
_

_ , K
2
= P
T
1
K
1
P
1
=
_

_
1.3578 0.4729 0
0.4729 4.4142 0.7937
0 0.7937 2.2280
_

_
(912)
Finally, to end the 1st sweep the calculations are performed for (i, j) = (2, 3) :
_

_
=
1
2
arctan
_
2 (0.7937)
4.4142 2.2280
_
= 0.3140
_
cos = 0.9511
sin = 0.3089
P
2
=
_

_
1 0 0
0 0.9511 0.3089
0 0.3089 0.9511
_

3
=
2
P
2
=
_

_
0.7937 0.2179 0.5680
0.3287 0.9392 0.0991
0.5119 0.2653 0.8171
_

_ , K
3
= P
T
2
K
2
P
2
=
_

_
1.3578 0.4498 0.1461
0.4498 4.6720 0
0.1461 0 1.9703
_

_
(913)

3
and K
3
represents the estimates of the modal matrix and after the 1st sweep. As seen the K
12,1
= 0,
whereas K
12,2
= 0.4729. This is in agreement with the statement above, that off-diagonal components set to
zero in one iteration, may attain non-zero values in a later iteration. Comparison of K
0
to K
3
shows that the
numerical maximum off-diagonal component has decreased from | 1| = 1 to | 0.4498| after the 1st sweep.
Hence, the algorithm is converging.
At the end of the 2nd and 3rd sweep the following estimates are obtained for the modal matrix and the eigenvalues
_

6
=
_

_
0.6276 0.3258 0.7071
0.4607 0.8876 0.0000
0.6276 0.3258 0.7071
_

_ , K
6
=
_

_
1.2680 0.0039 0.0000
0.0039 4.7320 0
0.0000 0 2.0000
_

9
=
_

_
0.6280 0.3251 0.7071
0.4597 0.8881 0.0000
0.6280 0.3251 0.7071
_

_ , K
9
=
_

_
1.2679 0.0000 0.0000
0.0000 4.7321 0
0.0000 0 2.0000
_

_
(914)
As seen the eigenmodes are stored column-wise in according to the permutation (j
1
, j
2
, j
3
) =
(1, 3, 2).
9.3 General Jacobi Iteration 85
9.3 General Jacobi Iteration
The general Jacobi iteration method operates on the generalized eigenvalue problem, i.e. M=
I. The idea of the transformation is to ensure that during the kth transformation the off-diagonal
component M
ij,k
and K
ij,k
, entering the ith and jth row and column of M
k
and K
k
, simultane-
ous become zero after the similarity transformation.
x
i
1
1
x
j
_
sin
cos
_
_
cos
sin
_
_

1
_
_
1

_
Fig. 91 Projection of ith and jth column vectors of similarity transformation matrix in the (x
i
, x
j
)-plane.
The transformation matrix is given as
i j
P
k
=
_

_
1 0 0 0 0 0 0
0 1 0 0 0 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 1 0 0 0
0 0 0 1 0 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 1 0 0
0 0 0 0 0 1 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 0 0 0 1
_

_
i
j
(915)
Because we have to specify requirements for both M
ij,k+1
and K
ij,k+1
, we need two free pa-
rameters and in the transformation matrix, where only the angle appears in (9-5). As a
consequence (9-15) is not orthonormal. Actually, the ith and jth column vectors neither have
the length 1 nor are mutual orthogonal, by contrast to the corresponding vectors in (9-5), see
Fig. 9-1. The components of the updated similarity transformation matrix and the transformed
stiffness and mass matrices become
_

li,k+1
=
li,k
+
lj,k
, l = 1, . . . , n

lj,k+1
=
lj,k
+
li,k
, l = 1, . . . , n
(916)
86 Chapter 9 SIMILARITY TRANSFORMATION METHODS
_

_
M
ii,k+1
= M
ii,k
+
2
M
jj,k
+ 2M
ij,k
M
jj,k+1
= M
jj,k
+
2
M
ii,k
+ 2M
ij,k
M
ij,k+1
= M
ii,k
+ M
jj,k
+ M
ij,k
_
1 +
_
M
li,k+1
= M
il,k+1
= M
li,k
+ M
lj,k
, l = i, j
M
lj,k+1
= M
jl,k+1
= M
lj,k
+ M
li,k
, l = i, j
(917)
_

_
K
ii,k+1
= K
ii,k
+
2
K
jj,k
+ 2K
ij,k
K
jj,k+1
= K
jj,k
+
2
K
ii,k
+ 2K
ij,k
K
ij,k+1
= K
ii,k
+ K
jj,k
+ K
ij,k
_
1 +
_
K
li,k+1
= K
il,k+1
= K
li,k
+ K
lj,k
, l = i, j
K
lj,k+1
= K
jl,k+1
= K
lj,k
+ K
li,k
, l = i, j
(918)
The remaining components of
k+1
, M
k+1
and K
k+1
are identical to those of
k
, M
k
and K
k
.
Hence, only the ith and the jth row and columns of K
k
and M
k
are affected by the transforma-
tion.
Next, the parameters and are determined, so the off-diagonal components M
ij,k+1
and
K
ij,k+1
become equal to zero
M
ij,k+1
= M
ii,k
+ M
jj,k
+ M
ij,k
_
1 +
_
= 0
K
ij,k+1
= K
ii,k
+ K
jj,k
+ K
ij,k
_
1 +
_
= 0
_
_
_
(919)
The solution of (9-19) becomes, see Box 9.3
=
1
a
_
1
2

_
1
4
+ ab
_
, =
a
b

a =
K
jj,k
M
ij,k
M
jj,k
K
ij,k
K
ii,k
M
jj,k
M
ii,k
K
jj,k
b =
K
ii,k
M
ij,k
M
ii,k
K
ij,k
K
ii,k
M
jj,k
M
ii,k
K
jj,k
_

_
, if K
ii,k
M
jj,k
= M
ii,k
K
jj,k
=

K
ii,k
M
ij,k
M
ii,k
K
ij,k
K
jj,k
M
ij,k
M
jj,k
K
ij,k
, =
1

, if K
ii,k
M
jj,k
= M
ii,k
K
jj,k
(920)
9.3 General Jacobi Iteration 87
Box 9.3: Proof of equation (9-20)
From (9-19) follows
K
ii,k
+ K
jj,k
M
ii,k
+ M
jj,k
=
K
ij,k
M
ij,k
=
K
jj,k
M
ij,k
M
jj,k
K
ij,k
K
ii,k
M
ij,k
M
ii,k
K
ij,k
(921)
Elimination of in the 1st equation in (9-19) by means of (9-21) provides the following
quadratic equation in
M
ij,k
_
K
jj,k
M
ij,k
M
jj,k
K
ij,k
_

2
M
ij,k
_
K
ii,k
M
jj,k
M
ii,k
K
jj,k
_

M
ij,k
_
K
ii,k
M
ij,k
M
ii,k
K
ij,k
_
= 0 (922)
If K
ii,k
M
jj,k
= M
ii,k
K
jj,k
the coefcient in front of cancels. Then, in combination to
(9-21) the following solutions are obtained for and
=

K
ii,k
M
ij,k
M
ii,k
K
ij,k
K
jj,k
M
ij,k
M
jj,k
K
ij,k
, =
1

(923)
If K
ii,k
M
jj,k
= M
ii,k
K
jj,k
solutions of the quadratic equation for in combination to
(9-21) provides
=
1
a
_
1
2

_
1
4
+ ab
_
, =
a
b
(924)
where a and b are as given in (9-20). Both sign combinations in (9-23) and (9-24) will do.
The transformations are performed in sweeps as for the special Jacobi method. In this case the
criteria for omitting a transformation during the mth sweep may be formulated as

K
2
ij,k
K
ii,k
K
jj,k
+
M
2
ij,k
M
ii,k
M
jj,k
<
m
(925)
where
m
is the omission value in the mth sweep.
The general Jacobi iteration algorithm can be summarized as indicated in Box 9.4.
88 Chapter 9 SIMILARITY TRANSFORMATION METHODS
Box 9.4: General Jacobi iteration algorithm
Let M
0
= M, K
0
= K and
0
= I. Repeat the following items for the sweeps m =
1, 2, . . .
1. Specify omission criteria
m
in the mth sweep.
2. Check, if the components M
ij,k
and K
ij,k
in the ith row and jth column of M
k
K
k
fulll the criteria

K
2
ij,k
K
ii,k
K
jj,k
+
M
2
ij,k
M
ii,k
M
jj,k
<
m
3. If the criteria is fullled, then skip to the next component in the sweep. Else perform
the following calculations
(a.) Calculate the parameters and as given by (9-24), and then the transforma-
tion matrix P
k
as given by (9-15).
(b.) Calculate the components of the updated similarity transformation matrix

k+1
=
k
P
k
, and the transformed mass and stiffness matrices M
k+1
=
P
T
k
M
k
P
k
and K
k+1
= P
T
k
K
k
P
k
from (9-16), (9-17) and (9-18). Notice that k
after the mth sweep is of the magnitude
1
2
(n 1)n m.
After convergence:
k = K

, m= M

=
_

j
1
0 0
0
j
2
0
.
.
.
.
.
.
.
.
.
.
.
.
0 0
j
n
_

_
= m
1
k , = [
(j
1
)

(j
2
)

(j
n
)
] =

1
2
Example 9.2: General Jacobi iteration
Given a generalized eigenvalue problem with the mass and stiffness matrices
M= M
0
=
_

_
0.5 0.5 0
0.5 1 0.5
0 0.5 1
_

_ , K = K
0
=
_

_
2 1 0
1 4 1
0 1 2
_

_ ,
0
=
_

_
1 0 0
0 1 0
0 0 1
_

_ (926)
9.3 General Jacobi Iteration 89
In the 1st sweep the following calculations are performed for (i, j) = (1, 2) :
_

_
_

_
=

2 0.5 0.5 (1)


4 0.5 1 (1)
= 0.7071
=
1
0.7071
= 1.4142
_
NB : K
11,0
M
22,0
= K
22,0
M
11,0
_
P
0
=
_

_
1 1.4142 0
0.7071 1 0
0 0 1
_

_ ,
1
=
0
P
0
=
_

_
1 1.4142 0
0.7071 1 0
0 0 1
_

_
M
1
= P
T
0
M
0
P
0
=
_

_
1.7071 0 0.3536
0 0.5858 0.5
0.3536 0.5 1
_

_ , K
1
= P
T
0
K
0
P
0
=
_

_
2.5858 0 0.7071
0 10.8284 1
0.7071 1 2
_

_
(927)
Next, the calculations are performed for (i, j) = (1, 3) :
_

_
a =
2 0.3536 1 (0.7071)
2.5858 1 1.7071 2
= 1.7071
b =
2.5858 0.3536 1.7071 (0.7071)
2.5858 1 1.7071 2
= 2.5607
_

_
= 0.9664
= 0.6443
P
1
=
_

_
1 0 0.6443
0 1 0
0.9664 0 1
_

_ ,
2
=
1
P
1
=
_

_
1 1.4142 0.6443
0.7071 1 0.4556
0.9664 0 1
_

_
M
2
= P
T
1
M
1
P
1
=
_

_
3.3243 0.4832 0
0.4832 0.5858 0.5
0 0.5 1.2530
_

_ , K
2
= P
T
1
K
1
P
1
=
_

_
3.0869 0.9664 0
0.9664 10.8284 1
0 1 3.9844
_

_
(928)
Finally, to end the 1st sweep the calculations are performed for (i, j) = (2, 3) :
_

_
a =
3.9844 0.5 1.2530 (1)
10.8284 1.2530 0.5858 3.9844
= 0.2889
b =
10.8284 0.5 0.5858 (1)
10.8284 1.2530 0.5858 3.9844
= 0.5341
_

_
= 0.4702
= 0.2543
P
2
=
_

_
1 0 0
0 1 0.2543
0 0.4702 1
_

_ ,
3
=
2
P
2
=
_

_
1 1.1113 1.0039
0.7071 1.2142 0.2012
0.9664 0.4702 1
_

_
M
3
= P
T
2
M
2
P
2
=
_

_
3.3243 0.4832 0.1229
0.4832 0.3926 0
0.1229 0 1.5452
_

_ , K
3
= P
T
2
K
2
P
2
=
_

_
3.0869 0.9664 0.2458
0.9664 12.6498 0
0.2458 0 4.1761
_

_
(929)
At the end of the 2nd and 3rd sweep the following estimates are obtained for the modal matrix and the transformed
mass and stiffness matrices
90 Chapter 9 SIMILARITY TRANSFORMATION METHODS
_

6
=
_

_
0.7494 1.2825 1.0742
0.8195 1.0999 0.2865
1.0376 0.6084 0.9213
_

_
M
6
=
_

_
3.4931 0.0024 0.0000
0.0024 0.3225 0
0.0000 0 1.5517
_

_ , K
6
=
_

_
3.0336 0.0048 0.0000
0.0048 13.029 0
0.0000 0 4.2464
_

9
=
_

_
0.7501 1.2820 1.0742
0.8189 1.1005 0.2865
1.0379 0.6076 0.9213
_

_
M
9
=
_

_
3.4932 0.0000 0.0000
0.0000 0.3225 0
0.0000 0 1.5517
_

_ , K
9
=
_

_
3.0336 0.0000 0.0000
0.0000 13.029 0
0.0000 0 4.2464
_

_
(930)
Presuming that the process has converged after the 3rd sweep the eigenvalues and normalized eigenmodes are next
retrieved by the following calculations, cf. Box. 9.4
_

_
m= M
9
=
_

_
3.4932 0.0000 0.0000
0.0000 0.3225 0
0.0000 0 1.5517
_

_ , m

1
2
=
_

_
0.5350 0 0
0 1.7608 0
0 0 0.8028
_

_
=
_

1
0 0
0
3
0
0 0
2
_

_ = M
1
9
K
9
=
_

_
0.8684 0.0000 0.0000
0.0000 40.395 0.0000
0.0000 0.0000 2.7365
_

_
=
_

(1)

(3)

(2)

=
9
m

1
2
=
_

_
0.4013 2.2573 0.8623
0.4381 1.9378 0.2300
0.5553 1.0698 0.7396
_

_
(931)
The reader should verify that the solution matrices within the indicated accuracy fulll
T
M =
I and
T
K = .
9.4 Householder Reduction
The Householder reduction method operates on the standard eigenvalue problem(SEVP). Hence,
a preliminary similarity transformation of the generalized eigenvalue problem (GEVP) to SEVP
form must be performed as explained in Section 6.5.
The Householder method reduces a symmetric matrix K
1
to three diagonal form by totally n2
consecutive similarity transformation. After the (n 2)th transformation the stiffness matrix
has the form
9.4 Householder Reduction 91
K
n1
=
_

1

1
0 0 0

1

2

2
0 0
0
2

3
0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0
n1

n1
0 0 0
n1

n
_

_
(932)
During the reduction process the numbers
1
, . . . ,
n
and
1
, . . . ,
n1
, as well as the sequence
of transformation matrices P
1
, . . . , P
n2
are determined. Since all transformation matrices be-
come orthonormal all transformed mass matrices remain unit matrices.
After completing the Householder reduction process the standard eigenvalue problem with the
three diagonal matrix K
n1
must be solved by some kind of iteration method, which preserves
the three diagonal structure of the reduced system matrix, and benets from this reduced struc-
ture in order to improve the calculation time. As mentioned in Section 9.2 this requirement
rules out the special Jacobi iteration method. Since, the inverse of a three diagonal matrix is
full, inverse vector iteration with Gram-Schmidt orthogonalization must also be avoided. Of the
methods discussed hitherto only forward vector iteration with Gram-Schmidt orthogonalization
meets the requirement. As wee shall see the requirements are also met by the QR iteration
method to be discussed in Section 9.5. Finally, an initial Householder reduction is favorable in
relation to characteristic polynomial iteration methods discussed in Section 10.4.
The transformation matrix during the kth similarity transformation is given as follows
P
k
= I 2w
k
w
T
k
, |w
k
| = 1 (933)
w
k
denotes a unit column vector to be determined below. Hence, w
T
k
w
k
= 1.
Obviously, P
k
is symmetric, i.e. P
k
= P
T
k
. Moreover, P
k
is orthonormal as seen from the
following derivation
P
k
P
T
k
=
_
I 2w
k
w
T
k
__
I 2w
k
w
T
k
_
=
I 2w
k
w
T
k
2w
k
w
T
k
+ 4
_
w
T
k
w
k
_
w
k
w
T
k
= I
P
T
k
= P
1
k
(934)
As mentioned, this means that the mass matrix remains an identity matrix during the House-
holder similarity transformations, because this is ensured in the initial transformation from a
GEVP to a SEVP, as explained in the remarks subsequent to (9-4).
92 Chapter 9 SIMILARITY TRANSFORMATION METHODS
l
x
P
k
x
w
T
k
x
w
T
k
x
2w
k
(w
T
k
x)
w
k
Fig. 92 Geometrical interpretation of the effect of the Householder transformation matrix.
Consider a given column vector x. Then,
P
k
x =
_
I 2w
k
w
T
k
_
x = x 2
_
w
T
k
x
_
w
k
(935)
Notice that w
T
k
x is a scalar. The transformed vector, P
k
x, may be interpreted as a reection of
x in the line l, which is orthogonal to the vector w
k
and placed in the plane spanned by x and
w
k
as illustrated in Fig. 9-2.
At the kth transformation the vector w
k
has the form
w
k
=
_

_
0
.
.
.
0
w
k+1
.
.
.
w
n
_

_
=
_
_
0
w
k
_
_
_
k rows
_
n k rows
(936)
where
w
T
k
w
k
= w
T
k
w
k
= w
2
k+1
+ + w
2
n
= 1 (937)
Then, the transformation matrix may be written on the following matrix form
k
columns
..
n k
columns
..
P
k
=
_
_

I
nk
0
0

P
k
_
_
_
k rows
_
n k rows
,

P
k
=

I
k
2 w
k
w
T
k
(938)
where

I
k
denotes a unit matrix of dimension (n k) (n k).
9.4 Householder Reduction 93
In order to determine the sub-vector w
k
dening the transformation matrix, the stiffness matrix
before the kth similarity transformation is considered, at which stage the stiffness matrix has
been reduced to three diagonal form down to and including the (k 1)th row and column.
Hence, the stiffness matrix has the structure
n k
columns
k
..
K
k
=
_

1

1
0 0 0 0

1

2

2
0 0 0
0
2

3
0 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0
k1

k1
0
0 0 0
k1
K
kk
k
k
0 0 0 0 k
T
k

K
k
_

_
k
_
n k rows
(939)
k
k
is a row vector of the dimension (n k), and

K
k
is a symmetric matrix of the dimension
(n k) (n k) dened as
k
k
= [K
k k+1
K
k k+2
K
kn
] (940)

K
k
=
_

_
K
k+1 k+1
K
k+1 k+2
K
k+1 n
K
k+2 k+1
K
k+2 k+2
K
k+2 n
.
.
.
.
.
.
.
.
.
.
.
.
K
nk+1
K
nk+2
K
nn
_

_
(941)
Then, with the transformation matrix given by (9-38) the stiffness matrix after the kth transfor-
mation becomes
n k
columns
k
..
K
k+1
= P
T
k
K
k
P
k
=
_

1

1
0 0 0 0

1

2

2
0 0 0
0
2

3
0 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0
k1

k1
0
0 0 0
k1
K
kk
k
k

P
k
0 0 0 0

P
T
k
k
T
k

P
T
k

K
k

P
k
_

_
k
_
n k rows
(942)
94 Chapter 9 SIMILARITY TRANSFORMATION METHODS
where

k
= K
kk
(943)
k
k

P
k
= k
k
_

I
k
2 w
k
w
T
k
_
= k
k
2
_
k
k
w
k
_
w
T
k
(944)

P
T
k

K
k

P
k
=

K
k
2 w
k
w
T
k

K
k
2

K
k
w
k
w
T
k
+ 4
_
w
T
k

K
k
w
k
_
w
k
w
T
k
(945)
Since, k
k
is a row vector and w
k
is a column vector, k
k
w
k
is a scalar. Similarly, w
T
k

K
k
w
k
becomes a scalar.
If the kth row and column in (9-40) should be on a three-diagonal form, it is required that

P
T
k
k
T
k
=
k
e
k
, e
k
=
_

_
1
0
.
.
.
0
_

_
(946)
where e
k
is a unit column vector of dimension (nk). The transformation matrix is symmetric,
so

P
T
k
k
T
k
=

P
k
k
T
k
. Moreover,

P
k
k
T
k
is a reection of the vector k
T
k
in the line l as depicted in
Fig. 9-2, and hence has the length |k
T
k
|. Hence, it follows that
k
should be selected as

k
= |k
k
| (947)
Then it follows from (9-44) that
k
T
k
2
_
k
k
w
k
_
w
k
= |k
k
| e
k

w
k
= a
_
k
T
k
|k
k
| e
k
_
(948)
where it is noticed that 2
_
k
T
k
w
k
_
is a scalar, which may be absorbed in the coefcient a. a is
determined so the vector w
k
is of unit length.
9.4 Householder Reduction 95
Box 9.5: Householder reduction algorithm
Transform the GEVP to a SEVP by the similarity transformation matrix P =
_
S
1
_
T
,
where S is a solution to M = SS
T
, and dene the initial updated transformation and
stiffness matrices as
K
1
= S
1
K
_
S
1
_
T
,
1
=
_
S
1
_
T
Next, repeat the following items for k = 1, . . . , n 2
1. Calculate the similarity transformation matrix P
k
at the kth similarity transformation
by (9-38), (9-50).
2. Calculate updated transformation and stiffness matrices from (9-42), (9-52)

k+1
=
k
P
k
, K
k+1
= P
T
k
K
k
P
k
After completion of the reduction process the following standard eigenvalue problem is
solved by some iteration method
K
n1
V = V
is the diagonal eigenvalue matrix of the original GEVP, and V is the orthonormal
eigenvector matrix of the three diagonal matrix K
n1
. Then, the eigenmodes normalized
to unit modal mass of the original GEVP are retrieved from the matrix product
=
n1
V
Both sign combinations in (9-47) and (9-48) will do. However, in order to prevent numerical
problems of the algorithm in the case, where k
k
K
k k+1
e
k
the following choice of sign in the
solutions for
k
and w
k
should be preferred

k
= sign(K
k k+1
_
|k
k
| (949)
w
k
=
k
T
k
+ sign(K
k k+1
)|k
k
| e
k

k
T
k
+ sign(K
k k+1
)|k
k
| e
k

(950)
The updated transformation matrix before the kth transformation is partitioned as follows
k
columns
..
n k
columns
..

k
=
_
_

11

12

21

22
_
_
_
k rows
_
n k rows
(951)
96 Chapter 9 SIMILARITY TRANSFORMATION METHODS
With the transformation matrix as given by (9-38) the transformation matrix after the kth trans-
formation becomes
k
columns
..
n k
columns
..

k+1
=
_
_

11

12

P
k

21

22

P
k
_
_
_
k rows
_
n k rows
(952)
Finally, it should be noticed that alternative algorithms for reduction to three diagonal form have
been indicated by Givens and Lanczos.
Example 9.3: Householder reduction
Given a generalized eigenvalue problem with the mass and stiffness matrices given by (8-74). The similarity
transformation matrix transforming from a GEVP to a SEVP becomes
S = M
1
2
=
_

2 0 0 0
0

2 0 0
0 0 1 0
0 0 0 1
_

_
S
1
=
_

2
2
0 0 0
0

2
2
0 0
0 0 1 0
0 0 0 1
_

_
(953)
Then, the stiffness matrix and updated transformation matrix before the 1st Householder similarity transformation
becomes, cf. (6-112), (6-113)
K
1
= S
1
K
_
S
1
_
T
=
_

2
2
0 0 0
0

2
2
0 0
0 0 1 0
0 0 0 1
_

_
_

_
5 4 1 0
4 6 4 1
1 4 6 4
0 1 4 5
_

_
_

2
2
0 0 0
0

2
2
0 0
0 0 1 0
0 0 0 1
_

_

K
1
=
_

_
5
2
2

2
2
0
2 3 2

2
2
2
2
2

2 6 4
0

2
2
4 5
_

1
=
_

2
2
0 0 0
0

2
2
0 0
0 0 1 0
0 0 0 1
_

_
_

_
(954)
At the Householder transformation k = 1 one has
_

1
=
5
2
k
1
=
_
2

2
2
0
_
, |k
1
| =
3

2
2
(955)
Then, cf. (9-38), (9-49) and (9-50)
9.4 Householder Reduction 97
_

1
= sign(2)
3

2
2
=
3

2
2
= 2.1213
w
1
= a
_
_
_
_

_
2

2
2
0
_

_ + sign(2)
3

2
2
_

_
1
0
0
_

_
_
_
_ = a
_

_
2
3

2
2
2
2
0
_

_ w
1
=
_

_
0.9856
0.1691
0
_

P
1
=

I
1
2 w
1
w
T
1
=
_

_
0.9828 0.3333 0
0.3333 0.9428 0
0 0 1
_

_
(956)
The stiffness matrix and updated transformation matrix after the Householder transmission k = 1 becomes
_

_
K
2
= P
T
1
K
1
P
1
=
_

_
2.5000 2.1213 0 0
2.1213 5.1111 3.1427 2.0000
0 3.1427 3.8889 3.5355
0 2.0000 3.5355 5.000
_

2
=
1
P
1
=
_

_
0.7071 0 0 0
0 0.6667 0.2357 0
0 0.3333 0.9428 0
0 0 0 1
_

_
(957)
where the transformed matrices are calculated by means of (9-42) and (9-52), respectively.
At the Householder transformation k = 2 the following calculations are performed
_
_
_

2
= 5.1111
k
2
= [3.1427 2.0000] , |k
2
| = 3.7251
(958)
_

2
= sign(3.1427) 3.7251 = 3.7251
w
2
= a
__
3.1427
2.0000
_
+ sign(3.1427) 3.7251
_
1
0
__
= a
_
6.8678
2.0000
_
w
2
=
_
0.9601
0.2796
_

P
2
=

I
2
2 w
2
w
T
2
=
_
0.8436 0.5369
0.5369 0.8436
_
(959)
The stiffness matrix and updated transformation matrix after the Householder transmission k = 2 becomes
_

_
K
3
= P
T
2
K
2
P
2
=
_

_
2.5000 2.1213 0 0
2.1213 5.1111 3.7251 0.0000
0 3.7251 7.4120 2.0005
0 0.0000 2.0005 1.4769
_

3
=
2
P
2
=
_

_
0.7071 0 0 0
0 0.6667 0.1988 0.1265
0 0.3333 0.7954 0.5062
0 0 0.5369 0.8436
_

_
(960)
98 Chapter 9 SIMILARITY TRANSFORMATION METHODS
The reader should verify that the solution matrices within the indicated accuracy fulll
T
3
M
3
= I and

T
3
K
3
= K
3
.
9.5 QR Iteration
As is the case for the Householder reduction method QR-iteration operates on the standard
eigenvalue problem, so an initial similarity transformation of the GEVP to a SEVP is presumed.
Let K
1
= S
1
K
_
S
1
_
T
denote the stiffness matrix after the initial similarity transformation,
where S is a solution to M= SS
T
, cf. (6-109), (6-112).
QRiteration is based on the following property that any non-singular matrix Kcan be factorized
on the following form
K = QR (961)
where Q is an orthonormal matrix, and R is an upper triangular matrix. Hence, Q and R have
the form
Q =
_
q
1
q
2
q
n

, q
T
k
q
j
=
kj
(962)
R =
_

_
r
11
r
12
r
13
r
14
r
1n
0 r
22
r
23
r
24
r
2n
0 0 r
33
r
34
r
3n
0 0 0 r
44
r
4n
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 0 r
nn
_

_
(963)
where
ij
denotes Kroneckers delta. It should be noticed that the factorization (9-61) holds
even for non-symmetric matrices. The orthonormality of Q, which implies that Q
1
= Q
T
, is
essential to the method.
Based on K
1
a sequence of transformed stiffness matrices K
k
are next constructed with the QR
factors Q
k
and R
k
according to the algorithm
K
k
= Q
k
R
k
K
k+1
= Q
T
k
K
k
Q
k
= Q
T
k
Q
k
R
k
Q
k
= R
k
Q
k
_
_
_
(964)
Hence, K
k+1
is obtained by a similarity transformation with the transformation matrix Q
k
. The
transformation is reduced to a evaluation of R
k
Q
k
due to the orthonormality property of Q
k
.
For the same reason all transformed mass matrices remain unit matrices.
9.5 QR Iteration 99
Box 9.6: Proof of equation (9-61)
Let k
1
k
2
, . . . , k
n
denote the column vectors of the matrix K, i.e.
K =
_
k
1
k
2
k
n

(965)
Since Kis non-singular, k
1
k
2
, . . . , k
n
are linearly independent, and hence form a vector
basis. A new orthonormal vector basis q
1
q
2
q
n
linearly dependent on k
1
k
2
, . . . , k
n
may then be constructed by a process, which resembles the Gram-Schmidt orthogonaliza-
tion described in Section 8.5. (9-61) is identical to the following relations
_

_
k
1
= r
11
q
1
k
2
= r
12
q
1
+ r
22
q
2
.
.
.
k
j
= r
1j
q
1
+ r
2j
q
2
+ + r
jj
q
j
=
j

k=1
r
kj
q
k
.
.
.
k
n
=
n

k=1
r
kn
q
k
(966)
(9-66) is solved sequentially downwards using the properties of orthonormality of q
j
.
From the 1st equation follows by scalar multiplication with q
1
r
11
= |k
1
| q
1
=
1
r
11
k
1
(967)
Now, q
1
and r
11
are known. Scalar multiplication of the 2nd equation with q
1
, and use of
the orthogonality property q
T
1
q
2
= 0 provides
r
12
= q
T
1
k
2
r
22
= |k
2
r
12
q
1
| q
2
=
1
r
22
_
k
2
r
12
q
1
_
(968)
At the determination of q
j
, 1 < j n, the mutually ortonormal basis vectors
q
1
, q
2
, . . . , q
j1
have already been determined. Scalar multiplication of the jth equation
with q
k
, k = 1, 2, . . . , j 1, and use of the orthogonality property q
T
k
q
j
= 0 provides
r
kj
= q
T
k
k
j
r
jj
=

k
j

j1

k=1
r
kj
q
k

q
j
=
1
r
jj
_
k
j

j1

k=1
r
kj
q
k
_
(969)
Hence a solution fullling all requirements has been obtained for the components r
kj
of
Rand the column vectors q
j
of Q, which proves the validity of the factorization (9-61).
100 Chapter 9 SIMILARITY TRANSFORMATION METHODS
Box 9.7: QR iteration algorithm
Transform the GEVP to a SEVP by the similarity transformation matrix P =
_
S
1
_
T
,
where S is a solution to M = SS
T
, and dene the initial updated transformation and
stiffness matrices as
K
1
= S
1
K
_
S
1
_
T
,
1
=
_
S
1
_
T
Repeat the following items for k = 1, 2, . . .
1. Perform a QR factorization of the stiffness matrix before the kth similarity transfor-
mation
K
k
= Q
k
R
k
2. Calculate updated transformation and stiffness matrices by a similarity transforma-
tion with the orthonormal transformation matrix Q
k

k+1
=
k
Q
k
, K
k+1
= Q
T
k
K
k
Q
k
= R
k
Q
k
After convergence:
=
_

n
0 0
0
n1
0
.
.
.
.
.
.
.
.
.
.
.
.
0 0
1
_

_
= K

= R

, = [
(n)

(n1)

(1)
] =

Now, it can be proved that


K

= R

= =
_

n
0 0
0
n1
0
.
.
.
.
.
.
.
.
.
.
.
.
0 0
1
_

_
,

= =
_

(n)

(n1)

(1)

(970)
Q
k
converge to a unit matrix, as a consequence of K

= R

.
As seen, at convergence the eigen-pairs are order in descending order of the eigenvalues. More-
over, the algorithm converges faster to the lowest eigenmode than to the largest, as is the case
for subspace iteration as describes in Section 10.3, a method which has some resemblance to
QR iteration. The rate of convergence seems to be rather comparable to that of subspace iter-
ation. These properties have been illustrated in Example 9.4 below. The proof of convergence
and the associated determination of the convergence rate is rather tedious and involved, and will
be omitted here.
9.5 QR Iteration 101
The general QR iteration algorithm can be summarized as indicated in Box 9.7.
Usually, the QR algorithm becomes computational expensive when applied to large full ma-
trices, due to the time consuming orthogonalization process involved in the QR factorization.
However, if K
k
is on the three diagonal form (9-32), it can be shown that matrices R
k
and Q
k
have the form
R
k
=
_

_
r
11
r
12
r
13
0 0 0
0 r
22
r
23
r
24
0 0
0 0 r
33
r
34
r
35
0
0 0 0 r
44
r
45
0
0 0 0 0 r
55
0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 0 0 r
nn
_

_
(971)
Q
k
=
_

_
q
11
q
12
q
13
q
14
q
15
q
1n
q
21
q
22
q
23
q
24
q
25
q
2n
0 q
32
q
33
q
34
q
35
q
3n
0 0 q
42
q
44
q
45
q
4n
0 0 0 q
54
q
55
q
5n
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 0 0 q
nn
_

_
(972)
Hence, R
k
becomes an upper three diagonal matrix with only 3n 3 nontrivial coefcients r
jk
versus
1
2
n(n + 1) for a full matrix K
k
. Similarly, Q
k
contains zeros below the rst lower
diagonal. As a consequence of the indicated structure of R
k
and Q
k
, the matrix product
K
k+1
= R
k
Q
k
will again be a symmetric three diagonal matrix. Hence, this property is pre-
served for the transformed stiffness matrices during the iteration process. This motivates the
application of QR iteration in combination to an initial Householder reduction of the initial
generalized eigenvalue problem to three diagonal form, which is known as the HOQR method.
Example 9.4: HOQR iteration
QR iteration is performed on the stiffness matrix of Example 9.3, which has been reduced to three diagonal form
by Householder reduction. Hence, the initial stiffness matrix and updated transformation matrix reads, cf. (9-60)
K
1
=
_

_
2.5000 2.1213 0 0
2.1213 5.1111 3.7251 0.0000
0 3.7251 7.4120 2.0005
0 0.0000 2.0005 1.4769
_

_
,
1
=
_

_
0.7071 0 0 0
0 0.6667 0.1988 0.1265
0 0.3333 0.7954 0.5062
0 0 0.5369 0.8436
_

_
(973)
At the determination of q
1
and r
11
in the 1st QR iteration the following calculations are performed, cf. (9-67)
102 Chapter 9 SIMILARITY TRANSFORMATION METHODS
_

_
k
1
=
_

_
2.5000
2.1312
0
0
_

_
, r
11
=

_
2.5000
2.1312
0
0
_

= 3.2787
q
1
=
1
3.2787
_

_
2.5000
2.1312
0
0
_

_
=
_

_
0.7625
0.6470
0
0
_

_
(974)
q
2
and r
12
, r
22
are determined from the following calculations, cf. (9-68)
_

_
k
2
=
_

_
2.1213
5.1111
3.7251
0
_

_
, r
12
=
_

_
0.7625
0.6470
0
0
_

_
T
_

_
2.1213
5.1111
3.7251
0
_

_
= 4.9244
r
22
=

_
2.1213
5.1111
3.7251
0
_

_
4.9244
_

_
0.7625
0.6470
0
0
_

= 4.5001
q
2
=
1
4.5001
_
_
_
_
_
_

_
2.1213
5.1111
3.7251
0
_

_
4.9244
_

_
0.7625
0.6470
0
0
_

_
_
_
_
_
_
=
_

_
0.3630
0.4278
0.8278
0
_

_
(975)
q
3
and r
13
, r
23
, r
33
are determined from the following calculations, cf. (9-69)
_

_
k
3
=
_

_
0
3.7251
7.4120
2.0005
_

_
, r
13
= q
T
1
k
3
= 2.4101 , r
23
= q
T
2
k
3
= 7.7292
r
33
=

k
3
+ 2.4101q
1
+ 7.7292q
2

= 2.6959
q
3
=
1
2.6959
_
k
3
+ 2.4101q
1
+ 7.7292q
2
_
=
_

_
0.3590
0.4231
0.3761
0.7421
_

_
(976)
9.5 QR Iteration 103
Finally, q
4
and r
14
, r
24
, r
34
, r
44
are determined from the following calculations, cf. (9-69)
_

_
k
4
=
_

_
0
0
2.0005
1.4769
_

_
, r
14
= q
T
1
k
4
= 0 , r
24
= q
T
2
k
4
= 1.6560
r
34
= q
T
3
k
4
= 1.8483 , r
44
=

k
4
0q
1
+ 1.6560q
2
1.8483q
3

= 0.1571
q
4
=
1
0.1571
_
k
4
0q
1
+ 1.6560q
2
1.8483q
3
_
=
_

_
0.3974
0.4684
0.4163
0.6703
_

_
(977)
Then, at the end of the 1st iteration the following matrices are obtained
_

_
Q
1
=
_

_
0.7625 0.3630 0.3590 0.3974
0.6470 0.4278 0.4231 0.4684
0 0.8278 0.3761 0.4163
0 0 0.7421 0.6703
_

_
R
1
=
_

_
3.2787 4.9244 2.4101 0
0 4.5001 7.7292 1.6560
0 0 2.6959 1.8483
0 0 0 0.1571
_

_
_

2
=
1
Q
1
=
_

_
0.5392 0.2567 0.2539 0.2810
0.4313 0.1206 0.2629 0.4799
0.2157 0.8010 0.2175 0.5143
0 0.4444 0.8280 0.3420
_

_
K
2
= R
1
Q
1
=
_

_
5.6860 2.9115 0 0
2.9115 8.3232 2.2317 0
0 2.2317 2.3854 0.1166
0 0 0.1166 0.1053
_

_
_

_
(978)
As seen the matrices R
1
and Q
1
have the structure (9-71) and (9-72). Additionally, K
2
has the same three diagonal
structure as K
1
. The corresponding matrices after the 2nd and 3rd iteration become
104 Chapter 9 SIMILARITY TRANSFORMATION METHODS
_

_
Q
2
=
_

_
0.8901 0.4279 0.1566 0.0117
0.4558 0.8356 0.3058 0.0229
0 0.3445 0.9362 0.0702
0 0 0.0748 0.9972
_

_
R
2
=
_

_
6.3881 6.3850 1.0171 0
0 6.4780 2.6866 0.0402
0 0 1.5595 0.1170
0 0 0 0.0968
_

_
_

3
=
2
Q
2
=
_

_
0.3629 0.3577 0.3795 0.3103
0.4389 0.1744 0.1796 0.4947
0.5570 0.5021 0.4533 0.4818
0.2026 0.6566 0.6648 0.2931
_

_
K
3
= R
2
Q
2
=
_

_
8.5962 2.9525 0 0
2.9525 6.3386 0.5372 0
0 0.5372 1.4687 0.0072
0 0 0.0072 0.0966
_

_
_

_
(979)
_

_
Q
3
=
_

_
0.9458 0.3230 0.0345 0.0002
0.3248 0.9404 0.1003 0.0005
0 0.1061 0.9943 0.0051
0 0 0.0051 1.0000
_

_
R
3
=
_

_
9.0891 4.8514 0.1745 0
0 5.0643 0.6610 0.0008
0 0 1.4065 0.0077
0 0 0 0.0965
_

_
_

4
=
3
Q
3
=
_

_
0.2270 0.4134 0.4242 0.3125
0.3584 0.3248 0.1434 0.4954
0.6899 0.2442 0.4844 0.4793
0.4049 0.6226 0.6036 0.2900
_

_
K
4
= R
3
Q
3
=
_

_
10.172 1.6451 0 0
1.6451 4.8328 0.1492 0
0 0.1492 1.3986 0.0005
0 0 0.0005 0.0965
_

_
_

_
(980)
As seen fromR
3
and K
4
the terms in the main diagonal have already after the 3rd iteration grouped in descending
magnitude, corresponding to the ordering of the eigenvalues at convergence indicated in Box 9.7. Moreover, for
both matrices convergence to the lowest eigenvalue
1
= 0.0965 has occurred, illustrating the fact that the QR
algorithm converge faster to the lowest eigenmode than to the highest.
9.5 QR Iteration 105
The matrices after the 14th iteration become
_

_
Q
14
=
_

_
1.0000 0.0000 0.0000 0.0000
0.0000 1.0000 0.0000 0.0000
0 0.0000 1.0000 0.0000
0 0 0.0051 1.0000
_

_
R
14
=
_

_
10.638 0.0003 0.0000 0
0.0000 4.3735 0.0000 0.0008
0 0 1.3915 0.0077
0 0 0 0.0965
_

_
_

15
=
14
Q
14
=
_

_
0.1076 0.4387 0.4453 0.3126
0.2556 0.4167 0.1244 0.4955
0.7283 0.0232 0.4894 0.4791
0.5620 0.5170 0.5770 0.2898
_

_
K
15
= R
14
Q
14
=
_

_
10.638 0.0001 0 0
0.0001 4.3735 0.0000 0
0 0.0000 1.3915 0.0000
0 0 0.0000 0.0965
_

_
_

_
(981)
Presuming that convergence has occurred after the 14th iteration the following solutions are obtained for the eigen-
values and eigenmodes of the original general eigenvalue problem
=
_

4
0 0 0
0
3
0 0
0 0
2
0
0 0 0
1
_

_
= K
15
=
_

_
10.638 0 0 0
0 4.3735 0 0
0 0 1.3915 0
0 0 0 0.0965
_

_
=
_

(4)

(3)

(2)

(1)

=
15
=
_

_
0.1076 0.4387 0.4453 0.3126
0.2556 0.4167 0.1244 0.4955
0.7283 0.0232 0.4894 0.4791
0.5620 0.5170 0.5770 0.2898
_

_
_

_
(982)
The reader should verify that the solution matrices within the indicated accuracy fulll
T
M = I and
T
K =
, where Mand Kare the mass and stiffness matrices given by (8-74). (9-82) agrees with the results (8-75), (8-
79), (8-81) and (8-82) in Example 8.6.
106 Chapter 9 SIMILARITY TRANSFORMATION METHODS
9.6 Exercises
9.1 Given a symmetric K.
(a.) Write a MATLAB program, which performs special Jacobi iteration.
9.2 Given the symmetric matrices Mand K.
(a.) Write a MATLAB program, which performs general Jacobi iteration.
9.3 Given the following mass- and stiffness matrices
M=
_

_
2 0 0
0 2 1
0 1 1
_

_
, K =
_

_
6 1 0
1 4 1
0 1 2
_

_
(a.) Perform an initial transformation to a special eigenvalue problem, and calculate the
eigenvalues and eigenvectors by means of standard Jacobi iteration.
(b.) Calculate the eigenvalues and normalized eigenvectors by means of general Jacobi
iteration operating on the original general eigenvalue problem.
9.4 Given the symmetric matrices M and K of dimension n 3.
(a.) Write a MATLAB program, which performs a Householder reduction to three diago-
nal form.
9.5 Given the symmetric matrices Mand K.
(a.) Write a MATLAB program, which performs QR iteration.
9.6 Consider the mass- and stiffness matrices dened in Exercise 9.3. after the transformation
to the special eigenvalue problem.
(a.) Calculate the eigenvalues and normalized eigenvectors by means of QR iteration.
CHAPTER 10
SOLUTION OF LARGE EIGENVALUE
PROBLEMS
10.1 Introduction
In civil engineering large numerical models with n = 10
4
10
6
degrees of freedom have be-
come common practise along with the development of computer technology. However, most
natural and man made loads such as wind, waves, earthquakes and trafc have spectral contents
in the low frequency range. As a consequence only a relatively small number n
1
n of the
lowest structural modes will contribute to the global structural dynamic response. In this chap-
ter methods will be discussed, which have been devised with this specic fact in mind.
Sections 10.2 and 10.3 deals with simultaneous inverse vector iteration and socalled subspace
iteration, respectively. In both cases a sequence of subspaces are dened, each of which are
spanned by a specic system of basis vectors. The idea is that these subspaces at the end of the
iteration process contains the n
1
lowest eigenmodes
(1)
,
(2)
, . . . ,
(n
1
)
of the general eigen-
value problem (6-5). These eigenvalue problems may be assembled on the following matrix
form, cf. (6-10), (6-11), (6-12)
K[
(1)

(2)

(n
1
)
] = M[
(1)

(2)

(n
1
)
]
_

1
0 0
0
2
0
.
.
.
.
.
.
.
.
.
.
.
.
0 0
n
1
_

_

K = M (101)
=
_

1
0 0
0
2
0
.
.
.
.
.
.
.
.
.
.
.
.
0 0
n
1
_

_
(102)
By contrast to the formulation in Chapter 6 the modal matrix is no longer quadratic, but has
the dimension n n
1
, dened as
107
108 Chapter 10 SOLUTION OF LARGE EIGENVALUE PROBLEMS
=
_

(1)

(2)

(n
1
)

(103)
V
0
V

(1)
0

(2)
0

(1)

=
(1)

(2)

=
(2)
Fig. 101 Principle of subspace iteration.
The principle of iterating through a sequence of subspaces has been illustrated in Fig. 10-1. V
0
denotes a start subspace, which is spanned by the start basis
0
=
_

(1)
0

(2)
0
_
. The iteration
process passes through a sequence of subspaces V
1
, V
2
, . . ., where V
k
is spanned by the basis

k
=
_

(1)
k

(2)
k
_
. At convergence,

=
_

(1)


(2)

_
=
_

(1)

(2)

is spanning the limiting


subspace V

containing the eigenmodes searched for.


Simultaneous inverse inverse vector iteration is a generalization of the inverse vector iteration
and inverse vector iteration with deation described in Sections 8.2 and 8.5. The start vector
basis converges towards a basis made up of the wanted eigenmodes as shown in Fig. 10-1.
The subspace iteration method and socalled subspace iteration described in Section 10.2 is in
principle a sequence of Rayleigh-Ritz analyses, where the Ritz base vectors are forced to con-
verge to each of the eigenmodes. As a consequence, if the start basis contains the n
1
eigenmodes
the subspace iteration converge in a single step as described in Section 7.2, which is generally
not the case for simultaneous inverse vector iteration. Being based on a convergence of a se-
quence of vector bases both methods are in fact subspace iteration methods, although this name
has been coined solely for the latter method. A more informative name for this method would
probably be Rayleigh-Ritz iteration.
Section 10.4 deals with characteristic polynomial iteration methods, which operates on the
characteristic equation (6-6). These methods form an alternative to inverse or forward vector
iteration with deation in case some specic eigenmode different from the smallest or largest
is searched for. To be numerical effective these methods require that the generalized eigenvalue
10.2 Simultaneous Inverse Vector Iteration 109
problem has been reduced to a standard eigenvalue problem on three diagonal form, such as the
Householder reduction described in Section 9.4. Polynomial methods may be based either on
the numerical iteration of the characteristic polynomial directly, or based on a Sturm sequence
iteration. Even in the rst mentioned case a Sturm sequence check should be performed after
the calculation to verify that the calculated n
1
eigenmodes are indeed the lowest.
It should be noticed that some problems in structural dynamics, such as acoustic transmission
and noise emission, are governed by high frequency structural response. Additional to the nu-
merical problems in calculating these modes, lack of accuracy of the underlying mechanical
models in the high-frequency range adds to the problems in using modal analysis in such high
frequency cases.
10.2 Simultaneous Inverse Vector Iteration
Let
0
=
_

(1)
0

(2)
0

(n
1
)
0
_
denote n
1
arbitrary linearly independent vectors, which span
an n
1
dimensional start subspace. Next, the algorithm for simultaneous inverse vector iteration
takes place according to the algorithm

k+1
= A
k
, k = 0, 1, . . . (104)
where A = K
1
M, cf. (8-4). (10-4) is identical to the inverse vector iteration algorithm de-
scribed by (8-4). The only difference is that now n
1
vectors are simultaneous iterated.
At convergence the iterated base vectors obtained from (10-4) will span an n
1
-dimensional sub-
space containing the n
1
lowest eigenmodes. However, due to the inherent properties of the
inverse vector iteration algorithm all the iterated base vectors tend to become mutually parallel,
and parallel to the lowest eigenmode
(1)
. Hence, the vector basis becomes more and more ill
conditioned. For the case shown on Fig. 10-1 this means that the subspace V
k
will converge to
the limit plane V

, but the iterated base vectors


(1)
k
and
(2)
k
become more and more parallel.
In order to prevent this the method is combined with a Gram-Schmidt orthogonalization pro-
cedure. Similar to the QR factorization procedure described in Box 9.6 the iterated basis

k+1
can be written on the following factorized form

k+1
=
k+1
R
k+1
(105)
where
k+1
is an M-orthonormal basis in the iterated subspace, and R
k+1
is an upper triangular
matrix. Hence,
k+1
and R
k+1
have the properties

k+1
=
_

(1)
k+1

(2)
k+1

(n
1
)
k+1
_
,
(i) T
k+1
M
(j)
k+1
=
ij
(106)
110 Chapter 10 SOLUTION OF LARGE EIGENVALUE PROBLEMS
R
k+1
=
_

_
r
11
r
12
r
13
r
1n
1
0 r
22
r
23
r
2n
1
0 0 r
33
r
3n
1
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 r
n
1
n
1
_

_
(107)
The M-orthonormal base vectors
k+1
=
_

(1)
k+1

(2)
k+1

(n
1
)
k+1
_
spanning the iterated subspace
V
k+1
, as well as the components of the triangular matrix R
k+1
, are determined sequentially in
much the same way as the determination of the matrices Q and R in the QR factorization
described by (9-66)-(9-69). At rst it is noticed that (10-5) is identical to the following relations
_

(1)
k+1
= r
11

(1)
k+1

(2)
k+1
= r
12

(1)
k+1
+ r
22

(2)
k+1
.
.
.

(j)
k+1
= r
1j

(1)
k+1
+ r
2j

(2)
k+1
+ + r
jj

(j)
k+1
=
j

i=1
r
ij

(i)
k+1
.
.
.

(n
1
)
k+1
=
n
1

i=1
r
in
1

(i)
k+1
(108)
(10-8) is solved sequentially downwards using the M-orthonormality of the already determined
base vectors
(j)
k+1
. The details of the derivation has been given in Box 10.1.
After convergence the eigenvalues are obtained from the Rayleigh quotients evaluated with the
calculated eigenvectors, cf. (7-25). Since each of the n
1
eigenmodes have been normalized to
unit modal mass the quotients become

j
=
(j) T
K
(j)
, j = 1, . . . , n
1
(109)
The Rayleigh quotients in (10-9) may be assembled in the following matrix equation
=
T
K (1010)
where
=
_

1
0 0
0
2
0
.
.
.
.
.
.
.
.
.
.
.
.
0 0
n
1
_

_
, =
_

(1)

(2)

(n
1
)

(1011)
10.2 Simultaneous Inverse Vector Iteration 111
It can be proved that the upper triangular matrix R
k+1
converges towards the diagonal matrix

1
. Although the Rayleigh quotients (10-10) provides more accurate estimates, the eigenval-
ues may then as an alternative be retrieved from
= R
1

(1012)
Box 10.1: M-orthonormalization of iterated basis
Evaluating the modal mass on both sides of the 1st equation of (10-8) provides
r
11
=
_
_

(1)
k+1
_
_

(1)
k+1
=
1
r
11

(1)
k+1
(1013)
where the norm
_
_

(1)
k+1
_
_
represents the square root of the modal mass of

(1)
k+1
dened as
_
_

(1)
k+1
_
_
=
_

(1) T
k+1
M

(1)
k+1
_1
2
(1014)
Now,
(1)
k+1
and r
11
are known. Scalar pre-multiplication of the 2nd equation
with
(1) T
k+1
M, and use of the orthonormality properties
(1) T
k+1
M
(2)
k+1
= 0 and

(1) T
k+1
M
(1)
k+1
= 1, provides
r
12
=
(1) T
k+1
M

(2)
k+1
r
22
=
_
_

(2)
k+1
r
12

(1)
k+1
_
_

(2)
k+1
=
1
r
22
_

(2)
k+1
r
12

(1)
k+1
_
(1015)
At the determination of
(j)
k+1
, 1 < j n
1
, the mutually ortonormal basis vectors

(1)
k+1
,
(2)
k+1
, . . . ,
(j1)
k+1
have already been determined. Scalar pre-multiplication of the
jth equation with
(i) T
k+1
M, i = 1, 2, . . . , j 1, and use of the orthogonality property

(i) T
k+1
M
(j)
k+1
= 0 provides
r
ij
=
(i) T
k+1
M

(j)
k+1
r
jj
=
_
_
_
_
_

(j)
k+1

j1

i=1
r
ij

(i)
k+1
_
_
_
_
_

(j)
k+1
=
1
r
jj
_

(j)
k+1

j1

i=1
r
ij

(i)
k+1
_
(1016)
It is characteristic for simultaneous inverse vector method in contrast to the subspace iteration
method described in Section 10.3, that eigenmodes which at one level of the iteration process
112 Chapter 10 SOLUTION OF LARGE EIGENVALUE PROBLEMS
is contained in the iterated subspace, may move out of the iterated subspace at later levels as
illustrated in Example 10.1.
Box 10.2: Simultaneous inverse vector iteration algorithm
Given the n
1
-dimensional start vector basis
0
=
_

(1)
0

(2)
0

(n
1
)
0
_
. The base vectors
must be linearly independent, but need not be normalized to unit modal mass. Repeat the
following items for k = 0, 1, . . .
1. Perform simultaneous inverse vector iteration:

k+1
= A
k
, A = K
1
M
2. Perform Gram-Schmidt orthogonalization Gram-Schmidt orthogonalization to
obtain a new M-orthonormal iterated vector basis
k+1
as explained by (10-12)-
(10-16) corresponding to the factorization:

k+1
=
k+1
R
k+1
After convergence has been achieved the eigenvalues and eigenmodes normalized to unit
modal mass are obtained from:
=
_

1
0 0
0
2
0
.
.
.
.
.
.
.
.
.
.
.
.
0 0
n
1
_

_
=
T

= R
1

, =
_

(1)

(2)

(n
1
)

As for all kind of inverse vector iteration methods the convergence rate of the iteration vector is
linear in the quantity
r
1
= max
_

2
,

2

3
, . . . ,

n
1

n
1
+1
_
(1017)
Correspondingly, the Rayleigh quotients (10-9) have quadratic convergence rate r
2
= r
2
1
.
The simultaneous inverse vector iteration algorithm always converges towards the lowest n
1
eigenmodes. Hence, no Sturm sequence check is needed to ensure that these modes have in-
deed been calculated. Further, the rate of convergence seems to be comparable for all modes
10.2 Simultaneous Inverse Vector Iteration 113
contained in the subspace, as demonstrated in Example 10.1 below.
The simultaneous inverse vector iteration algorithm may be summarized as indicated in Box
10.2.
Example 10.1: Simultaneous inverse vector iteration
Consider the generalized eigenvalue problem dened in Example 6.2. Calculate the two lowest eigenmodes and
corresponding eigenvalues by simultaneous inverse vector iteration with the start vector basis

0
=
_

(1)
0

(2)
0
_
=
_

_
0 2
1 1
2 0
_

_ (1018)
The matrix A becomes, cf. (6-44)
A = K
1
M=
_

_
2 1 0
1 4 1
0 1 2
_

_
1
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_ =
_

_
0.2917 0.1667 0.0417
0.0833 0.3333 0.0833
0.0417 0.1667 0.2917
_

_ (1019)
Then, the 1st iterated vector basis becomes, cf. (10-4)

1
=
_

(1)
1

(2)
1
_
= A
0
=
_

_
0.2917 0.1667 0.0417
0.0833 0.3333 0.0833
0.0417 0.1667 0.2917
_

_
_

_
0 2
1 1
2 0
_

_ =
_

_
0.2500 0.7500
0.5000 0.5000
0.7500 0.2500
_

_ (1020)
At the determination of
(1)
1
and r
11
in the 1st vector iteration the following calculations are performed, cf. (10-13)
_

(1)
1
=
_

_
0.2500
0.5000
0.7500
_

_ , r
11
=
_
_
_

(1)
1
_
_
_ =
_
_
_
_
_

_
0.2500
0.5000
0.7500
_

_
T
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_
_

_
0.2500
0.5000
0.7500
_

_
_
_
_
_
1
2
= 0.7500

(1)
1
=
1
0.7500
_

_
0.2500
0.5000
0.7500
_

_ =
_

_
0.3333
0.6667
1.0000
_

_
(1021)

(2)
1
and r
12
, r
22
are determined from the following calculations, cf. (10-15)
114 Chapter 10 SOLUTION OF LARGE EIGENVALUE PROBLEMS
_

(2)
1
=
_

_
0.7500
0.5000
0.2500
_

_ , r
12
=
_

_
0.3333
0.6667
1.0000
_

_
T
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_
_

_
0.7500
0.5000
0.2500
_

_ = 0.5833
r
22
=
_
_
_
_
_
_
_
_

_
0.7500
0.5000
0.2500
_

_ 0.5833
_

_
0.3333
0.6667
1.0000
_

_
_
_
_
_
_
_
_
= 0.4714

(2)
1
=
1
0.4714
_
_
_
_

_
0.7500
0.5000
0.2500
_

_ 0.5833
_

_
0.3333
0.6667
1.0000
_

_
_
_
_ =
_

_
1.1785
0.2357
0.7071
_

_
(1022)
Then, at the end of the 1st iteration the following matrices are obtained
_

_
R
1
=
_
0.7500 0.5833
0 0.4714
_

1
=
_

_
0.3333 1.1785
0.6667 0.2357
1.0000 0.7071
_

_
(1023)
The reader should verify that
1
R
1
=

1
. The corresponding matrices after the 2nd and 3rd iteration become
_

_
R
2
=
_
0.4787 0.1231
0 0.2611
_

2
=
_

_
0.5222 1.1078
0.6963 0.1231
0.8704 0.8616
_

_
(1024)
_

_
R
3
=
_
0.4943 0.0650
0 0.2529
_

3
=
_

_
0.6163 1.0583
0.7043 0.0623
0.7924 0.9339
_

_
(1025)
Convergence of the eigenmodes with the indicated number of digits were achieved after 14 iterations, where
10.3 Subspace Iteration 115
_

_
R
14
=
_
0.5000 0.0000
0 0.2500
_

14
=
_

_
0.7071 1.0000
0.7071 0.0000
0.7071 1.0000
_

_
(1026)
Presuming that convergence has occurred after the 14th iteration the following eigenvalues are obtained from
(10-10) and (10-12)
=
_

1
0
0
2
_
=
T
14
K
14
= R
1

=
_
2.0000 0.0000
0.0000 4.0000
_
=
_

(1)

(2)
_
=
14
=
_

_
0.7071 1.0000
0.7071 0.0000
0.7071 1.0000
_

_
_

_
(1027)

3
= 6, see (6-49). Then, the convergence rate of the iteration vectors becomes r
1
= max
_

2
,

2

3
_
= max
_
2
4
,
4
6
_
=
2
3
, cf. (10-17). This is a relatively large number, which is displayed in the rather slow convergence of the iterative
process. The convergence towards
(1)
and
(2)
occurred within the same iteration step. This suggests that the
convergence rate is uniform to all considered modes in the subspace.
Further it is noted that

(1)
=

2
2
_

_
1
1
1
_

_ =

2
4

_

_
0
1
2
_

_ +

2
4

_

_
2
1
0
_

_ =

2
4

(1)
0
+

2
4

(2)
0

(2)
=
_

_
1
0
1
_

_ =
1
2

_

_
0
1
2
_

_ +
1
2

_

_
2
1
0
_

_ =
1
2

(1)
0
+
1
2

(2)
0
_

_
(1028)
Hence, the 1st and 2nd eigenmode are originally in the subspace spanned by the basis
0
. As seen during the
iteration process these eigenmodes are moving out of the iterated subspace.
10.3 Subspace Iteration
As is the case for the simultaneous inverse vector iteration algorithmthe subspace iteration algo-
rithm presumes that a start subspace V
0
, spanned by the vector basis
0
=
_

(1)
0

(2)
0

(n
1
)
0
_
,
has been dened.
116 Chapter 10 SOLUTION OF LARGE EIGENVALUE PROBLEMS
At the kth iteration step of the iteration process a vector basis
k
=
_

(1)
k

(2)
k

(n
1
)
k
_
,
which spans the iterated subspace V
k
, has been obtained. Based on this a simultaneous inverse
vector iteration is performed

k+1
= A
k
, k = 0, 1, . . . (1029)
where A = K
1
M, cf. (8-4). Next, a Rayleigh-Ritz analysis is performed using

k+1
as a
Ritz basis, in order to obtain approximate solutions to the lowest n
1
eigenmodes and eigenval-
ues. This requires the solution of the following reduced generalized eigenvalue problem of the
dimension n
1
, cf. (6-10), (7-49)

K
k+1
Q
k+1
=

M
k+1
Q
k+1
R
k+1
, k = 0, 1, . . . (1030)

M
k+1
and

K
k+1
denote the mass and stiffness matrices projected on the subspace V
k+1
, cf.
(7-45)

M
k+1
=

T
k+1
M

k+1

K
k+1
=

T
k+1
K

k+1
_
_
_
(1031)
Q
k+1
=
_
q
(1)
k+1
q
(2)
k+1
q
(n
1
)
k+1
_
of the dimension n
1
n
1
contains the eigenvectors of the eigen-
value problem (10-30). In what follows the eigenvectors q
(i)
k+1
are assumed to normalized to
unit modal mass with respect to the projected mass matrix, i.e.
q
(i) T
k+1

M
k+1
q
(j)
k+1
=
_
_
_
0 , i = j
1 , i = j
(1032)
R
k+1
is a diagonal matrix containing the corresponding eigenvalues of (10-29) in the main
diagonal
R
k+1
=
_

1,k+1
0 0
0
2,k+1
0
.
.
.
.
.
.
.
.
. 0
0 0
n
1
,k+1
_

_
(1033)
The eigenvalues
j,k+1
, j = 1, . . . , n
1
indicates the estimate of the eigenvalues after the kth
iteration. These are all upperbounds to the corresponding eigenvalues of the full problem, cf.
(7-57).
At the end of the kth iteration step a new estimate of the lowest n
1
eigenvectors are determined
from, cf. (7-51)
10.3 Subspace Iteration 117

k+1
=

k+1
Q
k+1
(1034)
If the column vectors in Q
k+1
have been normalized to unit modal mass with respect to

M
k+1
,
the M-orthogonal column vectors of
k+1
will automatically be normalized to unit modal mass
with respect to M, cf. (7-55).
Next, the calculations in (10-30), (10-31), (10-34) are repeated with the new estimate of the
normalized eigenmodes
k+1
.
At convergence of the subspace iteration algorithm the lowest n
1
eigenvectors and eigenvalues
are retrieved from
=
_

(1)

(2)

(n
1
)

, =
_

1
0 0
0
2
0
.
.
.
.
.
.
.
.
. 0
0 0
n
1
_

_
= R

= Q

(1035)
At convergence, Q

can be shown to be a diagonal matrix, where the numerical value of the


components are equal to the eigenvalue of the original problem as indicated in (10-35).
It should be realized that subspace iteration involves iteration at two levels. Primary, a global si-
multaneous inverse vector iteration loop as dened by the index k is performed. Inside this loop
a secondary iteration process is performed at the solution of the eigenvalue problem (10-30).
Usually, the latter problem is solved iteratively by means of a general Jacobi iteration algorithm
as described in Section 9.3. Because the applied similarity transformations in the general Jacobi
algorithm are not orthonormal, the eigenvectors q
(j)
k
are not normalized to unit modal mass at
convergence. Hence, in order to fulll the requirements (10-32) this normalization should be
performed after convergence. Further, the eigenvalues will not be ordered in ascending order of
magnitude as presumed in (10-35), cf. Box 9.4.
The convergence rate for the components in the kth eigenmode and the kth eigenvalue, r
1,k
and
r
2,k
, are dened as
r
1,k
=

k

n
1
+1
r
2,k
=

2
k

2
n
1
+1
= r
2
1,k
_

_
, k = 1, . . . , n
1
(1036)
Hence, convergence is achieved at rst for the lowest mode and latest for mode k = n
1
, as has
been demonstrated in Example 10.2 below. This represents a marked difference from simul-
taneous inverse vector iteration, where as mentioned the convergence rate seems to be almost
identical for all modes contained in the subspace. A rule of thumb says that approximately 10
118 Chapter 10 SOLUTION OF LARGE EIGENVALUE PROBLEMS
subspace iterations are needed to obtain a solution for the components of
(1)
with 6 correct
digits.
Box 10.3: Subspace iteration algorithm
Given the n
1
-dimensional start vector basis
0
=
_

(1)
0

(2)
0

(n
1
)
0

. The base vectors


must be linearly independent, but the base vectors need not be normalized to unit modal
mass. Repeat the following items for k = 0, 1, . . .
1. Perform simultaneous inverse vector iteration:

k+1
= A
k
, A = K
1
M
2. Calculate projected mass and stiffness matrices:

M
k+1
=

T
k+1
M

k+1
,

K
k+1
=

T
k+1
K

k+1
3. Solve the generalized eigenvalue problem of dimension n
1
by means of a general
Jacobi iteration algorithm with the eigenvectors Q
k+1
normalized to unit modal mass
at exit:

K
k+1
Q
k+1
=

M
k+1
Q
k+1
R
k+1
4. Calculate new solution to eigenvectors:

k+1
=

k+1
Q
k+1
After convergence has been achieved the eigenvalues and eigenmodes normalized to unit
modal mass are obtained from:
=
_

1
0 0
0
2
0
.
.
.
.
.
.
.
.
.
.
.
.
0 0
n
1
_

_
= R

= Q

, =
_

(1)

(2)

(n
1
)

Finally, a Sturm sequence check should be performed to ensure that the lowest n
1
eigen-
pairs have been calculated.
In order to speed up the iteration process towards the n
1
modes actually wanted, the dimension
of the iterated subspace is sometimes increased to n
2
> n
1
. Then, the convergence rate of the
iteration vector the highest mode of interest decreases to
10.3 Subspace Iteration 119
r
1,n
1
=

n
1

n
2
+1
(1037)
In case of an adverse choice of the start basis vector
0
it may happen that one of the eigen-
modes searched for,
(j)
, j = 1, . . . , n
1
, is M-orthogonal to start subspace, i.e.

(j) T
M
(k)
0
= 0 , k = 1, 2, . . . , n
1
(1038)
In this case the subspace iteration algorithmconverges towards the eigenmodes
(1)
, . . . ,
(j1)
,

(j+1)
, . . . ,
(n
1
)
,
(n
1
+1)
. In principle a similar problem occurs in simultaneous inverse vector
iteration, although round-off errors normally eliminates this possibility.
Singular to subspace iteration is that eigenmodes contained in the initial basis
0
remain in
later iterated bases. Hence, if
(j)
, j = n
1
+ 1, . . . , n is contained in
0
, this mode will be
among the calculated modes.
In both cases we are left with the problem to decide whether the calculated n
1
eigenmodes are
in indeed the lowest n
1
modes of the full system. For this reason a subspace iteration should
always be followed by a Sturm sequence check. This is performed in the following way. Let
be a number slightly larger than the largest calculated eigenvalue
n
1
,
, and perform the
following Gauss factorization of the matrix KM
KM= LDL
T
(1039)
where L and D are given by (6-63), (6-64). Then, the number of eigenvalue less than is equal
to the number of negative elements in the diagonal of the diagonal matrix D., cf. Section 6.2. Al-
ternatively, the same information may be withdrawn fromthe number of sign changes in the sign
sequence sign
_
P
(n)
()
_
, sign
_
P
(n1)
()
_
, . . . , sign
_
P
(0)
()
_
, where P
(n1)
(), . . . , P
(0)
() de-
notes the Sturm sequence of characteristic polynomials, and P
(n)
() is a dummy positive com-
ponent in the sequence, cf. Section 6.3.
The marked difference between the subspace iteration algorithm and and the simultaneous in-
verse vector iteration algorithmis that the orthonormalization process to prevent ill-conditioning
of the iterated vector base in the former case is performed by an eigenvector approach related
to the Rayleigh-Ritz analysis, whereas a Gram-Schmidt orthogonalization procedure is used in
the latter case. There are no marked difference in the rate of convergence of the two algorithms.
Example 10.2: Subspace iteration
The generalized eigenvalue problem dened in Example 10.1 dened by (6-44) is considered again. Using the
same initial start basis (10-18) as in Example 10.1, the problem is solved in this example by means of subspace
iteration.
120 Chapter 10 SOLUTION OF LARGE EIGENVALUE PROBLEMS
At the 1st iteration step (k = 0) the simultaneous inverse vector iteration produces the vector basis

1
, which is
unchanged given by (10-20).
Based on

1
the following projected mass and stiffness matrices are calculated, cf. (6-44), (10-20), (10-31)

M
1
=

T
1
M

1
=
_

_
0.2500 0.7500
0.5000 0.5000
0.7500 0.2500
_

_
T
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_
_

_
0.2500 0.7500
0.5000 0.5000
0.7500 0.2500
_

_ =
_
0.5625 0.4375
0.4375 0.5625
_

K
1
=

T
1
K

1
=
_

_
0.2500 0.7500
0.5000 0.5000
0.7500 0.2500
_

_
T
_

_
2 1 0
1 4 1
0 1 2
_

_
_

_
0.2500 0.7500
0.5000 0.5000
0.7500 0.2500
_

_ =
_
1.2500 0.7500
0.7500 1.2500
_
_

_
(1040)
The corresponding eigenvalue problem (10-30) becomes

K
1
Q
1
=

M
1
Q
1
R
1

_
1.2500 0.7500
0.7500 1.2500
_
_
q
(1)
1
q
(2)
1
_
=
_
0.5625 0.4375
0.4375 0.5625
_
_
q
(1)
1
q
(2)
1
_
_

1,1
0
0
2,1
_

R
1
=
_
2 0
0 4
_
=
_

1
0
0
2
_
, Q
1
=
_
2
2
2

2
2
2
_
(1041)
The estimate of the lowest eigenvectors after the 1st iteration becomes, cf. (10-34)

1
=

1
Q
1
=
_

_
0.2500 0.7500
0.5000 0.5000
0.7500 0.2500
_

_
_
2
2
2

2
2
2
_
=
_

2
2
1

2
2
0

2
2
1
_

_ =
_

(1)

(2)
_
(1042)
(10-41) and (10-42) indicate the exact eigenvalues and eigenmodes, cf. (6-49), (6-51). Hence, convergence is
obtained in just a single iteration. This is so because the start subspace V
0
, spanned by the vector basis
0
contains
the eigenmodes
(1)
and
(2)
as shown by (10-28). This property is singular to the subspace iteration algorithm
compared to the simultaneous inverse vector iteration technique.
Next, let us perform the same calculations using the start basis

0
=
_

(1)
0

(2)
0
_
=
_

_
1 1
2 2
3 3
_

_ (1043)
The simultaneous inverse vector iteration (10-29) provides, cf. (10-19)

1
= A
0
=
_

_
0.2917 0.1667 0.0417
0.0833 0.3333 0.0833
0.0417 0.1667 0.2917
_

_
_

_
1 1
2 2
3 3
_

_ =
_

_
0.7500 0.0833
1.0000 0.3333
1.2500 0.5833
_

_ (1044)
10.3 Subspace Iteration 121
The projected mass and stiffness matrices become

M
1
_

_
0.7500 0.0833
1.0000 0.3333
1.2500 0.5833
_

_
T
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_
_

_
0.7500 0.0833
1.0000 0.3333
1.2500 0.5833
_

_ =
_
2.0625 0.0625
0.0625 0.2847
_

K
1
=
_

_
0.7500 0.0833
1.0000 0.3333
1.2500 0.5833
_

_
T
_

_
2 1 0
1 4 1
0 1 2
_

_
_

_
0.7500 0.0833
1.0000 0.3333
1.2500 0.5833
_

_ =
_
4.2500 0.2500
0.2500 1.5833
_
_

_
(1045)
The solution of the corresponding generalized eigenvalue problem (10-30) becomes
R
1
=
_
2.0534 0
0 5.5656
_
, Q
1
=
_
0.6982 0.0254
0.0851 1.8784
_
(1046)
The estimate of the lowest eigenmode after the 1st iteration becomes, cf. (10-34)

1
=

1
Q
1
=
_

_
0.7500 0.0833
1.0000 0.3333
1.2500 0.5833
_

_
_
0.6982 0.0254
0.0851 1.8784
_
=
_

_
0.5165 0.1375
0.7265 0.6516
0.8231 1.0640
_

_ (1047)
Correspondingly, after the 2nd, 7th and 14th iteration steps the following matrices are calculated
R
2
=
_
2.0118 0
0 5.2263
_
, Q
2
=
_
2.0171 0.1513
0.0887 5.3145
_

2
=
_

_
0.6195 0.0821
0.7241 0.5686
0.7535 1.1604
_

_
_

_
(1048)
R
7
=
_
2.0000 0
0 4.0533
_
, Q
7
=
_
2.0000 0.0011
0.0007 4.0661
_

7
=
_

_
0.7067 0.8711
0.7074 0.1155
0.7069 1.1020
_

_
_

_
(1049)
122 Chapter 10 SOLUTION OF LARGE EIGENVALUE PROBLEMS
R
14
=
_
2.0000 0
0 4.0002
_
, Q
14
=
_
2.0000 0.0000
0.0000 4.0002
_

14
=
_

_
0.7071 0.9931
0.7071 0.0068
0.7071 1.0068
_

_
_

_
(1050)
As seen the subspace iteration process determines the 1st eigenvalue and eigenvector after 7 iteration, whereas the
2nd eigenvector has not yet been calculated with the sufciently accuracy even after 14 iterations. By contrast
the simultaneous inverse vector iteration managed to achieve convergence for this quantity after 14 iterations, see
(10-26).
The 2nd calculated eigenvalue becomes
2,14
= 4.0002. Then, let = 4.05 and perform a Gauss factorization of
the matrix K4.05M, i.e.
K4.05M=
_

_
0.0250 1.0000 0.0000
1.0000 0.0500 1.0000
0.0000 1.0000 0.0250
_

_ =
LDL
T
=
_

_
1 0 0
40 1 0
0 0.0250 1
_

_
_

_
0.0250 0 0
0 39.950 0
0 0 0.0500
_

_
_

_
1 40 0
0 1 0.0250
0 0 1
_

_ (1051)
It follows that two components in the main diagonal of D are negative, from which is concluded that two eigenval-
ues are smaller than = 4.05. In turn this means that the two eigensolutions obtained by (10-46) are indeed the
lowest two eigensolutions of the original system.
Finally, consider the start vector basis

0
=
_

(1)
0

(2)
0
_
=
_

_
0 2
1 1
2 0
_

_ (1052)
Now,

(1) T
M
(1)
0
=

2
2
_

_
1
1
1
_

_
T
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_
_

_
0
1
2
_

_ = 0

(1) T
M
(2)
0
=

2
2
_

_
1
1
1
_

_
T
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_
_

_
2
1
0
_

_ = 0
_

_
(1053)
It follows that the lowest eigenmode
(1)
is M-orthogonal to the selected start vector basis. Hence, it should be
expected that the algorithm converges towards
(2)
and
(3)
. Moreover, in the present three dimensional case a
start subspace, which is M-orthogonal to
(1)
, must contain
(2)
and
(3)
. Actually, cf. (6-54)
10.4 Characteristic Polynomial Iteration 123

(2)
=
_

_
1
0
1
_

_ =
1
2

_

_
0
1
2
_

_ +
1
2

_

_
2
1
0
_

_ =
1
2

(1)
0
+
1
2

(2)
0

(3)
=

2
2
_

_
1
1
1
_

_ =

2
4

_

_
0
1
2
_

_ +

2
4

_

_
2
1
0
_

_ =

2
4

(1)
0
+

2
4

(2)
0
_

_
(1054)
Hence, convergence towards
(2)
and
(3)
should take place in a single iteration step. Actually, after the 1st
subspace iteration the following matrices are calculated
R
1
=
_
4 0
0 6
_
, Q
1
=
_
2.0000 2.1213
2.0000 2.1213
_

2
=
_

_
1.0000 0.7071
0.0000 0.7071
1.0000 0.7071
_

_
_

_
(1055)
The 2nd calculated eigenvalue becomes
2,1
= 6. In order to check whether
2,1
=
2
or
2,1
=
3
we choose
= 6.05, and perform a Gauss factorization of the matrix K6.05M, i.e.
K6.05M=
_

_
1.0250 1.0000 0.0000
1.0000 2.0500 1.0000
0.0000 1.0000 1.0250
_

_ =
LDL
T
=
_

_
1 0 0
0.9756 1 0
0 0.9308 1
_

_
_

_
1.0250 0 0
0 1.0744 0
0 0 0.0942
_

_
_

_
1 0.9756 0
0 1 0.9308
0 0 1
_

_ (1056)
It follows that three components in the main diagonal of D are negative, from which is concluded that the largest of
the two calculated eigenvalues must be equal to the largest eigenvalue of the original system, i.e.
2,1
=
3
. Still,
we do not know whether
1,1
=
1
or
1,1
=
2
. In order to investigate this another calculation is performed with
= 4.05. The Gauss factorization of the matrix K4.05Mhas already been performed as indicated by (10-51).
Since this result shows that two eigenvalues exist, which are smaller than = 4.05,
1,1
= 4 must be the largest
of these, and hence the 2nd eigenvalue of the original system.
10.4 Characteristic Polynomial Iteration
In this section it is assumed that the stiffness and mass matrices have been reduced to a three
diagonal form through a series of similarity transformations as explained in Section 9.4, corre-
sponding to, cf. (9-32)
124 Chapter 10 SOLUTION OF LARGE EIGENVALUE PROBLEMS
K =
_

1

1
0 0 0

1

2

2
0 0
0
2

3
0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0
n1

n1
0 0 0
n1

n
_

_
(1057)
M=
_

1

1
0 0 0

1

2

2
0 0
0
2

3
0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0
n1

n1
0 0 0
n1

n
_

_
(1058)
In principle polynomial iteration methods works equally well on fully populated stiffness and
mass matrices. However, the computational efforts become too extensive to make them com-
petitive in this case.
Now, the characteristic equation of the generalized eigenvalue problem can be written in the
following form, cf. (6-6)
P() = P
(0)
() = det
_
KM
_
=
det
_
_
_
_
_
_
_
_
_
_
_

1

1

1
0 0 0

1

2

2

2

2
0 0
0
2

2

3

3
0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0
n1

n1

n1

n1
0 0 0
n1

n1

n

n
_

_
_
_
_
_
_
_
_
_
_
_
=
_

n
_
P
(1)
()
_

n1

n1
_
2
P
(2)
() (1059)
The last statement in (10-59) is obtained by expanding the determinant after the components in
the last row. P
(1)
() and P
(2)
() denote the characteristic polynomials obtained by omitting
the last row and column, and the last two rows and columns in the matrix KM, respectively,
cf. (6-85). The validity of the result (10-59) has been demonstrated for a 4-dimensional case
10.4 Characteristic Polynomial Iteration 125
in Example 10.3. In turn, P
(1)
() may be expressed in terms of P
(2)
() and P
(3)
() by a
similar expression. Actually, the complete Sturm sequence of characteristic polynomials may
be calculated recursively from the algorithm
P
(n1)
() =
_

1
_
P
(n2)
() =
_

1
__

2
_

1
_
2
P
(nm)
() =
_

m
_
P
(nm+1)
()
_

m1

m1
_
2
P
(nm+2)
() , m = 3, 4, . . . , n
_

_
(1060)
The effectiveness of characteristic polynomial iteration methods for matrices on three diagonal
form relies on the result (10-60).
Assume, that the jth eigensolution
_

j
,
(j)
_
is wanted. At rst one needs to determine two
gures
0
and
1
fullling
j1
<
0
<
j
<
1
<
j+1
. This is done based on the sequence of
signs sign(P
(n)
()), sign(P
(n1)
()), . . . , sign(P
(1)
()), sign(P
(0)
()), in which the number
of sign changes indicates the total number of eigenvalues smaller than , and where P
(n)
() is
a dummy positive gure, cf. Section 6.3.
Below, on Fig. 10-2 are marked two points
k1
and
k
on the -axis in the vicinity of the
eigenvalue searched for, which is
1
in the illustrated case. The values of the characteristic
polynomial in these points, P(
k1
) and P(
k
), may easily be calculated by means of (10-60)
(notice that P() = P
(0)
()). The line through the points
_

k1
, P(
k1
)
_
and
_

k
, P(
k
)
_
has the equation
y() = P(
k
) +
_
P(
k
) P(
k1
)
_

k

k1
(1061)

1

2

3

k1

k+1
P()
y() = P(
k
) +
_
P(
k
) P(
k1
)
_

k

k1
Fig. 102 Secant iteration of characteristic equation towards
1
.
The line dened by (10-61) intersects the -axis at the point
k+1
. It is clear that this point
will be closer to
j
than both
k1
and
k
. The intersection point of the line with the -axis is
obtained as the solution to the equation y() = y(
k+1
) = 0, which is given as

k+1
=
k

P(
k
)
P(
k
) P(
k1
)
_

k1
_
(1062)
126 Chapter 10 SOLUTION OF LARGE EIGENVALUE PROBLEMS
Next, the iteration index is raised to k + 1, and a new intersection point
k+2
is obtained. The
sequence
0
,
1
,
2
, . . . converges relatively fast to the eigenvalue
j
as demonstrated below in
Example 10.4.
Box 10.4: Characteristic polynomial iteration algorithm
In order to calculate the jth eigenvalue
j
and the jth eigenvector
(j)
the following items
are performed
1. Based on the sequence of signs sign(P
(n)
()), sign(P
(n1)
()), . . . , sign
_
P
(1)
()),
sign
_
P
(0)
()) of the Sturm sequence of characteristic polynomials determine two
gures
0
and
1
fullling the inequalities:

j1
<
0
<
j
<
1
<
j+1
2. Perform secant iteration in search for
j
=

according to the algorithm:

k+1
=
k

P(
k
)
P(
k
)P(
k1
)
_

k1
_
3. Determine the un-normalized eigenmode

(j)
from the algorithm (10-65).
4. Normalize the eigenmode to unit modal mass:

(j)
=

(j)

(j) T
M

(j)
Alternatively, the eigenvalue
j
may be determined by means of Sturm sequence check, where
the interval ]
0
,
1
[ is increasingly narrowed around the eigenvalue
j
by bisection of the pre-
vious interval. This algorithm, which is merely telescope method described in Section 6.2, will
generally converge much slower than the secant iteration algorithm.
Finally, the components
_

(j)
1
,
(j)
2
, . . . ,
(j)
n
_
of the eigenmode
(j)
are determined as non-
trivial solutions to the linear equations
det
_
K
j
M
_

(j)
= 0
_

1

1

1
0 0 0

1

2

2

2

2
0 0
0
2

2

3

3
0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0
n1

n1

n1

n1
0 0 0
n1

n1

n

n
_

_
_

(j)
1

(j)
2

(j)
3
.
.
.

(j)
n1

(j)
n
_

_
=
_

_
0
0
0
.
.
.
0
0
_

_
(1063)
10.4 Characteristic Polynomial Iteration 127
Let

(j)
=
_

(j)
1
,

(j)
2
, . . . ,

(j)
n
_
denote the eigenmode with components arbitrarily normal-
ized. Setting

(j)
1
= 1 the equations (10-62) may be solved recursively from above by the
following algorithm

(j)
2
=

1
1

(j)
3
=

2
1

2

(j)
2

(j)
m
=

m2

m2

m1

m1

(j)
m2


m1

m1

m1

m1

(j)
m1
, m = 4, . . . , n
_

_
(1064)
Hence, the determination of the components of the vector

(j)
is almost free. Obvious, the
indicated algorithm breaks down, if any of the denominators
m1

j

m1
= 0. This means
that the algorithm should be extended with alternatives to deal with such exceptions.
Finally, the eigenmode

(j)
should be normalized to unit modal mass as follows

(j)
=

(j)

(j) T
M

(j)
(1065)
Example 10.3: Evaluation of determinant
The determinant of the following matrix on a three diagonal form of the dimension 4 4 is wanted
K =
_

1

1
0 0

1

2

2
0
0
2

3

3
0 0
3

4
_

_
(1066)
Expansion of the determinant after the components in the 4th row provides
det
_
K
_
= P
(0)
=
4
det
_
_
_
_

1

1
0

1

2

2
0
2

3
_

_
_
_
_
3
det
_
_
_
_

1

1
0

1

2
0
0
2

3
_

_
_
_
_ =

4
det
_
_
_
_

1

1
0

1

2

2
0
2

3
_

_
_
_
_
2
3
det
__

1

1

1

2
__
=
4
P
(1)

2
3
P
(2)
(1067)
(10-67) has the same recursive structure as described by (10-59).
128 Chapter 10 SOLUTION OF LARGE EIGENVALUE PROBLEMS
Example 10.4: Characteristic polynomial iteration
The generalized eigenvalue problem dened in Example 6.2 is considered again. Calculate the 3rd eigenvalue by
secant iteration on the characteristic polynomial, and next determine the corresponding eigenvector.
At rst a calculation with = 2.5 is performed, which produces the following results
K2.5M=
_

_
0.7500 1.0000 0.0000
1.0000 1.5000 1.0000
0.0000 1.0000 0.7500
_

_
_

_
P
(3)
(2.5) = 1 , sign(P
(3)
(2.5)) = +
P
(2)
(2.5) = 0.7500 , sign(P
(2)
(2.5)) = +
P
(1)
(2.5) = 0.7500 1.5000 (1)
2
= 0.1250 , sign(P
(1)
(2.5)) = +
P
(0)
(2.5) = 0.7500 0.1250 (1)
2
0.7500 = 0.6563 , sign(P
(0)
(2.5)) =
_

_
(1068)
Hence, the sign sequence of the Sturm sequence becomes +++. One sign change occurs in this sequence from
which is concluded that the lowest eigenvalue
1
is smaller than = 2.5.
Next, a calculation with = 5.5 is performed, which provided the results
K5.5M=
_

_
0.7500 1.0000 0.0000
1.0000 1.5000 1.0000
0.0000 1.0000 0.7500
_

_
_

_
P
(3)
(5.5) = 1 , sign(P
(3)
(5.5)) = +
P
(2)
(5.5) = 0.7500 , sign(P
(2)
(5.5)) =
P
(1)
(5.5) = (0.7500)
_
1.5000
_
(1)
2
= 0.1250 , sign(P
(1)
(5.5)) = +
P
(0)
(5.5) = (0.7500) 0.1250 (1)
2
(0.7500) = 0.6563 , sign(P
(0)
(5.5)) = +
_

_
(1069)
Now, the sign sequence of the Sturm sequence becomes +++, in which two sign changes occur, from which is
concluded that the lowest two eigenvalues
1
and
2
are both smaller than = 5.5.
Finally a calculation with = 6.5 is performed, which provided the results
K6.5M=
_

_
1.2500 1.0000 0.0000
1.0000 2.5000 1.0000
0.0000 1.0000 1.2500
_

_
_

_
P
(3)
(6.5) = 1 , sign(P
(3)
(6.5)) = +
P
(2)
(6.5) = 1.2500 , sign(P
(2)
(6.5)) =
P
(1)
(6.5) = (1.2500)
_
2.5000
_
(1)
2
= 2.1250 , sign(P
(1)
(6.5)) = +
P
(0)
(6.5) = (1.2500) 2.1250 (1)
2
(1.2500) = 1.4063 , sign(P
(0)
(6.5)) =
_

_
(1070)
In this cse the sign sequence of the Sturm sequence becomes ++, corresponding to three sign changes. Hence,
it is concluded that all three eigenvalues
1
,
2
and
3
are smaller than = 6.5.
10.4 Characteristic Polynomial Iteration 129
From the Sturm sequence checks it is concluded that 5.5 <
3
< 6.5. Then, we may use the following start
values,
0
= 5.5 and
1
= 6.5, in the secant iteration algorithm. Moreover P(5.5) = P
(0)
(5.5) = 0.6563 and
P(6.5) = P
(0)
(6.5) = 1.4063, cf. (10-69) and (10-70). Then, from (10-61) it follows for k = 1

2
= 6.5
(1.4063)
(1.4063) 0.6563
(6.5 5.5) = 5.8182 (1071)
Next, P(
2
) = P(5.8182) = 0.3156 is calculated by means of the algorithm (10-60), and a new value
3
can be
obtained from

3
= 5.8182
0.3156
0.3156 (1.4063)
(5.8182 6.5) = 5.9431 (1072)
During the next 5 iterations the following results were obtained

4
= 6.00900500472288

5
= 5.99960498912941

6
= 5.99999734553262

7
= 6.00000000078659

8
= 6.00000000000000
_

_
(1073)
As seen the convergence of the secant iteration algorithm is very fast.
The linear equation (10-63) attains the form
_
K6.0000M
_

(3)
= 0
_

_
1 1 0
1 2 1
0 1 1
_

_
_

(3)
1

(3)
2

(3)
3
_

_ =
_

_
0
0
0
_

_ (1074)
Setting

(3)
1
= 1 the algorithm (10-64) now provides

(3)
2
=
(1)
(1)
1 = 1

(3)
3
=
(1)
(1)
1
(2)
(1)
(1) = 1
_

(3)
=
_

_
1
1
1
_

_ (1075)
Normalization to unit modal mass provides, cf. (6-54)

(3)
=

2
2
_

_
1
1
1
_

_ (1076)
130 Chapter 10 SOLUTION OF LARGE EIGENVALUE PROBLEMS
10.5 Exercises
10.1 Given the following mass- and stiffness matrices
M=
_

_
2 0 0
0 2 1
0 1 1
_

_
, K =
_

_
6 1 0
1 4 1
0 1 2
_

_
(a.) Calculate the two lowest eigenmodes and corresponding eigenvalues by simultaneous
inverse vector iteration with the start vector basis

0
=
_

(1)
0

(2)
0

=
_

_
1 1
1 0
1 1
_

_
10.2 Given the symmetric matrices Mand Kof dimension n.
(a.) Write a MATLABprogram, which for given start basis performs simultaneous inverse
vector iteration for the determination of the lowest n
1
eigenmodes and eigenvalues.
10.3 Consider the general eigenvalue problem in Exercise 10.1.
(a.) Calculate the two lowest eigenmodes and corresponding eigenvalues by subspace it-
eration using the same start basis as in Exercise 10.1.
10.4 Given the symmetric matrices Mand Kof dimension n.
(a.) Write a MATLAB program, which for given start basis performs subspace iteration
for the determination of the lowest n
1
eigenmodes and eigenvalues.
10.5 Consider the general eigenvalue problem in Exercise 10.1.
(a.) Calculate the 3rd eigenmode and eigenvalue by Sturm sequence iteration (telescope
method).
10.6 Given the symmetric matrices Mand Kof dimension n on three diagonal form.
(a.) Write a MATLAB program, which performs Sturm sequence check and secant iter-
ation iteration for the determination of the jth eigenvalue, and next determines the
corresponding eigenvector.
Index
adjoint eigenvalue problem, 11
aeroelastic damping matrix, 7
alternative inverse vector iteration, 60, 61
central difference operator, 26
characteristic equation, 8, 15, 25, 28, 53, 108,
124
characteristic polynomial, 8, 18, 24, 124
characteristic polynomial iteration, 54, 91, 108,
123, 125, 126, 128
Choleski decomposition, 29, 31, 35
compatible matrix and vector norms, 47, 50
complex unit, 8
convergence rate of iteration vector, 57, 67, 68,
73, 112, 115
convergence rate of Rayleigh quotient, 58, 67,
70, 112
convergence rate of the iteration vector, 118
coupled 1st order differential equations, 11
cubic convergence, 70
damped eigenmode, 11
damped eigenvalue, 11
damped modal mass, 12
damping matrix, 7
degree of freedom, 33, 107
diagonal matrix, 55, 111, 116, 117
dynamic load vector, 7
eigenmode, 9, 79
eigenvalue, 8
eigenvalue separation principle, 24, 26
error analysis of calculated eigenvalues, 33, 47,
51
error vector, 47
Euclidean matrix norm, 50
Euclidean vector norm, 47, 49
forward vector iteration, 54, 6365
forward vector iteration with Gram-Schmidt or-
thogonalization, 73, 74, 91
forward vector iteration with shift, 68, 69
Gauss factorization, 18, 20, 24, 29, 119, 122,
123
general Jacobi iteration method, 80, 85, 88, 117,
118
generalized eigenvalue problem, 118
generalized eigenvalue problem, 8, 11, 15, 22,
23, 27, 29, 35, 42, 54, 5961, 64, 71,
79, 80, 85, 88, 90, 95, 96, 100, 107,
109, 113, 116, 121, 124
Gram-Schmidt orthogonalization, 73, 99, 109,
119
Guyan reduction, 33
Hilbert matrix norm, 47, 50, 51
HOQR iteration method, 80, 101
Householder reduction method, 90, 95, 96, 101,
109
innity matrix norm, 50
innity vector norm, 49
inverse vector iteration, 54, 59, 65, 109
inverse vector iteration with Gram-Schmidt or-
thogonalization, 73, 74, 91
inverse vector iteration with Rayleigh quotient
shift, 70, 71
inverse vector iteration with shift, 66, 67, 70
iterative similarity transformation method, 79,
80
Jordan boxes, 10
Jordan normal form, 10
Kroneckers delta, 98
linear convergence, 57, 73
131
132 INDEX
linear viscous damping, 7
lower triangular matrix, 18, 21, 29, 31, 36
M-orthonormalization of vector basis, 111
mass matrix, 7, 91, 123
matrix norm, 49
modal coordinate, 40, 41, 55
modal mas, 112
modal mass, 9, 16, 35, 39, 43, 47, 51, 54, 63, 71,
73, 74, 79, 110, 111, 118, 126
modal matrix, 9, 16, 51, 79, 107
modal space, 66
omission criteria for similarity transformation,
82, 83, 87, 88
one matrix norm, 50
one vector norm, 49
orthogonality property, 9, 12, 55
orthonormal matrix, 10, 80, 81, 91, 98
p vector norm, 49
partitioned matrix, 33
permutation of numbers, 81
positive denite matrix, 7, 30
positive semi-denite matrix, 7
projected mass matrix, 41, 45, 46, 116, 118, 120,
121
projected stiffness matrix, 41, 45, 46, 116, 118,
120, 121
QR iteration method, 91, 98, 100, 101
quadratic convergence, 59
Rayleigh quotient, 38, 39, 53, 55, 57, 60, 61, 63,
64, 66, 70, 71, 76, 110
Rayleighs principle, 38
Rayleigh-Ritz analysis, 33, 38, 40, 43, 44, 108,
116, 119
relative error of iteration vector, 57, 60
relative error of Rayleigh quotient, 58, 60
relatively errors of Rayleigh quotient, 65
Ritz basis, 40, 43, 46, 108, 116
secant iteration, 126, 128, 129
self-adjoint eigenvalue problem, 13
shift on stiffness matrix, 27, 28, 65
similarity transformation, 10, 29, 54, 79, 85, 90,
98, 100, 123
similarity transformation matrix, 29, 79, 85, 91,
95, 96, 98, 100
simple eigenvalues, 8
simultaneous inverse vector iteration, 107109,
111113, 116120
special eigenvalue problem, 8, 29, 30, 48, 51,
80, 81, 83, 95, 96, 98, 100
special Jacobi iteration method, 8083, 91
spectral decomposition, 30
spectral matrix norm, 50
standard eigenvalue problem, 90, 91, 109
state vector formulation, 11
static condensation, 33, 36, 44
stiff-body motion, 28
stiffness matrix, 7, 93, 98, 123
Sturm sequence, 24, 119, 125, 126
Sturm sequence check, 25, 67, 109, 112, 118,
119, 126, 128, 129
Sturm sequence iteration, 109
subspace iteration, 100, 107, 108, 111, 115, 118
120
sweep, 82, 83, 87, 89
symmetric matrix, 7, 90, 91
telescope method, 19, 126
three diagonal matrix, 90, 95, 101, 109, 123, 127
two vector norm, 49
undamped circular eigenfrequency, 8, 26
undamped eigenvibration, 8
unitary matrix, 10
upper three diagonal matrix, 101
upper triangular matrix, 21, 98, 109111
vector basis, 39, 40, 99, 108
vector iteration method, 53
vector iteration with deation, 73
vector iteration with Gram-Schmidt orthogonal-
ization, 73
vector iteration with shift, 65
vector norm, 49
vibrating string, 26
wave equation, 26
APPENDIX A
Solutions to Exercises
133
134 Chapter A Solutions to Exercises
A.1 Exercise 6.1
Given the following mass- and stiffness matrices
M=
_

_
1 0 0
0 2 0
0 0
1
2
_

_
, K =
_

_
2 1 0
1 2 0
0 0 3
_

_
(1)
1. Calculate the eigenvalues and eigenmodes normalized to unit modal mass.
2. Determine two vectors that are M-orthonormal, but are not eigenmodes.
3. Show that the eigenvalue separation principle is valid for the considered example.
SOLUTIONS:
Question 1:
The generalized eigenvalue problem (6-5) becomes
_

_
2
j
1 0
1 2 2
j
0
0 0 3
1
2

j
_

_
_

(j)
1

(j)
2

(j)
3
_

_
=
_

_
0
0
0
_

_
(2)
Upon evaluating the determinant of the coefcient matrix after the 3rd row, the characteristic equation
(6-6) becomes
P() = P
(0)
() = det
_
_
_
_

_
2
j
1 0
1 2 2
j
0
0 0 3
1
2

j
_

_
_
_
_
=
_
3
1
2

j
__
_
2
j
_
_
2 2
j
_

_
1
_
2
_
=
_
3
1
2

j
__
3 6
j
+ 2
2
j
_
= 0

j
=
_

_
1
2
_
3

3
_
, j = 1
1
2
_
3 +

3
_
, j = 2
6 , j = 3
(3)
The largest eigenvalue
3
= 6 is obtained when the 1st factor in (3) is equal to 0, whereas the two lowest
solutions corresponds vanishing of the 2nd factor.
Because the 3rd eigenmode is decoupled from the 1st and 2nd the solution method is slightly different in
this case. As seen be inspection the solutions have the form
A.1 Exercise 6.1 135

(j)
=
_

(j)
1

(j)
2
0
_

_
, j = 1, 2

(3)
=
_

_
0
0
1
_

_
_

_
(4)
The 1st and 2nd components of the 1st and 2nd eigenmodes,
(j)
1
and
(j)
2
are determined from the two
rst equations in (2). We choose to set
(j)
1
= 1, and determine
(j)
2
from the 1st equations. Notice that
we may as well have determined
(j)
2
from the 2nd equation. Then
_
2
j
_
1
(j)
2
= 0
(j)
2
=
_
1
2
_
1 +

3
_
, j = 1
1
2
_
1

3
_
, j = 2
(5)
The modal masses become
M
j
=

(j) T
M

(j)
=
_

_
1

(j)
2
0
_

_
T
_

_
1 0 0
0 2 0
0 0
1
2
_

_
_

_
1

(j)
2
0
_

_
= 1 + 2
_

(j)
2
_
2
=
_
3 +

3 , j = 1
3

3 , j = 2
(6)
M
3
=

(3) T
M

(3)
=
_

_
0
0
1
_

_
T
_

_
1 0 0
0 2 0
0 0
1
2
_

_
_

_
0
0
1
_

_
=
1
2
(7)

(1)
denotes the 1st eigenmode normalized to unit modal mass. This is related to

(1)
in the following
way

(1)
=
1

M
1

(1)
=
1
_
3 +

3
_

_
1
1
2
_
1 +

3
_
0
_

_
=
_

_
0.4597
0.6280
0
_

_
(8)
The other modes are treated in the same manner, which results in the following eigensolutions
=
_

1
0 0
0
2
0
0 0
3
_

_
=
_

_
1
2
_
3

3
_
0 0
0
1
2
_
3 +

3
_
0
0 0 6
_

_
=
_

(1)

(2)

(3)
_
=
_

_
0.4597 0.8881 0
0.6280 0.3251 0
0 0 1.4142
_

_
_

_
(9)
136 Chapter A Solutions to Exercises
Question 2:
Consider the vectors
v
1
=
_

_
1
0
0
_

_
, v
2
=
_

_
0

2
2
0
_

_
(10)
Upon insertion the following relations are seen to be valid
v
T
1
Mv
1
= 1 , v
T
2
Mv
2
= 1 , v
T
1
Mv
2
= 0 (11)
Hence, v
1
and v
2
are mutually M-orthonormal. However,
Kv
1
=
_

_
2
1
0
_

_
=
1
Mv
1
=
_

_
1
2
_
3

3
_
0
0
_

_
Kv
2
=
_

2
2

2
0
_

_
=
2
Mv
2
=
_

_
0

2
4
_
3 +

3
_
0
_

_
_

_
(12)
Hence, neither v
1
nor v
2
are eigenmodes.
Question 3:
The eigenvalues
(0)
j
have been calculated by (3). Next,
P
(1)
(
(1)
) = det
__
2
(1)
j
1
1 2 2
(1)
j
__
=
_
_
2
(1)
j
_
_
2 2
(1)
j
_

_
1
_
2
_
=
_
3 6
(1)
j
) + 2
_

(1)
j
_
2
_
= 0

(1)
j
=
_
1
2
_
3

3
_
, j = 1
1
2
_
3 +

3
_
, j = 2
(13)
P
(2)
(
(2)
) = det
__
2
(2)
j
__

(2)
j
= 2 (14)
A.1 Exercise 6.1 137
Then, (6-86) attains the following forms for m = 0 and m = 1
0
(0)
1

(1)
1

(0)
2

(1)
2

(0)
3

0
1
2
_
3

3
_

1
2
_
3

3
_

1
2
_
3 +

3
_

1
2
_
1 +

3
_
6 (15)
0
(1)
1

(2)
1

(1)
2

0
1
2
_
3

3
_
2
1
2
_
3 +

3
_
(16)
Hence, (6-86) holds for the considered example.
(0)
1
=
(1)
1
and
(0)
2
=
(1)
2
, because of the decoupling
of the 3rd eigenmode from the 1st and 2nd eigenmode.
138 Chapter A Solutions to Exercises
A.2 Exercise 6.2
Given the following mass- and stiffness matrices
M=
_
2 0
0 0
_
, K =
_
6 1
1 4
_
(1)
1. Calculate the eigenvalues and eigenmodes normalized to unit modal mass.
2. Perform a shift = 3 on Kand calculate the eigenvalues and eigenmodes of the new problem.
SOLUTIONS:
Question 1:
The generalized eigenvalue problem (6-5) is written on the form
_
2 0
0 0
__

(j)
1

(j)
2
_
=
1

j
_
6 1
1 4
__

(j)
1

(j)
2
_
, j = 1, 2 (2)
Obviously, (2) has the solution

2
= ,
(2)
=
_

(2)
1

(2)
2
_
=
_
0
1
_
(3)
Hence,
2
= is an eigenvalue. This is so because the mass matrix is singular, and has zeroes in the
last row and column. Since, the modal mass M
2
related to eigenmode
(2)
is zero, this mode cannot be
normalized in the usual manner. In Section 7.1 the problem of innite eigenvalues will be thoroughly
dealt with.
The other eigensolution may be obtained by the standard approach. Then, the eigenvalue problem (2) is
written on the form
_
6 2
1
1
1 4
__

(1)
1

(1)
2
_
=
_
0
0
_
(4)
The characteristic equation (6-6) becomes
det
__
6 2
1
1
1 4
__
= 4
_
6 2
1
_

_
1
_
2
= 23 8
1
= 0
1
=
23
8
(5)
A.2 Exercise 6.2 139
We choose to set
(1)
1
= 1, and determine
(1)
2
from the 1st equations. Then
_
6 2
1
_
1
(1)
2
= 0
(1)
2
=
1
4
(6)
The modal mass becomes
M
1
=

(1) T
M

(1)
=
_
1
1
4
_
T
_
2 0
0 0
__
1
1
4
_
= 2 (7)
Then, the eigenmode normalized to unit modal mass
(1)
becomes

(1)
=
1

M
1

(1)
=
1

2
_
1
1
4
_
=
_
0.7071
0.1768
_
(8)
Hence, the following eigensolutions have been obtained
=
_

1
0
0 0
0
_
=
_
23
8
0
0
_
, =
_

(1)

(2)
_
=
_
0.7071 0
0.1768 1
_
(9)
Question 2:
(6-103) attains the form

K = K3M=
_
6 1
1 4
_
3
_
2 0
0 0
_
=
_
0 1
1 4
_
(10)
The eigenvalue problem (6-102) becomes
__
0 1
1 4
_

j
_
2 0
0 0
___

(1)
1

(1)
2
_
=
_
0
0
_
(11)
For the same reason as in question 1,
2
= is still an eigenvalue with the eigenmode given by (3).
The characteristic equation for the 1st eigenvalue becomes, cf. (5)
det
__
2
1
1
1 4
__
= 4
_
2
1
_

_
1
_
2
= 18
1
= 0
1
=
1
8
_
=
23
8
3
_
(12)
140 Chapter A Solutions to Exercises
Let
(1)
1
= 1, and determine
(1)
2
from the 1st equations of (11)
_
2
1
_
1
(1)
2
= 0
(1)
2
=
1
4
(13)
which is identical to (6). Hence
(1)
is unaffected by the shift as expected, cf. the comments following
(6-103). The eigensolutions are unchanged as given by (9), save that
1
=
1
8
.
A.3 Exercise 6.3 141
A.3 Exercise 6.3
The eigensolutions with eigenmodes normalized to unit modal mass of a 2-dimensional generalized
eigenvalue problem are given as
=
_

1
0
0
2
_
=
_
1 0
0 4
_
, =
_

(1)

(2)
_
=
_
2
2

2
2

2
2

2
2
_
(1)
1. Calculate M and K.
SOLUTIONS:
Question 1:
From 6-15) and (6-18) follows
M=
_

1
_
T
m
1
(2)
K =
_

1
_
T
k
1
(3)
Since it is known that the eigenmodes have been normalized to unit modal mass it follows from (6-16)
and (6-18) that
m= I , k = (4)
The inverse of the modal matrix becomes

1
=
_
2
2

2
2

2
2

2
2
_
1
=
_
2
2

2
2

2
2

2
2
_
(5)
Of course, (5) can be obtained by direct calculation. Alternatively, the result may be obtained from the
following arguments. Notice that is orthonormal, so
1
=
T
, cf. (6-19). Additionally, the modal
matrix is symmetric, i.e. =
T
, from which the indicated result follows.
Insertion of (4) and (5) into (1) and (2) provides
M=
_
2
2

2
2

2
2

2
2
_
T
_
1 0
0 1
__
2
2

2
2

2
2

2
2
_
=
_
1 0
0 1
_
(6)
K =
_
2
2

2
2

2
2

2
2
_
T
_
1 0
0 4
__
2
2

2
2

2
2

2
2
_
=
_
2.5 1.5
1.5 2.5
_
(7)
Actually, since M= I, the considered eigenvalue problem is of the special type, cf. the remarks subse-
quent to (6-5).
142 Chapter A Solutions to Exercises
A.4 Exercise 6.4: Theory
Gauss Elimination
Given a symmetric matrix K of the dimension n n with the components K
ij
= K
ji
. Consider the
static equilibrium equation
Kx = f
_

_
K
11
K
12
K
13
K
1n
K
21
K
22
K
23
K
2n
K
31
K
32
K
33
K
3n
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
K
n1
K
n2
K
n3
K
nn
_

_
_

_
x
1
x
2
x
3
.
.
.
x
n
_

_
=
_

_
f
1
f
2
f
3
.
.
.
f
n
_

_
(1)
In order to have a one at the 1st element of the main diagonal of the coefcient matrix the 1st equation is
divided with K
11
resulting in
_

_
1 K
(1)
12
K
(1)
13
K
(1)
1n
K
21
K
22
K
23
K
2n
K
31
K
32
K
33
K
3n
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
K
n1
K
n2
K
n3
K
nn
_

_
_

_
x
1
x
2
x
3
.
.
.
x
n
_

_
=
_

_
f
(1)
1
f
2
f
3
.
.
.
f
n
_

_
(2)
where
K
(1)
1j
=
K
1j
K
11
, j = 2, . . . , n
f
(1)
1
=
f
1
K
11
_

_
(3)
In turn, the 1st equation of (2) is multiplied with K
i1
, i = 2, . . . , n, and the resulting equation is
withdrawn from the ith equation. This will produce a zero in the ith row of the 1st column, corresponding
to the following system of equations
_

_
1 K
(1)
12
K
(1)
13
K
(1)
1n
0 K
(1)
22
K
(1)
23
K
(1)
2n
0 K
(1)
32
K
(1)
33
K
(1)
3n
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 K
(1)
n2
K
(1)
n3
K
(1)
nn
_

_
_

_
x
1
x
2
x
3
.
.
.
x
n
_

_
=
_

_
f
(1)
1
f
(1)
2
f
(1)
3
.
.
.
f
(1)
n
_

_
(4)
A.4 Exercise 6.4: Theory 143
where
K
(1)
ij
= K
ij
K
i1
K
(1)
1j
, i = 2, . . . , n , j = 2, . . . , n
f
(1)
i
= f
i
K
i1
f
(1)
1
, i = 2, . . . , n
_

_
(5)
Next, the 2nd equation is divided with K
(1)
22
, so the coefcient in the 2nd component in the main diagonal
becomes equal to 1. In turn, the resulting 2nd equation is multiplied with K
(1)
i2
, i = 3, . . . , n, and the
resulting equation is withdrawn from the ith equation. This will produce zeros in the ith row of the 2nd
column below the main diagonal, corresponding to the system of equations
_

_
1 K
(1)
12
K
(1)
13
K
(1)
1n
0 1 K
(2)
23
K
(2)
2n
0 0 K
(2)
33
K
(2)
3n
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 K
(2)
n3
K
(2)
nn
_

_
_

_
x
1
x
2
x
3
.
.
.
x
n
_

_
=
_

_
f
(1)
1
f
(2)
2
f
(2)
3
.
.
.
f
(2)
n
_

_
(6)
where
K
(2)
2j
=
K
(1)
2j
K
(1)
22
, j = 3, . . . , n
f
(2)
2
=
f
(1)
2
K
(1)
22
K
(2)
ij
= K
(1)
ij
K
(1)
i2
K
(2)
2j
, i = 3, . . . , n , j = 3, . . . , n
f
(2)
i
= f
(1)
i
K
(1)
i2
f
(2)
2
, i = 3, . . . , n
_

_
(7)
The process of producing ones in the main diagonal, and zeros below the main diagonal is continued for
all n columns resulting in the following system of linear equations
_

_
1 K
(1)
12
K
(1)
13
K
(1)
1n
0 1 K
(2)
23
K
(2)
2n
0 0 1 K
(3)
3n
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 1
_

_
_

_
x
1
x
2
x
3
.
.
.
x
n
_

_
=
_

_
f
(1)
1
f
(2)
2
f
(3)
3
.
.
.
f
(n)
n
_

_
(8)
Next, (1) are solved simultaneous with n righthand sides, where the loads form the columns in a unit
matrix. The n solution vectors X = [x
1
x
2
x
3
x
n
] are organized in the matrix equationi.e.
144 Chapter A Solutions to Exercises
KX = I
_

_
K
11
K
12
K
13
K
1n
K
21
K
22
K
23
K
2n
K
31
K
32
K
33
K
3n
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
K
n1
K
n2
K
n3
K
nn
_

_
_

_
x
11
x
12
x
13
x
1n
x
21
x
22
x
23
x
2n
x
31
x
32
x
33
x
3n
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
x
n1
x
n2
x
n3
x
nn
_

_
=
_

_
1 0 0 0
0 1 0 0
0 0 1 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 1
_

_
(9)
Following the steps (2)-(8), simultaneous Gauss elimination of the coefcient matrix and the n righthand
sides provides the following equivalent matrix equation
_

_
1 K
(1)
12
K
(1)
13
K
(1)
1n
0 1 K
(2)
23
K
(2)
2n
0 0 1 K
(3)
3n
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 1
_

_
_

_
x
11
x
12
x
13
x
1n
x
21
x
22
x
23
x
2n
x
31
x
32
x
33
x
3n
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
x
n1
x
n2
x
n3
x
nn
_

_
=
_

_
f
(1)
11
0 0 0
f
(2)
21
f
(2)
22
0 0
f
(3)
31
f
(3)
32
f
(3)
33
0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
f
(n)
n1
f
(n)
n2
f
(n)
n3
f
(n)
nn
_

_
(10)
As indicated the identity matrix on the righthand side is transformed into a lower triangular matrix F.
In the program the triangulation of the matrix K and the calculation of the matrix F is performed in a
matrix A of the dimension n 2n, which at the entry of the triangulation loop has the form
A =
_
KI

(11)
At exit from the triangulation loop the matrix A stores the triangulized stiffness at the position originally
occupied by K, and the matrix F at the position occupied by the unit matrix.
Calculation of L, D and (S
1
)
T
Using the Gauss factorization of the stiffness matrix (9) may be written, cf. (6-62)
KX = LDL
T
X = I
L
T
X = ID
1
L
1
= D
1
L
1
= F
(12)
Upon comparison of (10) and (12) it becomes clear that L
T
is stored as the coefcient matrix in (10),
whereas the righthand sides store the matrix F = D
1
L
1
. Since, L
1
is a lower triangular matrix with
ones in the main diagonal, the main diagonal must contain the main diagonal of D
1
. Hence,
A.4 Exercise 6.4: Theory 145
D
1
=
_

_
f
(1)
11
0 0 0
0 f
(2)
22
0 0
0 0 f
(3)
33
0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 f
(n)
nn
_

_
D =
_

_
1
f
(1)
11
0 0 0
0
1
f
(2)
22
0 0
0 0
1
f
(3)
33
0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0
1
f
(n)
nn
_

_
(13)
Finally, cf. (6-114)
S = LD
1
2
S
1
= D

1
2
L
1
= D
1
2
D
1
L
1
= D
1
2
F (14)
The matrices D and (S
1
)
T
are retrieved from the righthand sides of (10) as stored in the matrix F
according to the indicated relations at the end of the program.
146 Chapter A Solutions to Exercises
A.5 Exercise 7.1
Given the following mass- and stiffness matrices
M=
_

_
0 0 0
0 2 1
0 1 1
_

_
, K =
_

_
6 1 0
1 4 1
0 1 2
_

_
(1)
1. Perform a static condensation by the conventional procedure based on (7-5), (7-6), and next by
Rayleigh-Ritz analysis with the Ritz basis given by (7-62).
SOLUTIONS:
Question 1:
The 1st and 3rd row, and next the 1st and 3rd column of the are interchanged, which brings the matrices
of the general eigenvalue problem on the following form, cf. (7-1), (7-2)
M=
_
_
M
11
M
12
M
21
M
22
_
_
=
_

_
1 1 0
1 2 0
0 0 0
_

_
K =
_
_
K
11
K
12
K
21
K
22
_
_
=
_

_
2 1 0
1 4 1
0 1 6
_

_
_

_
(2)
Notice that the interchange of two rows or two columns may change the sign, but not the numerical value,
of the characteristic polynomial. However, since the characteristic polynomial is zero at the eigenvalue
the determination of the eigenvalues is unaffected by the sign change.
The reduced stiffness matrix (7-7) becomes

K
11
= K
11
K
12
K
1
22
K
21
=
_
2 1
1 4
_

_
0
1
_
[6]
1
_
0 1
_
=
_
2 1
1
23
6
_
(3)
The reduced eigenvalue problem (7-6) is solved
_
2 1
1
23
6
_

11
=
_
1 1
1 2
_

11

1
(4)
A.5 Exercise 7.1 147
The eigensolutions with eigenmodes normalized to modal mass 1 with respect to M
11
becomes

1
=
_

1
0
0
2
_
=
_
0.7325 0
0 9.1008
_
,
11
=
_

(1)
1

(2)
1

=
_
0.5320 1.3103
0.3892 0.9212
_
(5)
From (7-5) follows

21
=
_

(1)
2

(2)
2

= [6]
1
_
0 1
_
_
0.5320 1.3103
0.3892 0.9212
_
=
_
0.0649 0.1535
_
(6)
From (7-10) and (7-11) follows

2
= [
3
] = [] ,
12
=
_
0
0
_
,
22
= [1] (7)
After interchanging the degrees of freedom back to the original order (the 1st components of
11
and

12
are placed as the 3rd component of
(j)
, and the components of
21
and
22
are placed as the 1st
component
(j)
), the following eigensolution is obtained
=
_

1
0 0
0
2
0
0 0
3
_

_
=
_

_
0.7325 0 0
0 9.1008 0
0 0
_

_
=
_

(1)

(2)

(3)
_
=
_

_
0.0649 0.1535 1
0.3892 0.9212 0
0.5320 1.3103 0
_

_
_

_
(8)
Next, the same problem is solved by means of Rayleigh-Ritz analysis. The Ritz basis is constructed from
(7-62)

2
=
_

_
2 1 0
1 4 1
0 1 6
_

_
1
_

_
1 0
0 1
0 0
_

_
=
_

_
0.5750 0.1500
0.1500 0.3000
0.0250 0.0500
_

_
(9)
The projected mass and stiffness matrices become, cf. (7-63),(7-64)
148 Chapter A Solutions to Exercises

M=
_

_
0.5750 0.1500
0.1500 0.3000
0.0250 0.0500
_

_
T
_

_
1 1 0
1 2 0
0 0 0
_

_
_

_
0.5750 0.1500
0.1500 0.3000
0.0250 0.0500
_

_
=
_
0.548125 0.371250
0.371250 0.292500
_

K =
_

_
0.5750 0.1500
0.1500 0.3000
0.0250 0.0500
_

_
T
_

_
2 1 0
1 4 1
0 1 6
_

_
_

_
0.5750 0.1500
0.1500 0.3000
0.0250 0.0500
_

_
=
_
0.5750 0.1500
0.1500 0.3000
_
_

_
(10)
The eigensolutions to the eigenvalue problem dened by

Mand

K with modal masses normalized to 1
with respect to

Mbecome, cf. Box. 7.2
R =
_

1
0
0
2
_
=
_
0.7325 0
0 9.1008
_
, Q =
_
q
(1)
q
(2)
_
=
_
0.6748 3.5418
0.9599 4.8415
_
(11)
The solutions for the eigenvectors become, cf. (7-51)

=
_

(1)

(2)
_
=
_

_
0.5750 0.1500
0.1500 0.3000
0.0250 0.0500
_

_
_
0.6748 3.5418
0.9599 4.8415
_
=
_

_
0.5320 1.3103
0.3892 0.9212
0.0649 0.1535
_

_
(12)
As seen the eigenvalues (11) are identical to the lowest two eigenvalues from the static condensation
procedure (8). The two lowest eigenmodes in (8) are retrieved from (12) upon interchanging the 1st and
3rd components in the latter.
A.6 Exercise 7.2 149
A.6 Exercise 7.2
Given the following mass- and stiffness matrices
M=
_

_
2 0 0
0 2 1
0 1 1
_

_
, K =
_

_
6 1 0
1 4 1
0 1 2
_

_
(1)
1. Calculate approximate eigenvalues and eigenmodes by Rayleigh-Ritz analysis using the following
Ritz basis
= [
(1)

(2)
] =
_

_
1 1
1 1
1 1
_

_
SOLUTIONS:
Question 1:
The projected mass and stiffness matrices become, cf. (7-45)

M=
_

_
1 1
1 1
1 1
_

_
T
_

_
2 0 0
0 2 1
0 1 1
_

_
_

_
1 1
1 1
1 1
_

_
=
_
7 1
1 3
_

K =
_

_
1 1
1 1
1 1
_

_
T
_

_
6 1 0
1 4 1
0 1 2
_

_
_

_
1 1
1 1
1 1
_

_
=
_
8 4
4 16
_
_

_
(2)
The eigensolutions to the eigenvalue problem dened by

Mand

K with modal masses normalized to 1
with respect to

Mbecome, cf. Box. 7.2
R =
_

1
0
0
2
_
=
_
1.0459 0
0 5.3541
_
, Q =
_
q
(1)
q
(2)
_
=
_
0.3864 0.0269
0.0887 0.5849
_
(3)
The solutions for the eigenvectors become, cf. (7-51)

=
_

(1)

(2)
_
=
_

_
1 1
1 1
1 1
_

_
_
0.3864 0.0269
0.0887 0.5849
_
=
_

_
0.2976 0.5580
0.4751 0.6118
0.2976 0.5580
_

_
(4)
150 Chapter A Solutions to Exercises
The exact eigensolutions can be shown to be
=
_

1
0 0
0
2
0
0 0
3
_

_
=
_

_
0.7245 0 0
0 2.9652 0
0 0 9.3104
_

_
=
_

(1)

(2)

(3)
_
=
_

_
0.0853 0.6981 0.0458
0.3884 0.0486 0.5778
0.5251 0.1997 0.8149
_

_
_

_
(5)
As seen
1
and
2
are upperbounds to the exact eigenvalues
1
and
2
, and
2
is smaller than
3
, cf.
(7-57). The estimates of the eigenmodes are not useful. Not even the signs of the components of

(2)
are correctly represented. These poor results are obtained because the chosen Ritz basis is far away from
the basis spanned by
(1)
and
(2)
.
A.7 Exercise 7.3 151
A.7 Exercise 7.3
Consider the mass- and stiffness matrices in Exercise 7.2, and let
v =
_

_
1
1
1
_

_
(1)
1. Calculate the vector

(1)
= K
1
Mv, and next

1
=
_

(1)
_
, as approximate solutions to the
lowest eigenmode and eigenvalue.
2. Establish the error bound for the obtained approximation to the lowest eigenvalue.
SOLUTIONS:
Question 1:
From the given formula we calculate

(1)
=
_

_
6 1 0
1 4 1
0 1 2
_

_
1
_

_
2 0 0
0 2 1
0 1 1
_

_
_

_
1
1
1
_

_
=
_

_
0.55
1.30
1.65
_

_
(2)
The Rayleigh quotient based on

(1)
becomes, cf. (7-25)

1
=
_

(1)
_
=
_

_
0.55
1.30
1.65
_

_
T
_

_
6 1 0
1 4 1
0 1 2
_

_
_

_
0.55
1.30
1.65
_

_
_

_
0.55
1.30
1.65
_

_
T
_

_
2 0 0
0 2 1
0 1 1
_

_
_

_
0.55
1.30
1.65
_

_
= 0.7547 (3)
The obtained un-normalized eigenmode

(1)
resembles
(1)
much better than the corresponding ap-
proximation for

(1)
indicated in eq. (4) of Exercise 7.2. As a consequence the obtained eigenvalue

1
is a much better approximation to the exact eigenvalue
1
= 0.7245 given in eq. (5) of Exercise 7.2,
than the approximation
1
= 1.0459 obtained by the Rayleigh-Ritz analysis. The indicated formula for
obtaining

(1)
represents the 1st iteration step in the socalled inverse vector iteration algorithm described
in Section 8.2
152 Chapter A Solutions to Exercises
Question 2:
From (2) follows that

(1)

= 2.1714 (4)
The error vector becomes, cf. (7-79)

1
=
_
_
_
_

_
6 1 0
1 4 1
0 1 2
_

_
0.7547
_

_
2 0 0
0 2 1
0 1 1
_

_
_
_
_
_

_
0.55
1.30
1.65
_

_
=
_

_
1.1698
0.2075
0.2264
_

= 1.2095 (5)
The lowest eigenvalue of Mcan be shown to be

1
= 0.3820 (6)
//
Then, from (7-85) the following bound is obtained
|
1

1
|
1
0.3820

1.2095
2.1714
= 2.1714 (7)
(7-95)
Actually, |
1

1
| = |0.7245 0.7547| = 0.0302. Hence, the bounding method provides a rather crude
upperbound in the present case.
A.8 Exercise 8.1 153
A.8 Exercise 8.1
Given the following mass- and stiffness matrices
M=
_

_
2 0 0
0 2 1
0 1 1
_

_
, K =
_

_
6 1 0
1 4 1
0 1 2
_

_
(1)
1. Perform two inverse iterations, and then calculate an approximation to
1
.
2. Perform two forward iterations, and then calculate an approximation to
3
.
SOLUTIONS:
Question 1:
The calculations are performed with the start vector

0
=
_

_
1
1
1
_

_
(2)
The matrix A becomes, cf. (8-4)
A =
_

_
6 1 0
1 4 1
0 1 2
_

_
1
_

_
2 0 0
0 2 1
0 1 1
_

_
=
_

_
0.350 0.125 0.075
0.100 0.750 0.450
0.050 0.875 0.725
_

_
(3)
At the 1st and 2nd iteration steps the following calculations are performed, cf. Box 8.1
_

1
=
_

_
0.350 0.125 0.075
0.100 0.750 0.450
0.050 0.875 0.725
_

_
_

_
1
1
1
_

_
=
_

_
0.55
1.30
1.65
_

T
1
M

1
= 10.9975

1
=
1

10.9975
_

_
0.55
1.30
1.65
_

_
=
_

_
0.16585
0.39201
0.49755
_

_
(4)
154 Chapter A Solutions to Exercises
_

2
=
_

_
0.350 0.125 0.075
0.100 0.750 0.450
0.050 0.875 0.725
_

_
_

_
0.16585
0.39201
0.49755
_

_
=
_

_
0.14436
0.53449
0.71202
_

T
2
M

2
= 1.8812

2
=
1

1.8812
_

_
0.14436
0.53449
0.71202
_

_
=
_

_
0.10526
0.38970
0.51914
_

_
(5)
Since,
2
has been normalized to unit modal mass, so
T
2
M
2
= 1, an approximation is obtained from
the following Rayleigh fraction, cf. (7-25)

1
=
T
2
K
2
=
_

_
0.10526
0.38970
0.51914
_

_
T
_

_
6 1 0
1 4 1
0 1 2
_

_
_

_
0.10526
0.38970
0.51914
_

_
= 0.72629 (6)
The exact solution is
1
= 0.72446.
Question 2:
The calculations are performed with the start vector (2).
The matrix B becomes, cf. (8-35)
B =
_

_
2 0 0
0 2 1
0 1 1
_

_
1
_

_
6 1 0
1 4 1
0 1 2
_

_
=
_

_
3.0 0.5 0.0
1.0 5.0 3.0
1.0 6.0 5.0
_

_
(7)
At the 1st and 2nd iteration steps the following calculations are performed, cf. Box 8.3
_

1
=
_

_
3.0 0.5 0.0
1.0 5.0 3.0
1.0 6.0 5.0
_

_
_

_
1
1
1
_

_
=
_

_
2.5
1.0
0.0
_

T
1
M

1
= 14.5

1
=
1

14.5
_

_
2.5
1.0
0.0
_

_
=
_

_
0.65653
0.26261
0.00000
_

_
(8)
A.8 Exercise 8.1 155
_

2
=
_

_
3.0 0.5 0.0
1.0 5.0 3.0
1.0 6.0 5.0
_

_
_

_
0.65653
0.26261
0.00000
_

_
=
_

_
1.83829
0.65653
0.91915
_

T
1
M

1
= 7.25862

1
=
1

7.25862
_

_
1.83829
0.65653
0.91915
_

_
=
_

_
0.68232
0.24369
0.34116
_

_
(9)
Again,
2
has been normalized to unit modal mass, so
T
2
M
2
= 1, an approximation is obtained
from the following Rayleigh fraction

3
=
T
2
K
2
=
_

_
0.68232
0.24369
0.34116
_

_
T
_

_
6 1 0
1 4 1
0 1 2
_

_
_

_
0.68232
0.24369
0.34116
_

_
= 3.09739 (10)
The exact solution is
3
= 9.31036. The poor result is obtained because
2
is a rather bad approximation
to
(3)
.
156 Chapter A Solutions to Exercises
A.9 Exercise 8.2
Given the following mass- and stiffness matrices
M=
_

_
1
2
0 0
0 1 0
0 0
1
2
_

_
, K =
_

_
2 1 0
1 4 1
0 1 2
_

_
(1)
The eigenmodes
(1)
are
(3)
are known to be, cf. (6-54)

(1)
=
_

2
2

2
2

2
2
_

_
,
(3)
=
_

2
2

2
2

2
2
_

_
(2)
1. Calculate
(2)
by means of Gram-Schmidt orthogonalization, and calculate all eigenvalues.
SOLUTIONS:
Question 1:
Consider an arbitrary vector
x =
_

_
1
2
2
_

_
(3)
Since,
(1)
,
(2)
and
(3)
form a vector basis, we may write
x = c
1

(1)
+ c
2

(2)
+ c
3

(3)
(4)
In order to determine the expansion coefcient c
j
, (4) is premultiplied with
(j) T
M, and the M-
othonormality of the eigenmodes are used, i.e. that
(i) T
M
(j)
=
ij
. For j = 1, 3 the following
results are obtained
c
j
=
(j) T
Mx =
_

_
_

2
2

2
2

2
2
_

_
T _

_
1
2
0 0
0 1 0
0 0
1
2
_

_
_

_
1
2
2
_

_
=
7

2
4
, j = 1
_

2
2

2
2

2
2
_

_
T _

_
1
2
0 0
0 1 0
0 0
1
2
_

_
_

_
1
2
2
_

_
=

2
4
, j = 3
(5)
A.9 Exercise 8.2 157
Then, from (3), (4) and (5) follows
c
2

(2)
=
_

_
1
2
2
_

2
4

_

2
2

2
2

2
2
_

_
+

2
4

_

2
2

2
2

2
2
_

_
=
_

_
0.5
0.0
0.5
_

_

c
2
2
=
_

_
0.5
0.0
0.5
_

_
T _

_
1
2
0 0
0 1 0
0 0
1
2
_

_
_

_
0.5
0.0
0.5
_

_
= 0.25

(2)
=
1
0.5

_

_
0.5
0.0
0.5
_

_
=
_

_
1
0
1
_

_
(6)
Hence, the modal matrix becomes, cf. (6-54)
=
_

(1)

(2)

(3)
_
=
_

2
2
1

2
2

2
2
0

2
2

2
2
1

2
2
_

_
(7)
Given that all eigenmodes have been normalized to unit modal mass the eigenvalues may be calculated
from the Rayleigh quotient, cf. (7-25)
= M =
_

_
2 0 0
0 4 0
0 0 6
_

_
(8)
Generally, if n 1 eigenmodes to a general eigenvalue problem is known the remaining eigenmode can
lways be determined solely from the M-orthonormality conditions.
158 Chapter A Solutions to Exercises
A.10 Exercise 9.3
Given the following mass- and stiffness matrices
M=
_

_
2 0 0
0 2 1
0 1 1
_

_
, K =
_

_
6 1 0
1 4 1
0 1 2
_

_
(1)
1. Perform an initial transformation to a special eigenvalue problem, and calculate the eigenvalues and
eigenvectors by means of standard Jacobi iteration.
2. Calculate the eigenvalues and normalized eigenvectors by means of general Jacobi iteration oper-
ating on the original general eigenvalue problem.
SOLUTIONS:
Question 1:
Initially, a Choleski decomposition of the mass matrix is performed, cf. (6-109). As indicated by the
algorithm in Box 6.3 the following calculations are performed
s
11
=

m
11
=

2
s
21
=
m
21
s
11
=
0

2
= 0
s
31
=
m
31
s
11
=
0

2
= 0
s
22
=
_
m
22
s
2
21
=
_
2 0
2
=

2
s
32
=
1
s
22
(m
32
s
31
s
21
) =
1

2
(1 0 0) =

2
2
s
33
=
_
m
33
s
2
32
s
2
31
=

1
_

2
2
_
2
0
2
=

2
2
_

_
(2)
Hence, the matrices S and S
1
become
S =
_

2 0 0
0

2 0
0

2
2

2
2
_

_

S
1
=
_

2
2
0 0
0

2
2
0
0

2
2

2
_

_
(3)
A.10 Exercise 9.3 159
The initial value of the updated similarity transformation matrix and the stiffness matrix becomes, cf.
(6-112), (9-4)

0
= (S
1
)
T
=
_

2
2
0 0
0

2
2

2
2
0 0

2
_

_
K
0
=

K = S
1
K(S
1
)
T
=
_

2
2
0 0
0

2
2
0
0

2
2

2
_

_
_

_
6 1 0
1 4 1
0 1 2
_

_
_

2
2
0 0
0

2
2

2
2
0 0

2
_

_
=
_

_
3.0 0.5 0.5
0.5 2.0 3.0
0.5 3.0 8.0
_

_
_

_
(4)
In the 1st sweep the following calculations are performed for (i, j) = (1, 2) :
_

_
=
1
2
arctan
_
2 (0.5)
3.0 2.0
_
= 0.3927
_
cos = 0.9239
sin = 0.3827
P
0
=
_

_
0.9239 0.3827 0
0.3827 0.9239 0
0 0 1
_

1
=
0
P
0
=
_

_
0.6533 0.2706 0
0.2706 0.6533 0.7071
0 0 1.4142
_

_
K
1
= P
T
0
K
0
P
0
=
_

_
3.2071 0 1.6070
0 1.7929 2.5803
1.6070 2.5803 8.0000
_

_
(5)
160 Chapter A Solutions to Exercises
Next, the calculations are performed for (i, j) = (1, 3) :
_

_
=
1
2
arctan
_
2 1.6070
3.2071 8.0000
_
= 0.2958
_
cos = 0.9566
sin = 0.2915
P
1
=
_

_
0.9566 0 0.2915
0 1 0
0.2915 0 0.9566
_

2
=
1
P
1
=
_

_
0.6249 0.2706 0.1904
0.0527 0.6533 0.7553
0.4122 0 1.3528
_

_
K
2
= P
T
1
K
1
P
1
=
_

_
2.7165 0.7521 0
0.7521 1.7929 2.4682
0 2.4682 8.4906
_

_
(6)
Finally, to end the 1st sweep the calculations are performed for (i, j) = (2, 3) :
_

_
=
1
2
arctan
_
2 (2.4682)
1.7929 8.4906
_
= 0.3176
_
cos = 0.9500
sin = 0.3123
P
2
=
_

_
1 0 0
0 0.9500 0.3123
0 0.3123 0.9500
_

3
=
2
P
2
=
_

_
0.6249 0.3165 0.0964
0.0527 0.3848 0.9215
0.4122 0.4224 1.2852
_

_
K
3
= P
T
2
K
2
P
2
=
_

_
2.7165 0.7145 0.2349
0.7145 0.9816 0
0.2349 0 9.3019
_

_
(7)
At the end of the 2nd and 3rd sweep the following estimates are obtained for the modal matrix and the
eigenvalues
_

6
=
_

_
0.6980 0.0862 0.0729
0.0481 0.3885 0.9202
0.2004 0.5249 1.2978
_

_
, K
6
=
_

_
2.9652 0.0028 0.0000
0.0028 0.7245 0
0.0000 0 9.3104
_

9
=
_

_
0.6981 0.0853 0.0729
0.0486 0.3884 0.9202
0.1997 0.5251 1.2978
_

_
, K
9
=
_

_
2.9652 0.0000 0.0000
0.0000 0.7245 0
0.0000 0 9.3104
_

_
(8)
As seen the eigenmodes are stored column-wise in according to the permutation (j
1
, j
2
, j
3
) = (2, 1, 3),
cf. Box 9.2.
A.10 Exercise 9.3 161
Question 2:
The following initializations are introduced, cf. Box 9.4
M
0
= M=
_

_
2 0 0
0 2 1
0 1 1
_

_
, K
0
= K =
_

_
6 1 0
1 4 1
0 1 2
_

_
,
0
=
_

_
1 0 0
0 1 0
0 0 1
_

_
(9)
In the 1st sweep the following calculations are performed for (i, j) = (1, 2) :
_

_
a =
4 0 2 (1)
6 2 2 4
= 0.5
b =
6 0 2 (1)
6 2 2 4
= 0.5
_

_
= 0.4142
= 0.4142
P
0
=
_

_
1 0.4142 0
0.4142 1 0
0 0 1
_

_
,
1
=
0
P
0
=
_

_
1 0.4142 0
0.4142 1 0
0 0 1
_

_
M
1
= P
T
0
M
0
P
0
=
_

_
2.3431 0 0.4142
0 2.3431 1
0.4142 1 1
_

_
K
1
= P
T
0
K
0
P
0
=
_

_
7.5147 0 0.4142
0 4.2010 1
0.4142 1 2
_

_
(10)
162 Chapter A Solutions to Exercises
Next, the calculations are performed for (i, j) = (1, 3) :
_

_
a =
2 (0.4142) 1 0.4142
7.5147 1 2.3431 2
= 0.4393
b =
7.5147 (0.4142) 2.3431 0.4142
7.5147 1 2.3431 2
= 1.4437
_

_
= 1.0023
= 0.3050
P
1
=
_

_
1 0 0.3050
0 1 0
1.0023 0 1
_

_
,
2
=
1
P
1
=
_

_
1 0.4142 0.3050
0.4142 1 0.1263
1.0023 0 1
_

_
M
2
= P
T
1
M
1
P
1
=
_

_
2.5174 1.0023 0
1.0023 2.3431 1
0 1 1.4707
_

_
K
2
= P
T
1
K
1
P
1
=
_

_
10.3542 1.0023 0
1.0023 4.2010 1
0 1 2.4465
_

_
(11)
Finally, to end the 1st sweep the calculations are performed for (i, j) = (2, 3) :
_

_
a =
2.4465 1 1.4707 (1)
4.2010 1.4707 2.3407 2.4465
= 8.7838
b =
4.2010 1 2.3431 (1)
4.2010 1.4707 2.3431 2.4465
= 14.6745
_

_
= 1.2369
= 0.7404
P
2
=
_

_
1 0 0
0 1 0.7404
0 1.2369 1
_

_
,
3
=
2
P
2
=
_

_
1 0.7915 0.0016
0.4142 0.8437 0.8667
1.0023 1.2369 1
_

_
M
3
= P
T
2
M
2
P
2
=
_

_
2.5174 1.0023 0.7421
1.0023 2.1193 0
0.7421 0 4.2357
_

_
K
3
= P
T
2
K
2
P
2
=
_

_
10.3542 1.0023 0.7421
1.0023 10.4174 0
0.7421 0 3.2684
_

_
(12)
At the end of the 2nd and 3rd sweep the following estimates are obtained for the modal matrix and the
transformed mass and stiffness matrices
A.10 Exercise 9.3 163
_

6
=
_

_
1.6779 0.0129 0.1846
0.0350 1.3400 0.8282
0.3741 1.9075 1.1188
_

_
M
6
=
_

_
5.7469 0.1959 0.0118
0.1959 2.1179 0
0.0118 0 4.5448
_

_
, K
6
=
_

_
17.0856 0.1959 0.0118
0.1959 19.6067 0
0.0118 0 3.2925
_

9
=
_

_
1.6780 0.1060 0.1819
0.1169 1.3379 0.8281
0.4801 1.8869 1.1195
_

_
M
9
=
_

_
5.7769 0.0000 0.0000
0.0000 2.1139 0
0.0000 0 4.5448
_

_
, K
9
=
_

_
17.1296 0.0000 0.0000
0.0000 19.6810 0
0.0000 0 3.2925
_

_
(13)
Presuming that the process has converged after the 3rd sweep the eigenvalues and normalized eigenmodes
are next retrieved by the following calculations, cf. Box. 9.4
_

_
m= M
9
=
_

_
5.7769 0.0000 0.0000
0.0000 2.1139 0
0.0000 0 4.5448
_

_
, m

1
2
=
_

_
0.4161 0 0
0 0.6878 0
0 0 0.4691
_

_

=
_

2
0 0
0
3
0
0 0
1
_

_
= M
1
9
K
9
=
_

_
2.9652 0.0000 0.0000
0.0000 9.3104 0.0000
0.0000 0.0000 0.7245
_

_
=
_

(2)

(3)

(1)

=
9
m

1
2
=
_

_
0.6981 0.0729 0.0853
0.0486 0.9202 0.3884
0.1997 1.2978 0.5251
_

_
(14)
The solutions (14) are identical to those obtained in (8) for the special Jacobi iteration algorithm. In the
present case the eigenmodes are stored column-wise in according to the permutation (j
1
, j
2
, j
3
) =
(2, 3, 1), cf. Box 9.4. The convergence rates of the special nd the general Jacobi iteration algorithm
seems to be rather alike.
164 Chapter A Solutions to Exercises
A.11 Exercise 9.6
Given the following mass- and stiffness matrices
M=
_

_
2 0 0
0 2 1
0 1 1
_

_
, K =
_

_
6 1 0
1 4 1
0 1 2
_

_
(1)
1. Calculate the eigenvalues and normalized eigenvectors by means of QR iteration.
SOLUTIONS:
Question 1:
At rst, as indicated in Box 9.7 an initial similarity transformation of the indicated general eigenvalue
problem into a special eigenvalue problem is performed with the similarity transformation matrix P =
_
S
1
_
T
, where S is a solution to M= SS
T
. In case S is determined from an Choleski decomposition of
the mass matrix the initial updated transformation and stiffness matrices have been calculated in Exercise
9.3, eq. (4). The result becomes

1
= (S
1
)
T
=
_

2
2
0 0
0

2
2

2
2
0 0

2
_

_
K
1
= S
1
K(S
1
)
T
=
_

2
2
0 0
0

2
2
0
0

2
2

2
_

_
_

_
6 1 0
1 4 1
0 1 2
_

_
_

2
2
0 0
0

2
2

2
2
0 0

2
_

_
=
_

_
3.0 0.5 0.5
0.5 2.0 3.0
0.5 3.0 8.0
_

_
_

_
(2)
As seen the original three diagonal structure of K is destroyed by the similarity. This may be reestab-
lished by means of a Householder transformation as described in Section 9.4. However, this will be
omitted here, so the QR-iteration is performed on the full matrix K
1
.
At the determination of q
1
and r
11
in the 1st QR iteration the following calculations are performed, cf.
(9-67)
_

_
k
1
=
_

_
3.0
0.5
0.5
_

_
, r
11
=

_
3.0
0.5
0.5
_

= 3.0822
q
1
=
1
3.0822
_

_
3.0
0.5
0.5
_

_
=
_

_
0.9733
0.1622
0.1622
_

_
(3)
A.11 Exercise 9.6 165
q
2
and r
12
, r
22
are determined from the following calculations, cf. (9-68)
_

_
k
2
=
_

_
0.5
2.0
3.0
_

_
, r
12
=
_

_
0.9733
0.1622
0.1622
_

_
T
_

_
0.5
2.0
3.0
_

_
= 1.2978
r
22
=

_
0.5
2.0
3.0
_

_
+ 1.2978
_

_
0.9733
0.1622
0.1622
_

= 3.4009
q
2
=
1
3.4009
_
_
_
_

_
0.5
2.0
3.0
_

_
+ 1.2978
_

_
0.9733
0.1622
0.1622
_

_
_
_
_
=
_

_
0.2244
0.5262
0.8202
_

_
(4)
q
3
and r
13
, r
23
, r
33
are determined from the following calculations, cf. (9-69)
_

_
k
3
=
_

_
0.5
3.0
8.0
_

_
, r
13
= q
T
1
k
3
= 2.2711 , r
23
= q
T
2
k
3
= 8.0282
r
33
=

k
3
2.2711q
1
+ 8.0282q
2

= 1.9080
q
3
=
1
1.9080
_
k
3
2.2711q
1
+ 8.0282q
2
_
=
_

_
0.0477
0.8348
0.5486
_

_
(5)
166 Chapter A Solutions to Exercises
Then, at the end of the 1st iteration the following matrices are obtained
_

_
Q
1
=
_

_
0.9733 0.2244 0.0477
0.1622 0.5662 0.8348
0.1622 0.8202 0.5486
_

_
R
1
=
_

_
3.0822 1.2978 2.2711
0 3.4009 8.0282
0 0 1.9080
_

_
_

2
=
1
Q
1
=
_

_
0.6882 0.1587 0.0337
0.2294 0.9521 0.2024
0.2294 1.1600 0.7758
_

_
K
2
= R
1
Q
1
=
_

_
3.5789 1.8540 0.3095
1.8540 8.3744 1.5650
0.3095 1.5650 1.0466
_

_
_

_
(6)
The corresponding matrices after the 2nd and 3rd iteration become
_

_
Q
2
=
_

_
0.8853 0.4648 0.0115
0.4586 0.8689 0.1861
0.0766 0.1700 0.9825
_

_
R
2
=
_

_
4.0425 5.6020 1.0719
0 6.6809 1.3940
0 0 0.7405
_

_
_

3
=
2
Q
2
=
_

_
0.5391 0.4521 0.0706
0.6243 0.6862 0.3734
0.7945 1.0332 0.5489
_

_
K
3
= R
2
Q
2
=
_

_
6.2303 3.1708 0.0567
3.1708 6.0422 0.1259
0.0567 0.1259 0.7275
_

_
_

_
(7)
A.11 Exercise 9.6 167
_

_
Q
3
=
_

_
0.8912 0.4536 0.0021
0.4536 0.8610 0.0219
0.0081 0.0205 0.9998
_

_
R
3
=
_

_
6.9910 5.5673 0.1135
0 3.9475 0.1014
0 0 0.7274
_

_
_

4
=
3
Q
3
=
_

_
0.2760 0.6459 0.0816
0.8645 0.3206 0.3871
1.1811 0.5714 0.5277
_

_
K
4
= R
3
Q
3
=
_

_
8.5763 1.7913 0.0059
1.7913 3.5192 0.0148
0.0059 0.0148 0.7245
_

_
_

_
(8)
As seen from R
3
and K
4
the terms in the main diagonal have already after the 3rd iteration grouped
in descending magnitude, corresponding to the ordering of the eigenvalues at convergence indicated in
Box 9.7. Moreover, for both matrices convergence to the lowest eigenvalue
1
= 0.7245 has occurred,
illustrating the fact that the QR algorithm converge faster to the lowest eigenmode than to the highest.
The matrices after the 14th iteration become
_

_
Q
14
=
_

_
1.0000 0.0000 0.0000
0.0000 1.0000 0.0000
0.0000 0.0000 1.0000
_

_
R
14
=
_

_
9.3104 0.0000 0.0000
0 2.9652 0.0000
0 0 0.7245
_

_
_

15
=
14
Q
14
=
_

_
0.0729 0.6981 0.0853
0.9202 0.0486 0.3884
1.2978 0.1997 0.5251
_

_
K
15
= R
14
Q
14
=
_

_
9.3104 0.0000 0.0000
0.0000 2.9652 0.0000
0.0000 0.0000 0.7245
_

_
_

_
(9)
168 Chapter A Solutions to Exercises
Presuming that convergence has occurred after the 14th iteration the following solutions are obtained for
the eigenvalues and eigenmodes of the original general eigenvalue problem
=
_

3
0 0
0
2
0
0 0
1
_

_
= K
15
=
_

_
9.3104 0.0000 0.0000
0.0000 2.9652 0.0000
0.0000 0.0000 0.7245
_

_
=
_

(3)

(2)

(1)

=
15
=
_

_
0.0729 0.6981 0.0853
0.9202 0.0486 0.3884
1.2978 0.1997 0.5251
_

_
_

_
(10)
The solution (10) agrees with the corresponding solutions for the special and general Jacobi iteration
algorithms obtained in Exercise 9.3, eq. (8) and (14), respectively.
A.12 Exercise 10.1 169
A.12 Exercise 10.1
Given the following mass- and stiffness matrices
M=
_

_
2 0 0
0 2 1
0 1 1
_

_
, K =
_

_
6 1 0
1 4 1
0 1 2
_

_
(1)
1. Calculate the two lowest eigenmodes and corresponding eigenvalues by simultaneous inverse vec-
tor iteration with the start vector basis

0
=
_

(1)
0

(2)
0

=
_

_
1 1
1 0
1 1
_

_
SOLUTIONS:
Question 1:
The matrix A becomes, cf. (6-44)
A = K
1
M=
_

_
6 1 0
1 4 1
0 1 2
_

_
1
_

_
2 0 0
0 2 1
0 1 1
_

_
=
_

_
0.350 0.125 0.075
0.100 0.750 0.450
0.050 0.875 0.725
_

_
(2)
Then, the 1st iterated vector basis becomes, cf. (10-4)

1
=
_

(1)
1

(2)
1

= A
0
=
_

_
0.350 0.125 0.075
0.100 0.750 0.450
0.050 0.875 0.725
_

_
_

_
1 1
1 0
1 1
_

_
=
_

_
0.550 0.275
1.300 0.350
1.650 0.675
_

_
(3)
At the determination of
(1)
1
and r
11
in the 1st vector iteration the following calculations are performed,
cf. (10-13)
_

(1)
1
=
_

_
0.550
1.300
1.650
_

_
, r
11
=
_
_
_

(1)
1
_
_
_ =
_
_
_
_
_

_
0.550
1.300
1.650
_

_
T
_

_
2 0 0
0 2 1
0 1 1
_

_
_

_
0.550
1.300
1.650
_

_
_
_
_
_
1
2
= 3.3162

(1)
1
=
1
3.3162
_

_
0.550
1.300
1.650
_

_
=
_

_
0.1659
0.3920
0.4976
_

_
(4)
170 Chapter A Solutions to Exercises

(2)
1
and r
12
, r
22
are determined from the following calculations, cf. (10-15)
_

(2)
1
=
_

_
0.275
0.350
0.675
_

_
, r
12
=
_

_
0.1659
0.3920
0.4976
_

_
T
_

_
2 0 0
0 2 1
0 1 1
_

_
_

_
0.275
0.350
0.675
_

_
= 0.9578
r
22
=
_
_
_
_
_
_
_
_

_
0.275
0.350
0.675
_

_
+ 0.9578
_

_
0.1659
0.3920
0.4976
_

_
_
_
_
_
_
_
_
= 0.6380

(2)
1
=
1
0.6380
_
_
_
_

_
0.275
0.350
0.675
_

_
+ 0.9578
_

_
0.1659
0.3920
0.4976
_

_
_
_
_
=
_

_
0.6800
0.0399
0.3111
_

_
(5)
Then, at the end of the 1st iteration the following matrices are obtained
_

_
R
1
=
_
3.3162 0.9578
0 0.6380
_

1
=
_

_
0.1659 0.6800
0.3920 0.0399
0.4976 0.3111
_

_
(6)
The reader should verify that
1
R
1
=

1
. The corresponding matrices after the 2nd and 3rd iteration
become
_

_
R
2
=
_
1.3716 0.1507
0 0.3392
_

2
=
_

_
0.1053 0.6944
0.3897 0.0492
0.5191 0.2311
_

_
(7)
_

_
R
3
=
_
1.3798 0.0371
0 0.3374
_

3
=
_

_
0.0902 0.6972
0.3888 0.0496
0.5237 0.2086
_

_
(8)
Convergence of the eigenmodes with the indicated number of digits were achieved after 9 iterations,
where
A.12 Exercise 10.1 171
_

_
R
14
=
_
1.3803 0.0000
0 0.3372
_

9
=
_

_
0.0853 0.6981
0.3884 0.0486
0.5251 0.1997
_

_
(9)
Presuming that convergence has occurred after the 9th iteration the following eigenvalues are obtained
from (10-10) and (10-12)
=
_

1
0
0
2
_
=
T
9
K
9
= R
1

=
_
0.7245 0.0000
0.0000 2.9652
_
=
_

(1)

(2)

=
9
=
_

_
0.0853 0.6981
0.3884 0.0486
0.5251 0.1997
_

_
_

_
(10)
The solution (10) agrees with the corresponding solutions obtained in Excercises 9.3 and 9.6.
172 Chapter A Solutions to Exercises
A.13 Exercise 10.3
Given the following mass- and stiffness matrices
M=
_

_
2 0 0
0 2 1
0 1 1
_

_
, K =
_

_
6 1 0
1 4 1
0 1 2
_

_
(1)
1. Calculate the two lowest eigenmodes and corresponding eigenvalues by subspace iteration with the
start vector basis

0
=
_

(1)
0

(2)
0

=
_

_
1 1
1 0
1 1
_

_
SOLUTIONS:
Question 1:
The matrix A becomes, cf. (6-44)
A = K
1
M=
_

_
6 1 0
1 4 1
0 1 2
_

_
1
_

_
2 0 0
0 2 1
0 1 1
_

_
=
_

_
0.350 0.125 0.075
0.100 0.750 0.450
0.050 0.875 0.725
_

_
(2)
Then, the 1st iterated vector basis becomes, cf. (10-4)

1
=
_

(1)
1

(2)
1

= A
0
=
_

_
0.350 0.125 0.075
0.100 0.750 0.450
0.050 0.875 0.725
_

_
_

_
1 1
1 0
1 1
_

_
=
_

_
0.550 0.275
1.300 0.350
1.650 0.675
_

_
(3)
In order to perform the Rayleigh-Ritz analysis in the 1st subspace iteration the following projected mass
and stiffness matrices are calculated based on

1
, cf. (6-44), (10-20), (10-31)

M
1
=

T
1
M

1
=
_

_
0.550 0.275
1.300 0.350
1.650 0.675
_

_
T
_

_
2 0 0
0 2 1
0 1 1
_

_
_

_
0.550 0.275
1.300 0.350
1.650 0.675
_

_
=
_
10.998 3.1763
3.1763 1.3244
_

K
1
=

T
1
K

1
=
_

_
0.550 0.275
1.300 0.350
1.650 0.675
_

_
T
_

_
6 1 0
1 4 1
0 1 2
_

_
_

_
0.550 0.275
1.300 0.350
1.650 0.675
_

_
=
_
8.300 1.850
1.850 1.575
_
_

_
(4)
A.13 Exercise 10.3 173
The corresponding eigenvalue problem (10-30) becomes

K
1
Q
1
=

M
1
Q
1
R
1

_
8.300 1.850
1.850 1.575
_
_
q
(1)
1
q
(2)
1

=
_
10.998 3.1763
3.1763 1.3244
_
_
q
(1)
1
q
(2)
1

1,1
0
0
2,1
_

R
1
=
_
0.7246 0
0 2.9752
_
, Q
1
=
_
0.2471 0.4845
0.1813 1.5569
_
(5)
The estimate of the lowest eigenvectors after the 1st iteration becomes, cf. (10-34)

1
=

1
Q
1
=
_

_
0.550 0.275
1.300 0.350
1.650 0.675
_

_
_
0.2471 0.4845
0.1813 1.5569
_
=
_

_
0.0861 0.6947
0.3848 0.0850
0.5302 0.2514
_

_
(6)
Correspondingly, after the 2nd and 9th iteration steps the following matrices are calculated
R
2
=
_
0.7245 0
0 2.9662
_
, Q
2
=
_
0.7245 0.0013
0.0004 2.9673
_

2
=
_

_
0.0854 0.6972
0.3881 0.0603
0.5255 0.2162
_

_
_

_
(7)
R
9
=
_
0.7245 0
0 2.9652
_
, Q
9
=
_
0.7245 0.0000
0.0000 2.9652
_

9
=
_

_
0.0853 0.6981
0.3884 0.0486
0.5251 0.1997
_

_
_

_
(8)
The subspace iteration process converged with the indicated accuracy after 8 iterations.
Finally, it should be checked that th calculated eigenvalues are inded the lowest two by a Sturm sequence
or Gauss factorization check. The 2nd calculated eigenvalue becomes
2,9
= 2.9652. Then, let = 3.1
and perform a Gauss factorization of the matrix K3.1M, i.e.
174 Chapter A Solutions to Exercises
K3.1M=
_

_
0.2 1.0 0
1.0 2.2 4.1
0 4.1 1.1
_

_
=
LDL
T
=
_

_
1 0 0
5 1 0
0 1.4643 1
_

_
_

_
0.2 0 0
0 2.8 0
0 0 7.1036
_

_
_

_
1 5 0
0 1 1.4643
0 0 1
_

_
(9)
It follows that two components in the main diagonal of D are negative, from which is concluded that two
eigenvalues are smaller than = 3.1. In turn this means that the two eigensolutions obtained by (8) are
indeed the lowest two eigensolutions of the original system.
The solution (8) agrees with the corresponding solutions obtained in Excercises 9.3, 9.6 and 10.1.
A.14 Exercise 10.5 175
A.14 Exercise 10.5
Given the following mass- and stiffness matrices
M=
_

_
2 0 0
0 2 1
0 1 1
_

_
, K =
_

_
6 1 0
1 4 1
0 1 2
_

_
(1)
1. Calculate the 3rd eigenmode and eigenvalue by Sturm sequence iteration (telescope method).
SOLUTIONS:
Question 1:
At rst a calculation with = 2.5 is performed, which produces the following results
K2.5M=
_

_
1.0 1.0 0.0
1.0 1.0 3.5
0.0 3.5 0.5
_

_

_

_
P
(3)
(2.5) = 1 , sign(P
(3)
(2.5)) = +
P
(2)
(2.5) = 1.0 , sign(P
(2)
(2.5)) = +
P
(1)
(2.5) = 1.0 (1.0) (1.0)
2
= 2.0 , sign(P
(1)
(2.5)) =
P
(0)
(2.5) = 0.5 (2.0) (3.5)
2
(1.0) = 11.25 , sign(P
(0)
(2.5)) =
_

_
(2)
Hence, the sign sequence of the Sturm sequence becomes ++. corresponding to the number of sign
changes n
sign
= 1 in the sequence. One eigenvalue is smaller than = 2.5.
Similar calculations are performed for = 3.5, 4.5, . . . , 9.5
= 3.5 : Sign sequence = ++ + n
sign
= 2
= 4.5 : Sign sequence = ++ + n
sign
= 2
= 5.5 : Sign sequence = ++ + n
sign
= 2
= 6.5 : Sign sequence = ++ + n
sign
= 2
= 7.5 : Sign sequence = ++ + n
sign
= 2
= 8.5 : Sign sequence = ++ + n
sign
= 2
= 9.5 : Sign sequence = ++ n
sign
= 3
(3)
From this is concluded that the 3rd eigenvalue is placed somewhere in the interval 8.5 <
3
< 9.5.
Next, similar calculations are performed for = 8.6, 8.7, . . . , 9.4
176 Chapter A Solutions to Exercises
= 8.6 : Sign sequence = ++ + n
sign
= 2
= 8.7 : Sign sequence = ++ + n
sign
= 2
= 8.8 : Sign sequence = ++ + n
sign
= 2
= 8.9 : Sign sequence = ++ + n
sign
= 2
= 9.0 : Sign sequence = ++ + n
sign
= 2
= 9.1 : Sign sequence = ++ + n
sign
= 2
= 9.2 : Sign sequence = ++ + n
sign
= 2
= 9.3 : Sign sequence = ++ + n
sign
= 2
= 9.4 : Sign sequence = ++ n
sign
= 3
(4)
From this is concluded that the 3rd eigenvalue is conned to the interval 9.3 <
3
< 9.4.
Next, calculations are performed for = 9.31, 9.32, . . . , 9.39
= 9.31 : Sign sequence = ++ + n
sign
= 2
= 9.32 : Sign sequence = ++ n
sign
= 3
= 9.33 : Sign sequence = ++ n
sign
= 3
= 9.34 : Sign sequence = ++ n
sign
= 3
= 9.35 : Sign sequence = ++ n
sign
= 3
= 9.36 : Sign sequence = ++ n
sign
= 3
= 9.37 : Sign sequence = ++ n
sign
= 3
= 9.38 : Sign sequence = ++ n
sign
= 3
= 9.39 : Sign sequence = ++ n
sign
= 3
(5)
From this is concluded that the 3rd eigenvalue is conned to the interval 9.31 <
3
< 9.32.
Proceeding in this manner, it may be shown after totally 52 Sturm sequence calculations that the 3rd
eigenvalue is conned to the interval 9.31036 <
3
< 9.31037. Each extra digit requires 9 calculations.
Setting
3
9.310365, the linear equation (10-63) attains the form
_
K9.310365M
_

(3)
= 0
_

_
12.6207 1 0
1 14.6207 10.3104
0 10.3104 7.3104
_

_
_

(3)
1

(3)
2

(3)
3
_

_
=
_

_
0
0
0
_

_
(6)
Setting

(3)
1
= 1 the algorithm (10-64) now provides

(3)
2
=
(12.6207)
(1)
1 = 12.6207

(3)
3
=
(1)
(10.3104)
1
(14.6207)
(10.3104)
(12.6207) = 1
_

(3)
=
_

_
1
12.6207
17.7800
_

_
(7)
A.14 Exercise 10.5 177
Normalization to unit modal mass provides, cf. (6-54)

(3)
=
_

_
0.0729
0.9202
1.2978
_

_
(8)
The eigenvalue
3
9.310365 and the corresponding eigenmode
(3)
as given by (8) agree with the
corresponding results obtained in Excercises 9.3 and 9.6.

Potrebbero piacerti anche