
POLAR DECOMPOSITIONS, FACTOR ANALYSIS AND PROCRUSTES PROBLEMS IN FINITE DIMENSIONAL INDEFINITE SCALAR PRODUCT SPACES

ULRIC KINTZEL

Abstract. A criterion for the existence of H-polar decompositions based on comparing canonical forms is presented, and a numerical procedure is explained for computing H-polar decompositions of a matrix for which the product of itself with its H-adjoint is diagonalisable. Furthermore, the H-orthogonal or H-unitary Procrustes problem is stated and solved by application of H-polar decompositions.

Key words. Indefinite scalar products, polar decompositions, factor analysis, Procrustes problems.

AMS subject classifications. 15A63, 15A23.

1. Introduction. Let $\mathbb{F}$ be the field of the real numbers $\mathbb{R}$ or of the complex numbers $\mathbb{C}$ and let $\mathbb{F}^n$ be an $n$-dimensional vector space over $\mathbb{F}$. Furthermore, let $H$ be a fixed chosen regular symmetric or hermitian matrix of $\mathbb{F}^{n\times n}$ and let $x = (x_1, \dots, x_n)^T$, $y = (y_1, \dots, y_n)^T$ be column vectors of $\mathbb{F}^n$. Then the bilinear or sesquilinear functional

$$[x, y] = (Hx, y) \quad\text{where}\quad (x, y) = \sum_{\alpha=1}^{n} x_\alpha \bar{y}_\alpha \qquad (\bar{y}_\alpha = y_\alpha \ \text{if}\ \mathbb{F} = \mathbb{R})$$

defines an indefinite scalar product in $\mathbb{F}^n$. Indefinite scalar products have almost all the properties of ordinary scalar products, except for the fact that the squared norm $[x, x]$ of a vector $x \neq 0$ can be positive, negative or zero. A corresponding vector is called positive (space-like), negative (time-like) or neutral (isotropic, light-like) respectively. If $A$ is an arbitrary matrix of $\mathbb{F}^{n\times n}$, then its $H$-adjoint $A^{[*]}$ is characterised by the property that

$$[Ax, y] = [x, A^{[*]}y] \quad\text{for all}\quad x, y \in \mathbb{F}^n.$$

This is equivalent to the fact that between the $H$-adjoint $A^{[*]}$ and the ordinary adjoint $A^* = \bar{A}^T$ the relationship

$$A^{[*]} = H^{-1} A^* H$$

exists. If in particular $A^{[*]} = A$ or $A^*H = HA$, one speaks of an $H$-selfadjoint or $H$-symmetric or $H$-hermitian matrix, and an invertible matrix $U$ with $U^{[*]} = U^{-1}$ or $U^*HU = H$ is called an $H$-isometry or $H$-orthogonal matrix or $H$-unitary matrix. If $A$ is a given matrix of $\mathbb{F}^{n\times n}$, then a representation such as

$$A = UM \quad\text{with}\quad U^*HU = H \quad\text{and}\quad M^*H = HM$$

is called an $H$-polar decomposition of $A$.
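As a minimal numerical sketch of these definitions (my own illustration, not part of the paper; the sample $H$, $x$ and $A$ are arbitrary), the following Python fragment evaluates $[x, y] = (Hx, y)$, forms the $H$-adjoint $A^{[*]} = H^{-1}A^*H$ and checks the two defining properties of an $H$-polar decomposition for given factors; computing the factors themselves is the subject of Chapter 4:

```python
import numpy as np

# Sample indefinite scalar product [x, y] = (Hx, y) on C^2 with H = diag(1, -1).
H = np.diag([1.0, -1.0]).astype(complex)

def indef(x, y, H):
    """[x, y] = (Hx, y) with (u, v) = sum_a u_a * conj(v_a)."""
    return np.vdot(y, H @ x)        # np.vdot conjugates its first argument

def h_adjoint(A, H):
    """H-adjoint A^[*] = H^{-1} A^* H (A^* = conjugate transpose)."""
    return np.linalg.solve(H, A.conj().T @ H)

def is_h_polar_decomposition(A, U, M, H, tol=1e-10):
    """Check the defining properties A = U M, U^* H U = H, M^* H = H M."""
    return (np.allclose(A, U @ M, atol=tol)
            and np.allclose(U.conj().T @ H @ U, H, atol=tol)
            and np.allclose(M.conj().T @ H, H @ M, atol=tol))

x = np.array([1.0, 1.0], dtype=complex)
print(indef(x, x, H))               # 0: this x is neutral (light-like)
A = np.array([[2.0, 1.0], [1.0, 2.0]], dtype=complex)
print(np.allclose(indef(A @ x, x, H), indef(x, h_adjoint(A, H) @ x, H)))  # True
```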

Institut für Mathematik, MA 4-5, TU Berlin, Straße des 17. Juni 136, 10623 Berlin, Germany; email: UKintzel@aol.com

Decompositions of this kind have been investigated in detail in the publications [BMRRR1-3] and [MRR] as well as in the further references specified there. More specialised results concerning polar decompositions of $H$-normal matrices, i.e. matrices which commute with their $H$-adjoint, are discussed in [LMMR].

$H$-polar decompositions are also the central subject of this paper, in which theoretical as well as practical questions are discussed. The more theoretical Chapter 3 is primarily concerned with finding a further criterion for the existence of $H$-polar decompositions, whereas the more practical Chapter 4 presents a numerical procedure for computing $H$-polar decompositions of a matrix $A$ for the case in which the matrix $A^{[*]}A$ is diagonalisable. Both chapters require some statements concerning subspaces of $\mathbb{F}^n$, which are first of all derived in Chapter 2, where some numerical questions are already examined with a view to Chapter 4. In the final Chapter 5, two applications from a branch of mathematics known in psychology as factor analysis or multidimensional scaling are carried over into the environment of indefinite scalar products. This involves on the one hand the task of constructing sets of points which realise given distances, and on the other hand the task of matching two sets of such points in the sense of the method of least squares, the Procrustes problem¹, which is achieved with the help of an $H$-polar decomposition.

In a typical application of multidimensional scaling (MDS, see for example [BG]), test persons are first of all requested to estimate the dissimilarity (or similarity) of specified objects, which are selected terms describing the subject of the analysis. For example, if professions are to be analysed, terms such as politician, journalist, physician, etc. can be used as the objects. The pairwise comparison of $N$ objects then produces the similarity measures, called proximities, $p_{kl}$, $1 \le k, l \le N$, from which the distances $d_{kl} = f(p_{kl})$ are determined using a function $f$, for example $f(x) = ax + b$, which is called the MDS model. Using these distances, the coordinates of points $x_k$ in an $n$-dimensional Euclidean space are constructed such that $\|x_k - x_l\| = d_{kl}$, where $\|\cdot\|$ stands for the Euclidean norm. Each object is now represented by a point in a coordinate system, and the data can be analysed with regard to their geometric properties. For example, it can thereby be attempted to interpret the basis vectors of the space in the sense of psychological factors, such as social status in the given example of professions.

The results of interrogating the test persons are often categorised in $M$ groups, e.g. according to gender and/or age, producing several descriptive constellations of points $x_k^{(r)}$, $1 \le r \le M$, in a Euclidean space of dimension $n = \max\{n^{(r)}\}$, which must be mutually compared in the analysis. To make such a comparison of two constellations $x_k$ and $y_k$ possible, it is first of all necessary to compensate for irrelevant differences resulting from the different locations in space. This is done with an orthogonal transformation $U$ devised such that $\sum_k \|Ux_k - y_k\|^2$ is minimised. Thereafter the constellations $\tilde{x}_k = Ux_k$ and $y_k$ can be analysed.
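In the Euclidean setting just described, the minimiser of $\sum_k \|Ux_k - y_k\|^2$ over orthogonal $U$ is given by the classical orthogonal Procrustes solution via a singular value decomposition; the indefinite analogue is developed in Chapter 5. A short sketch with made-up constellations (my own illustration):

```python
import numpy as np

def orthogonal_procrustes(X, Y):
    """Orthogonal U minimising sum_k ||U x_k - y_k||^2.

    X, Y are n x N matrices whose columns are the constellations x_k, y_k;
    the minimiser is U = W V^* for any SVD  Y X^* = W S V^*."""
    W, _, Vh = np.linalg.svd(Y @ X.conj().T)
    return W @ Vh

# Made-up 2-dimensional constellations: y_k is a rotated copy of x_k.
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 5))
t = 0.3
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
U = orthogonal_procrustes(X, R @ X)
print(np.allclose(U, R))            # True: the rotation is recovered
```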

The MDS model $f$ is chosen, in particular by adding a constant $b$ (and by making further assumptions such as $d_{kk} = 0$), so that the triangle inequality is fulfilled and the points can therefore be embedded in a Euclidean space [BG, Chapter 18]. This restriction is not required mathematically if a pseudo-Euclidean geometry is admitted.

¹Procrustes, a robber in Greek mythology, who lived near Eleusis in Attica. Originally he was called Damastes or Polypemon. He was given the name Procrustes ("the stretcher") because he tortured his victims to fit them into a bed: if they were too tall, he chopped off their limbs or hammered them into shape; if they were too small, he stretched them. He was overcome by Theseus, who dealt him the same fate by chopping off his head to fit him into the bed.

This is the subject of the investigations in Chapter 5, where the stated tasks of constructing points from given distances and of rotating data in the sense of an optimum match are considered in the environment of indefinite scalar products.

The following notation is used in the course of this work: The kernel, the image and the rank of a matrix $A$ are designated $\ker A$, $\operatorname{im} A$ and $\operatorname{rank} A$ respectively. If the matrix $A$ is square, then $\operatorname{tr} A$, $\det A$ and $\sigma(A)$ are its trace, determinant and spectrum respectively. Furthermore, the abbreviation $A^{-*} = (A^*)^{-1} = (A^{-1})^*$ is used. The symbol $0$ is used for zero vectors as well as for zero matrices. In some places it is additionally provided with size attributes $0_{p,q} \in \mathbb{F}^{p\times q}$ or $0_p \in \mathbb{F}^{p\times p}$, whereby lower indices may also be intended as enumeration indices; this is evident from the respective context. $I_p$, $N_p$ and $Z_p$ respectively designate the $p \times p$ identity matrix, the $p \times p$ matrix with ones on the superdiagonal and otherwise zeros, and the $p \times p$ matrix with ones on the antidiagonal and otherwise zeros. In particular, $J_p(\lambda) = \lambda I_p + N_p$ specifies an upper Jordan block for the eigenvalue $\lambda$. Moreover, $A_1 \oplus \dots \oplus A_k$ stands for the block diagonal matrix consisting of the specified blocks, and $\operatorname{diag}(\alpha_1, \dots, \alpha_k)$ stands for a diagonal matrix with the specified diagonal elements. Even when no further specifications are made, a regular (real) symmetric or (complex) hermitian matrix is always meant by $H$, and instead of $A^{[*]}$ sometimes $A^{[*]_H}$ is written to specify the matrix on which the scalar product is based. Lastly, the direct sum of two subspaces $X, Y \subseteq \mathbb{F}^n$ is denoted by $X \dotplus Y$.
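For concreteness, these special matrices can be set up in a few lines (my own sketch, with arbitrary sample parameters):

```python
import numpy as np

p = 3
I_p = np.eye(p)                      # identity matrix
N_p = np.diag(np.ones(p - 1), 1)     # ones on the superdiagonal
Z_p = np.fliplr(np.eye(p))           # ones on the antidiagonal
J_p = 2.0 * I_p + N_p                # upper Jordan block J_p(2)

# Direct sum A_1 (+) A_2 as a block diagonal matrix, and diag(1, -1, 3):
A = np.block([[J_p, np.zeros((p, p))],
              [np.zeros((p, p)), -I_p]])
D = np.diag([1.0, -1.0, 3.0])
```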

2. Subspaces. The properties of subspaces of an indefinite scalar product space over the field of the real or complex numbers are discussed in detail in [GLR, Chapter I.1]. In this chapter some additional properties are described which are required in the course of the further considerations.

Let $\mathbb{F} = \mathbb{R}$ or $\mathbb{F} = \mathbb{C}$ and let $[\cdot, \cdot]$ be an indefinite scalar product of $\mathbb{F}^n$ with the underlying regular symmetric or hermitian matrix $H \in \mathbb{F}^{n\times n}$. A subspace $M \subseteq \mathbb{F}^n$ is said to be positive (non-negative, neutral, non-positive, negative) if

$$[x, x] > 0 \qquad ([x, x] \ge 0, \quad [x, x] = 0, \quad [x, x] \le 0, \quad [x, x] < 0)$$

is satisfied for all $0 \neq x \in M$. The set defined by

$$M^{[\perp]} = \{x \in \mathbb{F}^n : [x, y] = 0 \ \text{for all}\ y \in M\}$$

is also a subspace of $\mathbb{F}^n$ and is termed the $H$-orthogonal companion of $M$, for which the important equations

$$(M^{[\perp]})^{[\perp]} = M \quad\text{and}\quad \dim M + \dim M^{[\perp]} = n$$

hold. A subspace $M$ is called non-degenerate if $x \in M$ and $[x, y] = 0$ for all $y \in M$ imply that $x = 0$; otherwise $M$ is called degenerate. Furthermore, the equations

$$M \cap M^{[\perp]} = \{0\} \quad\text{and}\quad M \dotplus M^{[\perp]} = \mathbb{F}^n$$

are satisfied if and only if $M$ is non-degenerate. (This is an essential difference compared with the ordinary scalar product, for which these equations are always fulfilled for the ordinary orthogonal complement $M^\perp$.) It is shown in [GLR, Theorem 1.4] that every non-negative (non-positive) subspace is a direct sum of a positive (negative) and a neutral subspace. However, the following more general theorem, whose proof is based on statements made in [GR, Chapter IX, §2], is also true.

Theorem 2.1 (Decomposition of subspaces).


1. Every non-degenerate subspace $M \subseteq \mathbb{F}^n$ can be expressed as a direct sum $M = M_+ \dotplus M_-$, whereby $M_+$ is positive, $M_-$ is negative and both spaces are $H$-orthogonal.
2. Every subspace $M \subseteq \mathbb{F}^n$ can be expressed as a direct sum $M = M_0 \dotplus M_1$, whereby $M_0$ is neutral, $M_1$ is non-degenerate and both spaces are $H$-orthogonal.

Proof. 1. Assume that $M_+$ is a positive subspace of $M$ with maximum dimension. Then $M_+$ is non-degenerate and $M_+ \dotplus M_+^{[\perp]} = \mathbb{F}^n$. Thus a representation

$$M = M_+ \dotplus (M \cap M_+^{[\perp]}) = M_+ \dotplus M_-, \qquad M_- = M \cap M_+^{[\perp]},$$

exists with two $H$-orthogonal summands, and it remains to show that $M_-$ is negative. Suppose that a vector $x \in M_-$ exists with $[x, x] > 0$. Then it would follow that $[x + y, x + y] = [x, x] + [y, y] > 0$ for all $y \in M_+$. But this would mean that the subspace $M_+ \dotplus \operatorname{span}\{x\}$ is also positive, in contradiction to the maximality of $M_+$. Thus $M_-$ is non-positive and the Schwarz inequality [GLR, Chapter I.1.3]²

$$|[x, y]|^2 \le [x, x][y, y]$$

can be applied. Now assume that $x_0 \in M_-$ with $[x_0, x_0] = 0$. Then the Schwarz inequality shows that $[x_0, x] = 0$ must be fulfilled for all $x \in M_-$. Since it is also true that $[x_0, y] = 0$ for all $y \in M_+$, it follows that $[x_0, z] = 0$ for all $z \in M$. Thus $x_0 = 0$, because $M$ is non-degenerate, and $M_-$ is therefore negative.

2. Let $M_0 = M \cap M^{[\perp]}$. Then $M_0$ is neutral, because if a vector $x \in M_0$ were to exist with $[x, x] \neq 0$, it would follow that $x \notin M^{[\perp]} \supseteq M_0$. Now let $M_1$ be a complementary subspace, so that

$$M = M_0 \dotplus M_1 = (M \cap M^{[\perp]}) \dotplus M_1$$

with two $H$-orthogonal summands applies. To show that $M_1$ is non-degenerate, let $x_0 \in M_1$ with $[x_0, x] = 0$ for all $x \in M_1$. Furthermore, $[x_0, y] = 0$ for all $y \in M_0$, so that $[x_0, z] = 0$ for all $z \in M$. Thus it follows that $x_0 \in M_1$ and $x_0 \in M_0$, so that $x_0 = 0$.

On combining the two statements of the theorem, it is clear that every subspace $M \subseteq \mathbb{F}^n$ can be expressed in the form

$$M = M_+ \dotplus M_- \dotplus M_0$$

with a positive, a negative and a neutral, mutually $H$-orthogonal, subspace. In order to deduce the dimensions of these spaces, we can refer to the following classical result [GR, Chapter IX, §2].

²There is a typing error contained in equation (1.8) of [GLR]: it must read $|(Hy, z)|^2 \le (Hy, y)(Hz, z)$.

Remark 2.2 (Projection onto subspaces). Let $M = \operatorname{span}\{x_1, \dots, x_m\}$ be a subspace of $\mathbb{F}^n$. Then every vector $y \in M$,

$$y = \sum_{\mu=1}^{m} \eta_\mu x_\mu,$$

can be represented uniquely by its coordinates $\tilde{y} = (\eta_1, \dots, \eta_m)^T \in \mathbb{F}^m$ with respect to the given basis of $M$. Now if $X = [x_1 \cdots x_m] \in \mathbb{F}^{n\times m}$ is a matrix whose columns are the basis vectors, then $y = X\tilde{y}$, and for $\widetilde{H} = X^*HX \in \mathbb{F}^{m\times m}$ we obtain

$$(Hy, z)_n = (HX\tilde{y}, X\tilde{z})_n = (X^*HX\tilde{y}, \tilde{z})_m = (\widetilde{H}\tilde{y}, \tilde{z})_m \quad\text{where}\quad (x, y)_k = \sum_{\alpha=1}^{k} x_\alpha \bar{y}_\alpha.$$

Consequently all properties of the non-degenerate scalar product $H : \mathbb{F}^n \times \mathbb{F}^n \to \mathbb{F}$ in the subspace $M$ can be studied with the help of the possibly degenerate scalar product $\widetilde{H} : \mathbb{F}^m \times \mathbb{F}^m \to \mathbb{F}$. In particular, if $\sigma(\widetilde{H})$ contains $p$ positive and $q$ negative eigenvalues, and if $r = m - p - q$ is the multiplicity of the eigenvalue $0$, then for the decomposition of $M$ described above,

$$\dim M_+ = p, \qquad \dim M_- = q, \qquad \dim M_0 = r.$$

The dimensions of the subspaces are uniquely determined. This is a consequence of Sylvester's law of inertia, according to which the numbers of positive, negative and vanishing elements are invariant in all diagonal representations of $\widetilde{H}$. Furthermore, the subspace $M_0$ is uniquely determined by the nullspace $\ker \widetilde{H}$ of the degenerate scalar product. If $M$ is ultimately a non-degenerate subspace, then $\det \widetilde{H} \neq 0$, i.e. $r = 0$, and the maximum dimension of a neutral subspace of $M$ is given by $\min(p, q)$. This is not explicitly shown here, but the proof is given in [GLR, Theorem 1.5].
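This remark translates directly into a numerical classification of a subspace; the following sketch (my own, with arbitrary sample data) reads off $\dim M_+$, $\dim M_-$ and $\dim M_0$ from the eigenvalues of $\widetilde{H} = X^*HX$:

```python
import numpy as np

def subspace_inertia(X, H, tol=1e-12):
    """dim M_+, dim M_-, dim M_0 for M spanned by the columns of X,
    read off from the eigenvalues of the Gram matrix H~ = X^* H X."""
    G = X.conj().T @ H @ X               # hermitian, so eigvalsh applies
    w = np.linalg.eigvalsh(G)
    p = int(np.sum(w > tol))             # positive eigenvalues -> dim M_+
    q = int(np.sum(w < -tol))            # negative eigenvalues -> dim M_-
    return p, q, len(w) - p - q          # zero eigenvalues     -> dim M_0

H = np.diag([1.0, 1.0, -1.0])
X = np.array([[1.0, 1.0],
              [0.0, 0.0],
              [1.0, -1.0]])              # two neutral columns spanning M
print(subspace_inertia(X, H))            # (1, 1, 0): M is non-degenerate
```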

The following theorem describes the interesting fact that a single subspace induces a decomposition of the entire space into four complementary subspaces.

Theorem 2.3 (Decomposition of the space). Let $M \subseteq \mathbb{F}^n$. Then four subspaces $M_1$, $M_2$, $M_0$, $M_0'$ of $\mathbb{F}^n$ exist with the following properties:
1. $\mathbb{F}^n = \widehat{M}_0 \dotplus M_1 \dotplus M_2$ with $\widehat{M}_0 = M_0 \dotplus M_0'$.
2. $M_0 = M \cap M^{[\perp]}$ and $M = M_1 \dotplus M_0$ as well as $M^{[\perp]} = M_2 \dotplus M_0$.
3. $\widehat{M}_0$, $M_1$, $M_2$ are non-degenerate and $H$-orthogonal in pairs.
4. $M_0$, $M_0'$ are neutral and $\dim M_0 = \dim M_0'$.

Proof. Suppose that $M_1$ and $M_2$ are the complements of $M_0$ which exist according to Theorem 2.1 and for which assertion 2 is fulfilled. Then $M_1 \subseteq M$ and $M_2 \subseteq M^{[\perp]}$ are $H$-orthogonal and non-degenerate, so that $M_1 \dotplus M_2$, too, is non-degenerate. Consequently

$$\mathbb{F}^n = (M_1 \dotplus M_2) \dotplus (M_1 \dotplus M_2)^{[\perp]}$$

and, moreover, $M_0 \subseteq (M_1 \dotplus M_2)^{[\perp]}$. If it is now chosen that $\widehat{M}_0 = (M_1 \dotplus M_2)^{[\perp]} = M_0 \dotplus M_0'$, then the assertions 1 and 3 are fulfilled too. From

$$\mathbb{F}^n = (M_1 \dotplus M_2 \dotplus M_0) \dotplus M_0' = (M + M^{[\perp]}) \dotplus M_0'$$

it furthermore follows that

$$\dim M_0' = n - \dim(M + M^{[\perp]}) = n - (\dim M + \dim M^{[\perp]} - \dim(M \cap M^{[\perp]})) = \dim M_0,$$

whereby the equations

$$\dim M + \dim N = \dim(M + N) + \dim(M \cap N) \quad\text{and}\quad \dim M + \dim M^{[\perp]} = n,$$

which are valid for all subspaces, have been applied [GR, Chapter I.1.21], [GLR, Chapter I.1.2].

It remains to show that $M_0'$ is neutral. Let $r = \dim M_0 = \dim M_0'$. Then $\widehat{M}_0$ is a $2r$-dimensional non-degenerate subspace of $\mathbb{F}^n$, which can be split according to Theorem 2.1 into a positive and a negative subspace $\widehat{M}_0 = \widehat{M}_0^+ \dotplus \widehat{M}_0^-$. Let $p = \dim \widehat{M}_0^+$ and $q = \dim \widehat{M}_0^-$. Since $\widehat{M}_0$ must contain the $r$-dimensional neutral subspace $M_0$, it follows that $r \le \min(p, q)$ and thus $p \ge r$ and $q \ge r$ [GLR, Theorem 1.5]. On the other hand $p + q = 2r$, so that $p = q = r$. Thus for the subspace $\widehat{M}_0$ the representations

$$\widehat{M}_0 = \widehat{M}_0^+ \dotplus \widehat{M}_0^- = M_0 \dotplus M_0'$$

exist with $\dim \widehat{M}_0^+ = \dim \widehat{M}_0^- = \dim M_0 = \dim M_0' = r$ and $H$-orthogonal spaces $\widehat{M}_0^+$, $\widehat{M}_0^-$, so that the three bases

$$\widehat{M}_0^+ = \operatorname{span}\{x_1^+, \dots, x_r^+\}, \qquad \widehat{M}_0^- = \operatorname{span}\{x_1^-, \dots, x_r^-\}, \qquad M_0 = \operatorname{span}\{x_1, \dots, x_r\}$$

with

$$[x_k^+, x_k^+] > 0, \quad [x_k^-, x_k^-] < 0, \quad [x_k^+, x_l^-] = 0 \quad\text{and}\quad [x_k, x_l] = 0 \qquad\text{for}\quad 1 \le k, l \le r$$

can now be chosen. Since $\widehat{M}_0^+$ is positive, $\widehat{M}_0^-$ is negative and $M_0$ is neutral, it must also be true that $\widehat{M}_0^+ \cap M_0 = \widehat{M}_0^- \cap M_0 = \{0\}$, so that each basis vector of $M_0$ can be expressed in the form

$$x_k = \sum_{i=1}^{r} \alpha_{ki} x_i^+ + \sum_{i=1}^{r} \beta_{ki} x_i^- \quad\text{with}\quad (\alpha_{k1}, \dots, \alpha_{kr})^T \neq 0, \quad (\beta_{k1}, \dots, \beta_{kr})^T \neq 0.$$

Furthermore, the vectors defined by

$$\tilde{x}_k^+ = \sum_{i=1}^{r} \alpha_{ki} x_i^+ \quad\text{and}\quad \tilde{x}_k^- = \sum_{i=1}^{r} \beta_{ki} x_i^-$$

can be used as new basis vectors for $\widehat{M}_0^+$, $\widehat{M}_0^-$, because if it is assumed that constants $(\lambda_1, \dots, \lambda_r) \neq 0$ exist with $\lambda_1 \tilde{x}_1^+ + \dots + \lambda_r \tilde{x}_r^+ = 0$, then

$$0 \neq \lambda_1 x_1 + \dots + \lambda_r x_r = \lambda_1(\tilde{x}_1^+ + \tilde{x}_1^-) + \dots + \lambda_r(\tilde{x}_r^+ + \tilde{x}_r^-) = \lambda_1 \tilde{x}_1^- + \dots + \lambda_r \tilde{x}_r^-,$$

and thus $M_0 \ni \lambda_1 x_1 + \dots + \lambda_r x_r \in \widehat{M}_0^-$, in contradiction to $M_0 \cap \widehat{M}_0^- = \{0\}$. The linear independence of the vectors $\tilde{x}_1^-, \dots, \tilde{x}_r^-$ can be shown analogously. Finally, defining

$$x_k' = \tilde{x}_k^+ - \tilde{x}_k^- \quad\text{for}\quad 1 \le k \le r \quad\text{and}\quad M_0' = \operatorname{span}\{x_1', \dots, x_r'\},$$

then $M_0'$ is on the one hand a neutral subspace because

$$[x_k', x_l'] = [\tilde{x}_k^+ - \tilde{x}_k^-, \tilde{x}_l^+ - \tilde{x}_l^-] = [\tilde{x}_k^+, \tilde{x}_l^+] + [\tilde{x}_k^-, \tilde{x}_l^-] = [\tilde{x}_k^+ + \tilde{x}_k^-, \tilde{x}_l^+ + \tilde{x}_l^-] = [x_k, x_l] = 0,$$

and on the other hand

$$\widehat{M}_0 = \widehat{M}_0^+ \dotplus \widehat{M}_0^- = \operatorname{span}\{\tilde{x}_1^+, \dots, \tilde{x}_r^+\} \dotplus \operatorname{span}\{\tilde{x}_1^-, \dots, \tilde{x}_r^-\} = \operatorname{span}\{\tilde{x}_1^+ + \tilde{x}_1^-, \dots, \tilde{x}_r^+ + \tilde{x}_r^-\} \dotplus \operatorname{span}\{\tilde{x}_1^+ - \tilde{x}_1^-, \dots, \tilde{x}_r^+ - \tilde{x}_r^-\} = M_0 \dotplus M_0',$$

so that the 4th assertion of the theorem is fulfilled too.

Whereas the statements have been proved so far without reference to particular bases, we will now also make use of $H$-orthogonal bases. The following two theorems contain corresponding generalisations of the Schmidt orthonormalisation method. They will in particular be used to construct $H$-orthogonal bases of eigenspaces.

Theorem 2.4 (Orthonormalisation of bases). Let $\mathbb{F} = \mathbb{R}$ or $\mathbb{F} = \mathbb{C}$ and let $X$ be a subspace of $\mathbb{F}^n$ with $\dim X = m$. Then there exists a basis $\{u_1, \dots, u_m\}$ of $X$ such that

$$[u_k, u_l] = \varepsilon_k \delta_{kl}, \qquad \varepsilon_k = \begin{cases} +1 & \text{for } 1 \le k \le p \\ -1 & \text{for } p + 1 \le k \le p + q \\ 0 & \text{for } p + q + 1 \le k \le p + q + r \end{cases}$$

and $p + q + r = m$. In particular, if $X$ is non-degenerate, then $r = 0$.

Proof. (Complete induction) Let $X$ initially be non-degenerate and let $\{x_1, \dots, x_m\}$ be a basis of $X$. Also let $k, l$ be two indices of $\{1, \dots, m\}$ such that $|[x_k, x_l]|$ is maximised. Then it necessarily follows that $[x_k, x_l] \neq 0$, because otherwise $X$ would be degenerate. For the case $k = l$, let the basis which is obtained by interchanging $x_1$ and $x_k$ still be designated as $\{x_1, \dots, x_m\}$. Otherwise, on account of the polar identities which are valid for all $x, y \in \mathbb{F}^n$,

$$[x, y] = \tfrac{1}{4}\bigl([x + y, x + y] - [x - y, x - y]\bigr) \quad\text{if}\quad \mathbb{F} = \mathbb{R}$$

and

$$[x, y] = \tfrac{1}{4}\bigl([x + y, x + y] - [x - y, x - y]\bigr) + \tfrac{i}{4}\bigl([x + iy, x + iy] - [x - iy, x - iy]\bigr) \quad\text{if}\quad \mathbb{F} = \mathbb{C},$$

the following selection can be made: Let

$$z_+ = x_k + x_l, \quad z_- = x_k - x_l \qquad \text{if } \mathbb{F} = \mathbb{R} \text{ or } (\mathbb{F} = \mathbb{C} \text{ and } |\operatorname{Re}[x_k, x_l]| \ge |\operatorname{Im}[x_k, x_l]|),$$
$$z_+ = x_k + ix_l, \quad z_- = x_k - ix_l \qquad \text{otherwise},$$

and

$$\tilde{x}_k = z_+, \quad \tilde{x}_l = z_- \qquad \text{if } |[z_+, z_+]| \ge |[z_-, z_-]|, \qquad \tilde{x}_k = z_-, \quad \tilde{x}_l = z_+ \qquad \text{otherwise}.$$

Then $\operatorname{span}\{\tilde{x}_k, \tilde{x}_l\} = \operatorname{span}\{x_k, x_l\}$ and $[\tilde{x}_k, \tilde{x}_k] \neq 0$. Let the particular basis obtained by replacing $x_k, x_l$ with $\tilde{x}_k, \tilde{x}_l$ and then exchanging $x_1$ and $\tilde{x}_k$ still be designated as $\{x_1, \dots, x_m\}$. If now we make

$$u_1 = x_1 / \sqrt{|[x_1, x_1]|} \quad\text{and}\quad \varepsilon_1 = \operatorname{sig}[x_1, x_1] \in \{+1, -1\},$$

then $[u_1, u_1] = [x_1, x_1]/|[x_1, x_1]| = \varepsilon_1$, and for the vectors defined by

$$x_i' = x_i - \varepsilon_1 [x_i, u_1] u_1 \quad\text{for}\quad 2 \le i \le m$$

we obtain $[x_i', u_1] = [x_i, u_1] - \varepsilon_1 [x_i, u_1][u_1, u_1] = 0$. Thus $X$ can be expressed as direct sum of its $H$-orthogonal subspaces $\operatorname{span}\{u_1\}$ and $X' = \operatorname{span}\{x_2', \dots, x_m'\}$, so that $X'$, too, is non-degenerate. Now according to the induction hypothesis there exists a basis $\{u_2, \dots, u_m\}$ of $X'$ with the demanded properties, so that finally $\{u_1, \dots, u_m\}$ is the wanted basis of $X$, if a suitable sorting is also made in the case of $\varepsilon_1 = -1$.

If $X$ is a degenerate subspace, the same construction can be applied, but it then terminates after a certain number of steps, namely when no more non-vanishing scalar products can be found. The remaining $r$ vectors $x_i'$ thus satisfy $[x_i', x_j'] = 0$ for $m - r + 1 \le i, j \le m$.
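Since the proof is constructive, it can be coded essentially verbatim. The following sketch (my own illustration, restricted to the real case $\mathbb{F} = \mathbb{R}$; the complex case additionally distinguishes the combinations $x_k \pm ix_l$) implements the pivoting construction and already applies the Euclidean normalisation of the neutral remainder recommended in Numerical Procedure 2.6 below:

```python
import numpy as np

def h_orthonormalise(X, H, tol=1e-12):
    """Theorem 2.4, real case: columns of X are a basis; returns (U, eps)
    with [u_k, u_l] = eps_k * delta_kl, eps_k in {+1, -1, 0}.
    A final reordering of the +1/-1 vectors is omitted."""
    V = X.astype(float).copy()
    active = list(range(V.shape[1]))
    cols, eps = [], []
    while active:
        G = V[:, active].T @ H @ V[:, active]          # Gram matrix [x_i, x_j]
        k, l = np.unravel_index(np.argmax(np.abs(G)), G.shape)
        if abs(G[k, l]) <= tol:
            break                                      # only neutral vectors remain
        if k != l:                                     # polar identity: x_k +/- x_l
            zp = V[:, active[k]] + V[:, active[l]]
            zm = V[:, active[k]] - V[:, active[l]]
            if abs(zp @ H @ zp) < abs(zm @ H @ zm):
                zp, zm = zm, zp
            V[:, active[k]], V[:, active[l]] = zp, zm
        x = V[:, active[k]]
        s = float(x @ H @ x)                           # [x_1, x_1] != 0 by construction
        u = x / np.sqrt(abs(s))
        cols.append(u); eps.append(int(np.sign(s)))
        active.pop(k)
        for j in active:                               # x_i' = x_i - eps_1 [x_i, u_1] u_1
            V[:, j] -= np.sign(s) * (V[:, j] @ H @ u) * u
    for j in active:                                   # neutral remainder, Euclidean length 1
        cols.append(V[:, j] / np.linalg.norm(V[:, j])); eps.append(0)
    return np.column_stack(cols), eps

H = np.diag([1.0, 1.0, -1.0])
U, eps = h_orthonormalise(np.eye(3), H)
print(np.round(U.T @ H @ U, 12), eps)                  # diag(eps) and [1, 1, -1]
```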

Theorem 2.5 (Orthonormalisation of pairs of bases). Let $\mathbb{F} = \mathbb{R}$ or $\mathbb{F} = \mathbb{C}$ and let $X$, $Y$ be two neutral subspaces of $\mathbb{F}^n$ with $\dim X = \dim Y = m$ and $X \cap Y = \{0\}$. Then there exists a basis $\{u_1, \dots, u_m\}$ of $X$ and a basis $\{v_1, \dots, v_m\}$ of $Y$ such that

$$[u_k, v_l] = \varepsilon_k \delta_{kl}, \qquad \varepsilon_k = \begin{cases} 1 & \text{for } 1 \le k \le p \\ 0 & \text{for } p + 1 \le k \le p + r \end{cases}$$

and $p + r = m$. In particular, if $X \dotplus Y$ is non-degenerate, then $r = 0$.

Proof. (Complete induction) Let $X \dotplus Y$ initially be non-degenerate and let $\{x_1, \dots, x_m\}$ be a basis of $X$ and $\{y_1, \dots, y_m\}$ be a basis of $Y$. Also let $k, l$ be two indices from $\{1, \dots, m\}$ so that $|[x_k, y_l]|$ is maximised. Then it necessarily follows that $[x_k, y_l] \neq 0$, because otherwise $X \dotplus Y$ would be degenerate. Let the particular bases obtained by exchanging $x_1$ and $x_k$ or $y_1$ and $y_l$ still be designated as $\{x_1, \dots, x_m\}$ and $\{y_1, \dots, y_m\}$ respectively. In the case $\mathbb{F} = \mathbb{R}$, now let $\varepsilon_1 \in \{+1, -1\}$ be such that $\lambda_1 = [x_1, \varepsilon_1 y_1] > 0$ and let

$$u_1 = x_1 / \sqrt{\lambda_1} \quad\text{as well as}\quad v_1 = \varepsilon_1 y_1 / \sqrt{\lambda_1},$$

so that $[u_1, v_1] = [x_1, \varepsilon_1 y_1]/[x_1, \varepsilon_1 y_1] = 1$; in the case $\mathbb{F} = \mathbb{C}$, let $\lambda_1 = [x_1, y_1]$ and let

$$u_1 = x_1 \omega_1^{-1} \quad\text{as well as}\quad v_1 = y_1 \bar{\omega}_1^{-1} \quad\text{where}\quad \omega_1^2 = \lambda_1,$$

so that $[u_1, v_1] = [x_1, y_1]/\omega_1^2 = 1$. For the vectors defined by

$$x_i' = x_i - [x_i, v_1] u_1 \quad\text{as well as}\quad y_i' = y_i - [y_i, u_1] v_1 \quad\text{for}\quad 2 \le i \le m$$

we obtain $[x_i', v_1] = [x_i, v_1] - [x_i, v_1][u_1, v_1] = 0$ and $[y_i', u_1] = [y_i, u_1] - [y_i, u_1][v_1, u_1] = 0$. Thus $X \dotplus Y$ can be expressed as direct sum of its $H$-orthogonal subspaces $\operatorname{span}\{u_1, v_1\}$ and $X' \dotplus Y' = \operatorname{span}\{x_2', \dots, x_m'\} \dotplus \operatorname{span}\{y_2', \dots, y_m'\}$, so that $X' \dotplus Y'$, too, is non-degenerate. Now according to the induction hypothesis, two bases $\{u_2, \dots, u_m\}$ and $\{v_2, \dots, v_m\}$ of $X'$ and $Y'$ exist with the demanded properties, so that finally $\{u_1, \dots, u_m\}$ and $\{v_1, \dots, v_m\}$ are the wanted bases of $X$ and $Y$. If $X \dotplus Y$ is a degenerate subspace, the same construction can be applied, but it then terminates after a certain number of steps, namely when no more non-vanishing scalar products can be found. The remaining $2r$ vectors $x_i', y_i'$ thus satisfy $[x_i', y_j'] = 0$ for $m - r + 1 \le i, j \le m$.
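This proof, too, is constructive; a sketch of the complex case (again my own illustration, with pivoting, and with the neutral remainder brought to Euclidean length 1 as in Numerical Procedure 2.6 below):

```python
import numpy as np

def h_pair_orthonormalise(X, Y, H, tol=1e-12):
    """Theorem 2.5 (complex case): columns of X and Y span neutral subspaces
    with trivial intersection; returns U, V with [u_k, v_l] = eps_k * delta_kl."""
    Xc = [X[:, j].astype(complex) for j in range(X.shape[1])]
    Yc = [Y[:, j].astype(complex) for j in range(Y.shape[1])]
    us, vs = [], []
    while Xc:
        # Pivoting: G[i, j] = [x_i, y_j]; pick the entry of maximum modulus.
        G = np.array([[np.vdot(y, H @ x) for y in Yc] for x in Xc])
        k, l = np.unravel_index(np.argmax(np.abs(G)), G.shape)
        lam = G[k, l]
        if abs(lam) <= tol:
            break                              # only a degenerate remainder is left
        omega = np.sqrt(lam)                   # one square root of lambda_1
        u = Xc.pop(k) / omega                  # u_1 = x_1 * omega^{-1}
        v = Yc.pop(l) / np.conj(omega)         # v_1 = y_1 * conj(omega)^{-1}
        Xc = [x - np.vdot(v, H @ x) * u for x in Xc]   # x_i' = x_i - [x_i, v_1] u_1
        Yc = [y - np.vdot(u, H @ y) * v for y in Yc]   # y_i' = y_i - [y_i, u_1] v_1
        us.append(u); vs.append(v)
    us += [x / np.linalg.norm(x) for x in Xc]  # neutral remainder, Euclidean length 1
    vs += [y / np.linalg.norm(y) for y in Yc]
    return np.column_stack(us), np.column_stack(vs)
```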

Numerical Procedure 2.6 (Orthonormalisation of bases). The proofs of Theorems 2.4 and 2.5 were formulated by choice of the maximised scalar product (pivoting) such that they can be implemented directly as stable numerical methods. To limit the coordinates of the neutral basis vectors

$$u_{p+q+1}, \dots, u_m \qquad\text{or}\qquad u_{p+1}, \dots, u_m \quad\text{and}\quad v_{p+1}, \dots, v_m,$$

they should additionally be brought to the Euclidean length $\|x\| = \sqrt{(x, x)} = 1$ after completing the orthogonalisation process. Furthermore, the normalisation in the method of Theorem 2.5 can be modified by a factor $\alpha > 0$:

$$u_1 = \frac{\alpha}{\sqrt{\lambda_1}}\, x_1, \qquad v_1 = \frac{\varepsilon_1}{\alpha\sqrt{\lambda_1}}\, y_1 \qquad (\mathbb{F} = \mathbb{R})$$

or

$$u_1 = \alpha\,\omega_1^{-1} x_1, \qquad v_1 = \alpha^{-1}\bar{\omega}_1^{-1} y_1 \qquad (\mathbb{F} = \mathbb{C}).$$

In particular, if the choice $\alpha = \sqrt{\|y_1\|/\|x_1\|}$ is made, then $\|u_1\| = \|v_1\|$ is ensured, which is found to be particularly advantageous in practice. This can be demonstrated with the following example, in which $\|A\| = \sqrt{\operatorname{tr}(A^*A)}$ denotes the Frobenius norm and $\operatorname{cond} A = \|A\| \|A^{-1}\|$ denotes the condition number of a matrix $A$:

Let $H = \operatorname{diag}(1, -1)$ and $\mathbf{x} = x\,(1, 1)^T$, $\mathbf{y} = y\,(1, -1)^T$ with $x, y \in \mathbb{C}\setminus\{0\}$. Then $X = \operatorname{span}\{\mathbf{x}\}$ and $Y = \operatorname{span}\{\mathbf{y}\}$ are neutral subspaces of equal dimension such that $X \cap Y = \{0\}$, and Theorem 2.5 can be applied. Let $\lambda = [\mathbf{x}, \mathbf{y}] = 2x\bar{y}$ and let $\omega$ be one of the two square roots of $\lambda$. Then the columns of the matrix $X_1$, where

$$X_1 = \begin{pmatrix} x/\omega & y/\bar{\omega} \\ x/\omega & -y/\bar{\omega} \end{pmatrix}, \qquad\text{i.e.}\qquad X_1^{-1} = \frac{|\omega|^2}{2xy} \begin{pmatrix} y/\bar{\omega} & y/\bar{\omega} \\ x/\omega & -x/\omega \end{pmatrix},$$

are the vectors obtained by orthonormalisation without modification, and it is true that

$$\operatorname{cond} X_1 = \frac{|x|^2 + |y|^2}{|x||y|} \qquad\text{because}\qquad \|X_1\|^2 = \frac{2(|x|^2 + |y|^2)}{|\omega|^2}, \qquad \|X_1^{-1}\|^2 = \frac{|\omega|^2(|x|^2 + |y|^2)}{2|x|^2|y|^2}.$$

If now $\alpha = \sqrt{\|\mathbf{y}\|/\|\mathbf{x}\|} = \sqrt{|y|/|x|}$, then the columns of the matrix $X_2$, where

$$X_2 = \begin{pmatrix} \alpha x/\omega & y/(\alpha\bar{\omega}) \\ \alpha x/\omega & -y/(\alpha\bar{\omega}) \end{pmatrix}, \qquad\text{i.e.}\qquad X_2^{-1} = \frac{|\omega|^2}{2xy} \begin{pmatrix} y/(\alpha\bar{\omega}) & y/(\alpha\bar{\omega}) \\ \alpha x/\omega & -\alpha x/\omega \end{pmatrix},$$

are the vectors obtained by orthonormalisation with modification, and it is true that

$$\operatorname{cond} X_2 = \frac{\alpha^4|x|^2 + |y|^2}{\alpha^2|x||y|} = 2 \qquad\text{because}\qquad \|X_2\|^2 = \frac{2(\alpha^4|x|^2 + |y|^2)}{\alpha^2|\omega|^2}, \qquad \|X_2^{-1}\|^2 = \frac{|\omega|^2(\alpha^4|x|^2 + |y|^2)}{2\alpha^2|x|^2|y|^2}.$$

But for arbitrary real numbers $a, b$ we find that $0 \le (a - b)^2 = a^2 - 2ab + b^2$, or $2ab \le a^2 + b^2$, so that with $a = |x|$ and $b = |y|$ always $\operatorname{cond} X_1 \ge 2 = \operatorname{cond} X_2$.
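The example is easily checked numerically; a small verification script (my own, with the arbitrary sample values $x = 3$, $y = 1/4$):

```python
import numpy as np

x, y = 3.0, 0.25                       # arbitrary nonzero sample scalars
xv = np.array([x, x], dtype=complex)   # x = x (1, 1)^T
yv = np.array([y, -y], dtype=complex)  # y = y (1, -1)^T
H = np.diag([1.0, -1.0]).astype(complex)

lam = np.vdot(yv, H @ xv)              # lambda = [x, y] = 2 x conj(y)
omega = np.sqrt(lam)

def cond(A):                           # Frobenius-norm condition number
    return np.linalg.norm(A) * np.linalg.norm(np.linalg.inv(A))

X1 = np.column_stack([xv / omega, yv / np.conj(omega)])
alpha = np.sqrt(np.linalg.norm(yv) / np.linalg.norm(xv))
X2 = np.column_stack([alpha * xv / omega, yv / (alpha * np.conj(omega))])

print(cond(X1), (abs(x)**2 + abs(y)**2) / (abs(x) * abs(y)))  # both 12.0833...
print(cond(X2))                                               # 2.0
```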