
Introduction to tensors

Joakim Strandberg
July 2005

1 Contravariant and covariant vectors


1.1 The definitions of contravariant and covariant components
The purpose of this text is to introduce what tensors are and where they come from. To achieve this goal, an understanding of contravariant and covariant vectors is essential, so the text begins with the definitions of these concepts.

Definition of contravariant components of a vector  The contravariant components $(a^1, a^2, \ldots, a^n)$ of an n-dimensional vector $\mathbf v$ with respect to the linearly independent vectors $\mathbf x_1, \mathbf x_2, \ldots, \mathbf x_n$ (called coordinate axes) are defined by:

$$\mathbf v = a^1\mathbf x_1 + a^2\mathbf x_2 + \cdots + a^n\mathbf x_n \tag{1}$$

Definition of covariant components of a vector  The covariant components $(b_1, b_2, \ldots, b_n)$ of an n-dimensional vector $\mathbf v$ with respect to the linearly independent vectors $\mathbf x_1, \mathbf x_2, \ldots, \mathbf x_n$ (called coordinate axes) are defined by:

$$b_1 = \mathbf v\cdot\mathbf x_1, \quad b_2 = \mathbf v\cdot\mathbf x_2, \quad \ldots, \quad b_n = \mathbf v\cdot\mathbf x_n \tag{2}$$

Why the components $(a^1, a^2, \ldots, a^n)$ are called contravariant and $(b_1, b_2, \ldots, b_n)$ are called covariant is explained in section 1.2. Problem 1 illustrates how to compute the contravariant and covariant components of a given vector.

Problem 1  Given the vectors $\mathbf x_1 = (1, 0)$, $\mathbf x_2 = (1, 2)$ and $\mathbf v = (1, 2)$.

a) Determine the contravariant components $(a^1, a^2)$ of $\mathbf v$ with respect to the vectors $\mathbf x_1$ and $\mathbf x_2$.

b) Determine the covariant components $(b_1, b_2)$ of $\mathbf v$ with respect to the vectors $\mathbf x_1$ and $\mathbf x_2$.

Solution 1a  The vectors $\mathbf x_1$ and $\mathbf x_2$ are clearly linearly independent. Using the definition of contravariant components of a vector

$$\mathbf v = a^1\mathbf x_1 + a^2\mathbf x_2 \tag{3}$$

$$\begin{pmatrix}1\\2\end{pmatrix} = a^1\begin{pmatrix}1\\0\end{pmatrix} + a^2\begin{pmatrix}1\\2\end{pmatrix} \tag{4}$$

This is a system of equations

$$\begin{pmatrix}1 & 1\\0 & 2\end{pmatrix}\begin{pmatrix}a^1\\a^2\end{pmatrix} = \begin{pmatrix}1\\2\end{pmatrix} \tag{5}$$

Using Cramer's rule

$$a^1 = \frac{\begin{vmatrix}1 & 1\\2 & 2\end{vmatrix}}{\begin{vmatrix}1 & 1\\0 & 2\end{vmatrix}} = \frac{2-2}{2} = 0 \tag{6}$$

$$a^2 = \frac{\begin{vmatrix}1 & 1\\0 & 2\end{vmatrix}}{2} = \frac{2}{2} = 1 \tag{7}$$

Thus $(a^1, a^2) = (0, 1)$.

Solution 1b  Using the definition of covariant components

$$b_1 = \mathbf v\cdot\mathbf x_1 = (1, 2)\cdot(1, 0) = 1 \tag{8}$$

$$b_2 = \mathbf v\cdot\mathbf x_2 = (1, 2)\cdot(1, 2) = 5 \tag{9}$$

Thus $(b_1, b_2) = (1, 5)$.
Notice that in this example $(a^1, a^2) \neq (b_1, b_2)$.

Important!  In general the contravariant and covariant components of a vector are different. But when the "coordinate axes" $\mathbf x_1, \ldots, \mathbf x_n$ are orthonormal (they all have unit length and are mutually orthogonal) there is no difference between contravariant and covariant components!
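A quick numerical check of Problem 1 (a sketch, not part of the original text): the contravariant components are obtained by solving the linear system (5) and the covariant components by the dot products (2). All names and values below are simply the data of Problem 1.

```python
import numpy as np

# Coordinate axes and the vector from Problem 1
x1 = np.array([1.0, 0.0])
x2 = np.array([1.0, 2.0])
v = np.array([1.0, 2.0])

# Contravariant components: solve v = a1*x1 + a2*x2
# (the columns of X are the coordinate axes)
X = np.column_stack([x1, x2])
a = np.linalg.solve(X, v)        # -> [0. 1.]

# Covariant components: b_i = v . x_i
b = np.array([v @ x1, v @ x2])   # -> [1. 5.]

print(a, b)
```

If the axes are replaced by an orthonormal pair, for example (1, 0) and (0, 1), both computations return the same numbers, illustrating the remark above.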

1.2 Why the components are called contravariant and covariant


Problem 2 Why are the components of a vector v defined by (1) and (2) called con-
travariant and covariant components?

Solution 2  If the coordinate axes are scaled by a factor of c, then the contravariant components become c times smaller while the covariant components become c times greater. Thus the contravariant components of a vector scale against the coordinate axes and the covariant components scale with them. In Latin contra means against and co means with, hence the names contravariant and covariant components. Problem 3 illustrates this.

Problem 3  Given the vectors $\mathbf x_1 = (c, 0)$, $\mathbf x_2 = (c, 2c)$ and $\mathbf v = (1, 2)$.

a) Determine the contravariant components $(a^1, a^2)$ of $\mathbf v$ with respect to the vectors $\mathbf x_1$ and $\mathbf x_2$.

b) Determine the covariant components $(b_1, b_2)$ of $\mathbf v$ with respect to the vectors $\mathbf x_1$ and $\mathbf x_2$.

Solution 3a  The system of equations now becomes

$$\begin{pmatrix}c & c\\0 & 2c\end{pmatrix}\begin{pmatrix}a^1\\a^2\end{pmatrix} = \begin{pmatrix}1\\2\end{pmatrix} \tag{10}$$

Using Cramer's rule

$$a^1 = \frac{\begin{vmatrix}1 & c\\2 & 2c\end{vmatrix}}{\begin{vmatrix}c & c\\0 & 2c\end{vmatrix}} = \frac{2c - 2c}{2c^2} = 0 \tag{11}$$

$$a^2 = \frac{\begin{vmatrix}c & 1\\0 & 2\end{vmatrix}}{2c^2} = \frac{2c}{2c^2} = \frac{1}{c} \tag{12}$$

Thus $(a^1, a^2) = \frac{1}{c}(0, 1)$.

Solution 3b  Using the definition of covariant components

$$b_1 = \mathbf v\cdot\mathbf x_1 = (1, 2)\cdot(c, 0) = c \tag{13}$$

$$b_2 = \mathbf v\cdot\mathbf x_2 = (1, 2)\cdot(c, 2c) = 5c \tag{14}$$

Thus $(b_1, b_2) = c(1, 5)$.
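The scaling behaviour described in Solution 2 can be seen directly by redoing the computation of Problem 1 with the scaled axes of Problem 3. This is a small illustrative sketch (the value c = 3 is an arbitrary choice):

```python
import numpy as np

c = 3.0
x1 = np.array([c, 0.0])
x2 = np.array([c, 2.0 * c])
v = np.array([1.0, 2.0])

X = np.column_stack([x1, x2])
a = np.linalg.solve(X, v)        # contravariant: (0, 1/c), c times smaller
b = np.array([v @ x1, v @ x2])   # covariant: (c, 5c), c times greater

print(a, b)                      # [0. 0.333...] [3. 15.]
```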

1.3 Why the contravariant and covariant components are interesting
Problem 4  Why are only the definitions (1) and (2) of the components of a vector interesting? For example, it is possible to define components

$$c_1 = \mathbf v\cdot\mathbf x_1/|\mathbf x_1|, \quad c_2 = \mathbf v\cdot\mathbf x_2/|\mathbf x_2|, \quad \ldots, \quad c_n = \mathbf v\cdot\mathbf x_n/|\mathbf x_n| \tag{15}$$

Why not also give the components $(c_1, c_2, \ldots, c_n)$ a name? Why are the contravariant and covariant components so "important" that they have been given names?

Solution 4  When parametrizations of the Euclidean plane and tensors of first order have been explained, it is shown in section 2 how the definition of work in physics gives rise to contravariant and covariant components.

Problem 5 What is the definition of the Euclidean plane?

Solution 5  Intuitively the Euclidean plane can be imagined as a flat piece of paper. On this piece of paper one can introduce two orthogonal coordinate axes, usually denoted the x- and y-axes. Two numbers x and y can then be measured to describe the position of a point. How is a point defined? Instead of saying that the two numbers describe the position of a point, the two numbers are defined to be a point.

Definition of the Euclidean plane The Euclidean plane is defined as the set of all
ordered 2-tuples r = (x, y) called points where x and y are real numbers.

That the 2-tuples are ordered means that $(x, y) \neq (y, x) \Leftrightarrow x \neq y$.

Definition of a parametrization of the Euclidean plane  A parametrization of the Euclidean plane is a specification of unique coordinates $(u^1, u^2)$ (where $u^1$ and $u^2$ are real) to every point in the plane.

$$\mathbf r(u^1, u^2) = (x(u^1, u^2),\; y(u^1, u^2)) \tag{16}$$

This text does not consider all possible parametrizations but restricts itself to allowable
parametrizations.

Important!  Notice that coordinates (in a parametrization) are NOT written with subscripts $(u_1, u_2)$ but with superscripts $(u^1, u^2)$. The two reasons for this will be explained later (see problem 15). They are not explained here because the explanation requires the Einstein summation convention and contravariant tensors of first order.

Definition of an allowable parametrization of the Euclidean plane  An allowable parametrization of the Euclidean plane satisfies:

i) Every point in the Euclidean plane is given unique coordinates $(u^1, u^2)$, where $u^1, u^2 \in \mathbb R$:

$$\mathbf r(u^1, u^2) = (x(u^1, u^2),\; y(u^1, u^2)) \tag{17}$$

ii) The vectors $\frac{\partial \mathbf r}{\partial u^1}$ and $\frac{\partial \mathbf r}{\partial u^2}$ are defined everywhere, linearly independent and $\neq \mathbf 0$.

Some examples of allowable parametrizations:

Problem 6  Determine which of the following parametrizations are allowable parametrizations.

a) $\mathbf r = (u^1, u^2)$

b) $\mathbf r = (u^1 + u^2 + (u^2)^3,\; 2u^2)$

Solution 6a  The parametrization $\mathbf r = (u^1, u^2)$ leads to

$$\frac{\partial \mathbf r}{\partial u^1} = (1, 0), \qquad \frac{\partial \mathbf r}{\partial u^2} = (0, 1) \tag{18}$$

It is clear that the vectors $\frac{\partial \mathbf r}{\partial u^1}$ and $\frac{\partial \mathbf r}{\partial u^2}$ are defined everywhere, linearly independent and $\neq \mathbf 0$.
Solution 6b  The parametrization $\mathbf r = (u^1 + u^2 + (u^2)^3,\; 2u^2)$ leads to

$$\frac{\partial \mathbf r}{\partial u^1} = (1, 0), \qquad \frac{\partial \mathbf r}{\partial u^2} = (1 + 3(u^2)^2,\; 2) \tag{19}$$

Here the vectors $\frac{\partial \mathbf r}{\partial u^1}$ and $\frac{\partial \mathbf r}{\partial u^2}$ are defined everywhere. They are $\neq \mathbf 0$ because $(u^2)^2 \geq 0$, and they are linearly independent. Notice that the vector $\frac{\partial \mathbf r}{\partial u^2}$ depends on the point under consideration. This example shows that the vectors $\frac{\partial \mathbf r}{\partial u^1}$ and $\frac{\partial \mathbf r}{\partial u^2}$ may be different from point to point.
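Condition (ii) can also be checked numerically for the parametrization of Problem 6b: the two tangent vectors are linearly independent exactly when the determinant of the matrix holding them as columns is nonzero. A minimal sketch (the sample points are chosen arbitrarily):

```python
import numpy as np

def tangent_matrix(u1, u2):
    # r = (u1 + u2 + u2**3, 2*u2); the columns are dr/du1 and dr/du2.
    # u1 does not appear because the tangent vectors do not depend on it.
    return np.array([[1.0, 1.0 + 3.0 * u2**2],
                     [0.0, 2.0]])

# A nonzero determinant at every point means the tangent vectors
# are linearly independent (and in particular nonzero) there.
for u1, u2 in [(0.0, 0.0), (1.0, -2.0), (5.0, 3.0)]:
    print(np.linalg.det(tangent_matrix(u1, u2)))   # always 2.0 for this parametrization
```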

Problem 7 Why is the second requirement (ii) important in the definition of allowable
parametrizations?

Solution 7  The second requirement (ii) guarantees that a vector defined at a point in the Euclidean plane can always be written as a linear combination of $\frac{\partial \mathbf r}{\partial u^1}$ and $\frac{\partial \mathbf r}{\partial u^2}$ evaluated at that point.

Important!  From now on, when coordinates are introduced in the Euclidean plane it is assumed that they come from allowable parametrizations.

Problem 8  Why is the letter r used to denote a point? Why not use the notation p = (x, y) instead, where the letter p has been chosen since it is the first letter of the word point, as is done in the book Elementary Differential Geometry by Barrett O'Neill?

Solution 8 This text follows the notational convention used in the book Vektoranalys
by Anders Ramgard. Ramgard does not explain why he chose r to denote a point or
vector in Euclidean n-space. It may be because of the following three reasons:
(i) In Euclidean 2-space where r = (x, y), one can introduce a parametrization (r, φ)
called polar coordinates r = (r cos φ, r sin φ). Notice that the length of the vector
r is |r| = r, where r originates from the word radius (here radius of a circle).

(ii) In Euclidean 3-space where $\mathbf r = (x, y, z)$, one can introduce a parametrization (r, φ, θ) called spherical coordinates, $\mathbf r = (r\sin\theta\cos\phi,\; r\sin\theta\sin\phi,\; r\cos\theta)$. The length of the vector $\mathbf r$ is $|\mathbf r| = r$, where r originates from the radius of the sphere.
(iii) Euclidean n-space is denoted Rn , where the letter R here comes from the word real.

2 First order tensors defined on the Euclidean plane


The following problem leads to the definition of contravariant tensors of 1:st order.

Problem 9  Let a vector $\mathbf v$ be given by the contravariant components $(a^1, a^2)$ at a fixed point P in the Euclidean plane using the coordinates $u^1, u^2$:

$$\mathbf v = a^1\frac{\partial \mathbf r}{\partial u^1} + a^2\frac{\partial \mathbf r}{\partial u^2} \tag{20}$$

Using another coordinate system $\bar u^1, \bar u^2$, the contravariant components of the vector $\mathbf v$ are $(\bar a^1, \bar a^2)$ and $\mathbf v$ can be written:

$$\mathbf v = \bar a^1\frac{\partial \mathbf r}{\partial \bar u^1} + \bar a^2\frac{\partial \mathbf r}{\partial \bar u^2} \tag{21}$$

What is the relation between $(a^1, a^2)$ and $(\bar a^1, \bar a^2)$? Express $(\bar a^1, \bar a^2)$ as a function of $(a^1, a^2)$.

Solution 9  Using the chain rule (here α = 1, 2):

$$\frac{\partial \mathbf r}{\partial u^\alpha} = \frac{\partial \mathbf r}{\partial \bar u^1}\frac{\partial \bar u^1}{\partial u^\alpha} + \frac{\partial \mathbf r}{\partial \bar u^2}\frac{\partial \bar u^2}{\partial u^\alpha} \tag{22}$$

It is allowed to use the chain rule here since $\frac{\partial \mathbf r}{\partial \bar u^1}$, $\frac{\partial \mathbf r}{\partial \bar u^2}$, $\frac{\partial \bar u^1}{\partial u^\alpha}$ and $\frac{\partial \bar u^2}{\partial u^\alpha}$ are well defined. That $\frac{\partial \bar u^1}{\partial u^\alpha}$ and $\frac{\partial \bar u^2}{\partial u^\alpha}$ are well defined is shown in problem 11. Now using equations (20), (21) and (22) leads to an expression for $(\bar a^1, \bar a^2)$ as a function of $(a^1, a^2)$:

$$\begin{aligned}
\mathbf v &= a^1\frac{\partial \mathbf r}{\partial u^1} + a^2\frac{\partial \mathbf r}{\partial u^2}\\
&= a^1\left(\frac{\partial \mathbf r}{\partial \bar u^1}\frac{\partial \bar u^1}{\partial u^1} + \frac{\partial \mathbf r}{\partial \bar u^2}\frac{\partial \bar u^2}{\partial u^1}\right) + a^2\left(\frac{\partial \mathbf r}{\partial \bar u^1}\frac{\partial \bar u^1}{\partial u^2} + \frac{\partial \mathbf r}{\partial \bar u^2}\frac{\partial \bar u^2}{\partial u^2}\right)\\
&= \left(a^1\frac{\partial \bar u^1}{\partial u^1} + a^2\frac{\partial \bar u^1}{\partial u^2}\right)\frac{\partial \mathbf r}{\partial \bar u^1} + \left(a^1\frac{\partial \bar u^2}{\partial u^1} + a^2\frac{\partial \bar u^2}{\partial u^2}\right)\frac{\partial \mathbf r}{\partial \bar u^2}\\
&= \bar a^1\frac{\partial \mathbf r}{\partial \bar u^1} + \bar a^2\frac{\partial \mathbf r}{\partial \bar u^2}
\end{aligned} \tag{23}$$

Identifying the coefficients in front of $\frac{\partial \mathbf r}{\partial \bar u^1}$ and $\frac{\partial \mathbf r}{\partial \bar u^2}$ in the last two steps:

$$\bar a^1 = a^1\frac{\partial \bar u^1}{\partial u^1} + a^2\frac{\partial \bar u^1}{\partial u^2} \tag{24}$$

$$\bar a^2 = a^1\frac{\partial \bar u^2}{\partial u^1} + a^2\frac{\partial \bar u^2}{\partial u^2} \tag{25}$$

Summarizing (here β = 1, 2):

$$\bar a^\beta = a^1\frac{\partial \bar u^\beta}{\partial u^1} + a^2\frac{\partial \bar u^\beta}{\partial u^2} \tag{26}$$

Equation (26) is the source of inspiration for tensors (1:st order contravariant tensors).
There are two kinds of tensors of 1:st order:

Definition of a contravariant tensor of 1:st order (or a contravariant vector) on the Euclidean plane  Let a 2-tuple of real numbers $(a^1, a^2)$ be associated with a point P in the Euclidean plane with coordinates $u^1, u^2$. Associate also with the point P a 2-tuple of real numbers $(\bar a^1, \bar a^2)$ with respect to the coordinates $\bar u^1, \bar u^2$. If these numbers satisfy:

$$\bar a^\beta = a^1\frac{\partial \bar u^\beta}{\partial u^1} + a^2\frac{\partial \bar u^\beta}{\partial u^2} \tag{27}$$

we say that a contravariant vector at P is given. The numbers $a^1, a^2$ and $\bar a^1, \bar a^2$ are called the components of the contravariant vector in the respective coordinate systems $u^1, u^2$ and $\bar u^1, \bar u^2$. Contravariant vectors are indicated by a superscript index.

Definition of a covariant tensor of 1:st order (or a covariant vector) on the Euclidean plane  Let a 2-tuple of real numbers $(b_1, b_2)$ be associated with a point P in the Euclidean plane with coordinates $u^1, u^2$. Associate also with the point P a 2-tuple of real numbers $(\bar b_1, \bar b_2)$ with respect to the coordinates $\bar u^1, \bar u^2$. If these numbers satisfy:

$$\bar b_\beta = b_1\frac{\partial u^1}{\partial \bar u^\beta} + b_2\frac{\partial u^2}{\partial \bar u^\beta} \tag{28}$$

we say that a covariant vector at P is given. The numbers $b_1, b_2$ and $\bar b_1, \bar b_2$ are called the components of the covariant vector in the respective coordinate systems $u^1, u^2$ and $\bar u^1, \bar u^2$. Covariant vectors are indicated by a subscript index.
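The two transformation laws (27) and (28) can be verified numerically. The sketch below is an illustration only: it assumes a linear change of coordinates $\bar u^1 = 2u^1 + u^2$, $\bar u^2 = u^1 - u^2$ (so the Jacobian is a constant matrix), and the component values are made up. It also checks that the pairing $b_\alpha a^\alpha$ comes out the same in both systems, which anticipates the discussion of work further below.

```python
import numpy as np

# Hypothetical linear coordinate change: ubar^1 = 2u^1 + u^2, ubar^2 = u^1 - u^2.
J = np.array([[2.0, 1.0],
              [1.0, -1.0]])      # J[b, a] = d ubar^b / d u^a
J_inv = np.linalg.inv(J)         # J_inv[a, b] = d u^a / d ubar^b

a = np.array([3.0, -1.0])        # contravariant components in the u system
b = np.array([0.5, 2.0])         # covariant components in the u system

a_bar = J @ a                    # equation (27): abar^b = a^a * d ubar^b / d u^a
b_bar = J_inv.T @ b              # equation (28): bbar_b = b_a * d u^a / d ubar^b

print(b @ a, b_bar @ a_bar)      # the pairing b_a a^a is invariant
```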

Problem 10  Determine whether the vector $(du^1, du^2)$ is contravariant, covariant or neither.

Solution 10  The vector $(du^1, du^2)$ is a contravariant vector, since the components $du^1$ and $du^2$ transform contravariantly under a change of coordinate system from $(u^1, u^2)$ to $(\bar u^1, \bar u^2)$ according to the chain rule

$$d\bar u^1 = du^1\frac{\partial \bar u^1}{\partial u^1} + du^2\frac{\partial \bar u^1}{\partial u^2} \tag{29}$$

$$d\bar u^2 = du^1\frac{\partial \bar u^2}{\partial u^1} + du^2\frac{\partial \bar u^2}{\partial u^2} \tag{30}$$

Summarizing:

$$d\bar u^\gamma = du^1\frac{\partial \bar u^\gamma}{\partial u^1} + du^2\frac{\partial \bar u^\gamma}{\partial u^2} \qquad (\gamma = 1, 2) \tag{31}$$

That the vector $(du^1, du^2)$ transforms contravariantly is part of the reason why coordinates are written with superscripts.

Returning to problem 4 (posed in section 1.3)  Why are the contravariant and covariant components so "important" that they have been given names?

Solution  When calculating the work performed by a force on a particle under an infinitesimal displacement in the Euclidean plane, it is natural to assume that the work performed is independent of the coordinate system used. If the work performed is independent of the coordinate system, it turns out that if the displacement is expressed by its contravariant components, then the force must be expressed by its covariant components. To see this, assume that a force $\mathbf F$ is exerted on a particle which moves $(du^1, du^2)$ in an ordinary Cartesian coordinate system $\mathbf r = (u^1, u^2)$. The work done $dW$ is

$$dW = F_1\,du^1 + F_2\,du^2 \tag{32}$$

With new coordinates $(\bar u^1, \bar u^2)$, the force is $\bar{\mathbf F} = (\bar F_1, \bar F_2)$. We would like to write

$$dW = \bar F_1\,d\bar u^1 + \bar F_2\,d\bar u^2 \tag{33}$$

What then is the connection between $(F_1, F_2)$ and $(\bar F_1, \bar F_2)$? Using the fact that $du^1$ and $du^2$ transform contravariantly

$$d\bar u^\gamma = du^1\frac{\partial \bar u^\gamma}{\partial u^1} + du^2\frac{\partial \bar u^\gamma}{\partial u^2} \tag{34}$$

$$\begin{aligned}
dW &= \bar F_1\,d\bar u^1 + \bar F_2\,d\bar u^2\\
&= \bar F_1\left(du^1\frac{\partial \bar u^1}{\partial u^1} + du^2\frac{\partial \bar u^1}{\partial u^2}\right) + \bar F_2\left(du^1\frac{\partial \bar u^2}{\partial u^1} + du^2\frac{\partial \bar u^2}{\partial u^2}\right)\\
&= \left(\bar F_1\frac{\partial \bar u^1}{\partial u^1} + \bar F_2\frac{\partial \bar u^2}{\partial u^1}\right)du^1 + \left(\bar F_1\frac{\partial \bar u^1}{\partial u^2} + \bar F_2\frac{\partial \bar u^2}{\partial u^2}\right)du^2\\
&= F_1\,du^1 + F_2\,du^2
\end{aligned} \tag{35}$$

Identifying:

$$F_1 = \bar F_1\frac{\partial \bar u^1}{\partial u^1} + \bar F_2\frac{\partial \bar u^2}{\partial u^1} \tag{36}$$

$$F_2 = \bar F_1\frac{\partial \bar u^1}{\partial u^2} + \bar F_2\frac{\partial \bar u^2}{\partial u^2} \tag{37}$$

Equations (36) and (37) can be summarized by:

$$F_\beta = \bar F_1\frac{\partial \bar u^1}{\partial u^\beta} + \bar F_2\frac{\partial \bar u^2}{\partial u^\beta} \tag{38}$$

Comparing (38) and (28) shows that the components of the force must transform covariantly if the work performed by the force is independent of the choice of coordinates used.
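A small numerical sketch (an illustration, not part of the original text) of this coordinate independence: take Cartesian coordinates $(x, y)$ and polar coordinates $(r, \phi)$ as the two systems, transform the displacement contravariantly and the force covariantly, and compare the two values of dW. The force and displacement values below are arbitrary.

```python
import numpy as np

x, y = 1.0, 2.0
F = np.array([0.3, -0.7])        # covariant force components in (x, y)
du = np.array([1e-3, 2e-3])      # contravariant displacement in (x, y)

# Jacobian of (r, phi) = (sqrt(x^2 + y^2), atan2(y, x)) with respect to (x, y)
r = np.hypot(x, y)
J = np.array([[x / r,     y / r],
              [-y / r**2, x / r**2]])

du_bar = J @ du                  # (dr, dphi): contravariant transformation
F_bar = np.linalg.inv(J).T @ F   # (F_r, F_phi): covariant transformation

print(F @ du, F_bar @ du_bar)    # the two values of dW agree
```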

Problem 11  A contravariant 1:st order tensor is a 2-tuple of real numbers $(a^1, a^2)$ that is transformed under a change from coordinates $(u^1, u^2)$ to $(\bar u^1, \bar u^2)$ as:

$$\bar a^\beta = a^1\frac{\partial \bar u^\beta}{\partial u^1} + a^2\frac{\partial \bar u^\beta}{\partial u^2} \tag{39}$$

But is it obvious that $\frac{\partial \bar u^\beta}{\partial u^1}$ and $\frac{\partial \bar u^\beta}{\partial u^2}$ always exist? It is sufficient in this problem to investigate whether $\frac{\partial \bar u^1}{\partial u^1}$ exists under a change between allowable parametrizations (it is not difficult to generalize). If $\frac{\partial \bar u^1}{\partial u^1}$ does not always exist, then a first order tensor is defined by an expression that is not always defined. How can one define something by something that is not defined?
Solution 11  The expression $\frac{\partial \bar u^1}{\partial u^1}$ is always well defined under a coordinate change between allowable parametrizations. To see this, write $\bar u^1 = \bar u^1(x, y)$ and use the chain rule:

$$\frac{\partial \bar u^1}{\partial u^1} = \frac{\partial \bar u^1}{\partial x}\frac{\partial x}{\partial u^1} + \frac{\partial \bar u^1}{\partial y}\frac{\partial y}{\partial u^1} \tag{40}$$

The expression $\frac{\partial \bar u^1}{\partial u^1}$ is defined if $\frac{\partial \bar u^1}{\partial x}$, $\frac{\partial x}{\partial u^1}$, $\frac{\partial \bar u^1}{\partial y}$ and $\frac{\partial y}{\partial u^1}$ are defined (it is then also allowed to use the chain rule). Since $\frac{\partial \mathbf r}{\partial u^1} = \left(\frac{\partial x(u^1, u^2)}{\partial u^1}, \frac{\partial y(u^1, u^2)}{\partial u^1}\right)$ is well defined according to the definition of allowable parametrizations, $\frac{\partial x}{\partial u^1}$ and $\frac{\partial y}{\partial u^1}$ are defined. But are $\frac{\partial \bar u^1}{\partial x}$ and $\frac{\partial \bar u^1}{\partial y}$ defined? Since $\frac{\partial \mathbf r}{\partial \bar u^1}$ and $\frac{\partial \mathbf r}{\partial \bar u^2}$ are defined, the differentials $dx$ and $dy$ can be written:

$$dx = \frac{\partial x}{\partial \bar u^1}\,d\bar u^1 + \frac{\partial x}{\partial \bar u^2}\,d\bar u^2 \tag{41}$$

$$dy = \frac{\partial y}{\partial \bar u^1}\,d\bar u^1 + \frac{\partial y}{\partial \bar u^2}\,d\bar u^2 \tag{42}$$

$$\begin{pmatrix}dx\\ dy\end{pmatrix} = \begin{pmatrix}\dfrac{\partial x}{\partial \bar u^1} & \dfrac{\partial x}{\partial \bar u^2}\\[2mm] \dfrac{\partial y}{\partial \bar u^1} & \dfrac{\partial y}{\partial \bar u^2}\end{pmatrix}\begin{pmatrix}d\bar u^1\\ d\bar u^2\end{pmatrix} \tag{43}$$

Using Cramer's rule:

$$d\bar u^1 = \frac{\begin{vmatrix}dx & \dfrac{\partial x}{\partial \bar u^2}\\ dy & \dfrac{\partial y}{\partial \bar u^2}\end{vmatrix}}{\begin{vmatrix}\dfrac{\partial x}{\partial \bar u^1} & \dfrac{\partial x}{\partial \bar u^2}\\ \dfrac{\partial y}{\partial \bar u^1} & \dfrac{\partial y}{\partial \bar u^2}\end{vmatrix}} = \frac{\dfrac{\partial y}{\partial \bar u^2}}{\dfrac{\partial x}{\partial \bar u^1}\dfrac{\partial y}{\partial \bar u^2} - \dfrac{\partial y}{\partial \bar u^1}\dfrac{\partial x}{\partial \bar u^2}}\,dx - \frac{\dfrac{\partial x}{\partial \bar u^2}}{\dfrac{\partial x}{\partial \bar u^1}\dfrac{\partial y}{\partial \bar u^2} - \dfrac{\partial y}{\partial \bar u^1}\dfrac{\partial x}{\partial \bar u^2}}\,dy \tag{44}$$

If $\frac{\partial \bar u^1}{\partial x}$ and $\frac{\partial \bar u^1}{\partial y}$ are defined, then $d\bar u^1$ can be written:

$$d\bar u^1 = \frac{\partial \bar u^1}{\partial x}\,dx + \frac{\partial \bar u^1}{\partial y}\,dy \tag{45}$$

Identifying:

$$\frac{\partial \bar u^1}{\partial x} = \frac{\dfrac{\partial y}{\partial \bar u^2}}{\dfrac{\partial x}{\partial \bar u^1}\dfrac{\partial y}{\partial \bar u^2} - \dfrac{\partial y}{\partial \bar u^1}\dfrac{\partial x}{\partial \bar u^2}} \tag{46}$$

$$\frac{\partial \bar u^1}{\partial y} = -\frac{\dfrac{\partial x}{\partial \bar u^2}}{\dfrac{\partial x}{\partial \bar u^1}\dfrac{\partial y}{\partial \bar u^2} - \dfrac{\partial y}{\partial \bar u^1}\dfrac{\partial x}{\partial \bar u^2}} \tag{47}$$

The denominator $\frac{\partial x}{\partial \bar u^1}\frac{\partial y}{\partial \bar u^2} - \frac{\partial y}{\partial \bar u^1}\frac{\partial x}{\partial \bar u^2}$ is equal to the determinant

$$\begin{vmatrix}\dfrac{\partial x}{\partial \bar u^1} & \dfrac{\partial x}{\partial \bar u^2}\\[2mm] \dfrac{\partial y}{\partial \bar u^1} & \dfrac{\partial y}{\partial \bar u^2}\end{vmatrix} \tag{48}$$

which is equal to $\pm$ the area spanned by the vectors $\frac{\partial \mathbf r}{\partial \bar u^1}$ and $\frac{\partial \mathbf r}{\partial \bar u^2}$. This area is $\neq 0$ since $\frac{\partial \mathbf r}{\partial \bar u^1}$ and $\frac{\partial \mathbf r}{\partial \bar u^2}$ are two linearly independent vectors $\neq \mathbf 0$. The numerators $\frac{\partial y}{\partial \bar u^2}$ and $\frac{\partial x}{\partial \bar u^2}$ are defined since $\frac{\partial \mathbf r}{\partial \bar u^2}$ is defined. Thus the right-hand sides of equations (46) and (47) are defined, and hence the left-hand sides $\frac{\partial \bar u^1}{\partial x}$ and $\frac{\partial \bar u^1}{\partial y}$ are also defined. Thus the expression $\frac{\partial \bar u^1}{\partial u^1}$ is always defined under a coordinate change between allowable parametrizations.

Problem 12  Determine whether the vector $\frac{\partial \mathbf r}{\partial u^1}$ is contravariant, covariant or neither.

Solution 12  Investigate how the vector $\frac{\partial \mathbf r}{\partial u^1}$ transforms when the coordinates are changed from $u^\alpha$ to $\bar u^\beta$.

$$\frac{\partial \mathbf r}{\partial u^1} = \left(\frac{\partial x}{\partial u^1}, \frac{\partial y}{\partial u^1}\right) \tag{49}$$

Using the chain rule

$$\frac{\partial x}{\partial \bar u^1} = \frac{\partial x}{\partial u^1}\frac{\partial u^1}{\partial \bar u^1} + \frac{\partial x}{\partial u^2}\frac{\partial u^2}{\partial \bar u^1} \tag{50}$$

$$\frac{\partial y}{\partial \bar u^1} = \frac{\partial y}{\partial u^1}\frac{\partial u^1}{\partial \bar u^1} + \frac{\partial y}{\partial u^2}\frac{\partial u^2}{\partial \bar u^1} \tag{51}$$

Thus

$$\begin{aligned}
\frac{\partial \mathbf r}{\partial \bar u^1} &= \left(\frac{\partial x}{\partial u^1}\frac{\partial u^1}{\partial \bar u^1} + \frac{\partial x}{\partial u^2}\frac{\partial u^2}{\partial \bar u^1},\; \frac{\partial y}{\partial u^1}\frac{\partial u^1}{\partial \bar u^1} + \frac{\partial y}{\partial u^2}\frac{\partial u^2}{\partial \bar u^1}\right)\\
&= \left(\frac{\partial x}{\partial u^1}, \frac{\partial y}{\partial u^1}\right)\frac{\partial u^1}{\partial \bar u^1} + \left(\frac{\partial x}{\partial u^2}, \frac{\partial y}{\partial u^2}\right)\frac{\partial u^2}{\partial \bar u^1}\\
&= \frac{\partial \mathbf r}{\partial u^1}\frac{\partial u^1}{\partial \bar u^1} + \frac{\partial \mathbf r}{\partial u^2}\frac{\partial u^2}{\partial \bar u^1}
\end{aligned} \tag{52}$$

$$\frac{\partial \mathbf r}{\partial \bar u^\beta} = \frac{\partial \mathbf r}{\partial u^1}\frac{\partial u^1}{\partial \bar u^\beta} + \frac{\partial \mathbf r}{\partial u^2}\frac{\partial u^2}{\partial \bar u^\beta} \tag{53}$$

This means that $\frac{\partial \mathbf r}{\partial u^\beta}$ does not define a contravariant nor a covariant vector, but it is the "vector" $\left(\frac{\partial \mathbf r}{\partial u^1}, \frac{\partial \mathbf r}{\partial u^2}\right)$ that transforms covariantly. Covariance is usually denoted by a subscript, which inspires the following frequently used notation

$$\mathbf r_\alpha \equiv \frac{\partial \mathbf r}{\partial u^\alpha} \tag{54}$$

$$\bar{\mathbf r}_\alpha \equiv \frac{\partial \mathbf r}{\partial \bar u^\alpha} \tag{55}$$

Using this notation, it is possible to write

$$\bar{\mathbf r}_\beta = \mathbf r_1\frac{\partial u^1}{\partial \bar u^\beta} + \mathbf r_2\frac{\partial u^2}{\partial \bar u^\beta} \tag{56}$$

Problem 13  Do the covariant components $b_\beta$ of a vector $\mathbf v$ (eq. (2) in section 1.1) with respect to the coordinate axes $\mathbf r_1$ and $\mathbf r_2$ transform covariantly according to the definition of covariant 1:st order tensors?

$$b_\beta = \mathbf v\cdot\mathbf r_\beta \tag{57}$$

Solution 13  Yes, they do.

$$\begin{aligned}
\bar b_\beta &= \mathbf v\cdot\bar{\mathbf r}_\beta\\
&= \mathbf v\cdot\left(\mathbf r_1\frac{\partial u^1}{\partial \bar u^\beta} + \mathbf r_2\frac{\partial u^2}{\partial \bar u^\beta}\right)\\
&= (\mathbf v\cdot\mathbf r_1)\frac{\partial u^1}{\partial \bar u^\beta} + (\mathbf v\cdot\mathbf r_2)\frac{\partial u^2}{\partial \bar u^\beta}\\
&= b_1\frac{\partial u^1}{\partial \bar u^\beta} + b_2\frac{\partial u^2}{\partial \bar u^\beta}
\end{aligned} \tag{58}$$

3 First order tensors defined on Euclidean n-space


3.1 The generalization
Mathematicians like to generalize things. How can 1:st order tensors defined on the Euclidean plane be generalized? Probably the most obvious first generalization is to go from the Euclidean plane to Euclidean n-space.

Definition of Euclidean n-space  Let n be a positive integer. Euclidean n-space is defined as the set of all ordered n-tuples $\mathbf r = (p_1, \ldots, p_n)$, called points, where $p_1, \ldots, p_n$ are real numbers (the index k of $p_k$ does not imply covariance; it is only an index).

Definition of an allowable parametrization of Euclidean n-space  An allowable parametrization of Euclidean n-space satisfies:

i) Every point in Euclidean n-space is given unique coordinates $(u^1, \ldots, u^n)$, where $u^1, \ldots, u^n \in \mathbb R$:

$$\mathbf r(u^1, \ldots, u^n) = (p_1(u^1, \ldots, u^n), \ldots, p_n(u^1, \ldots, u^n)) \tag{59}$$

ii) The vectors $\frac{\partial \mathbf r}{\partial u^1}, \ldots, \frac{\partial \mathbf r}{\partial u^n}$ are defined everywhere, linearly independent and $\neq \mathbf 0$.

As before, when coordinates are introduced it is assumed that they come from allowable parametrizations. Generalizing problem 9:
tions. Generalizing problem 9:

Problem 14  Let an n-dimensional vector $\mathbf v$ be given by the contravariant components $(a^1, \ldots, a^n)$ at a fixed point P in Euclidean n-space using the coordinates $u^1, \ldots, u^n$:

$$\mathbf v = a^1\frac{\partial \mathbf r}{\partial u^1} + \cdots + a^n\frac{\partial \mathbf r}{\partial u^n} \tag{60}$$

Using another coordinate system $\bar u^1, \ldots, \bar u^n$, the contravariant components of the vector $\mathbf v$ are $(\bar a^1, \ldots, \bar a^n)$ and $\mathbf v$ can be written:

$$\mathbf v = \bar a^1\frac{\partial \mathbf r}{\partial \bar u^1} + \cdots + \bar a^n\frac{\partial \mathbf r}{\partial \bar u^n} \tag{61}$$

What is the relation between $(a^1, \ldots, a^n)$ and $(\bar a^1, \ldots, \bar a^n)$? Express $(\bar a^1, \ldots, \bar a^n)$ as a function of $(a^1, \ldots, a^n)$.

Solution 14  Using a similar argument as in problem 9,

$$\bar a^\beta = a^1\frac{\partial \bar u^\beta}{\partial u^1} + \cdots + a^n\frac{\partial \bar u^\beta}{\partial u^n} \tag{62}$$

Therefore the generalization of 1:st order tensors to Euclidean n-space becomes:

Definition of a contravariant tensor of 1:st order (or a contravariant vector) on Euclidean n-space  A contravariant tensor of 1:st order is a quantity whose n components $a^\alpha$ are transformed according to

$$\bar a^\beta = a^1\frac{\partial \bar u^\beta}{\partial u^1} + \cdots + a^n\frac{\partial \bar u^\beta}{\partial u^n} \tag{63}$$

under change of coordinate system in Euclidean n-space.

Definition of a covariant tensor of 1:st order (or a covariant vector) on Euclidean n-space  A covariant tensor of 1:st order is a quantity whose n components $b_\beta$ are transformed according to

$$\bar b_\beta = b_1\frac{\partial u^1}{\partial \bar u^\beta} + \cdots + b_n\frac{\partial u^n}{\partial \bar u^\beta} \tag{64}$$

under change of coordinate system in Euclidean n-space.

3.2 The Einstein summation convention

Here it is convenient to introduce the Einstein summation convention, which is used extensively in tensor analysis¹ and relativity theory. The point of the Einstein summation convention is to simplify notation. Consider the definition of 1:st order contravariant tensors. The notation may first be simplified by the introduction of a summation sign:

$$\bar a^\beta = a^1\frac{\partial \bar u^\beta}{\partial u^1} + \cdots + a^n\frac{\partial \bar u^\beta}{\partial u^n} = \sum_{\alpha=1}^{n} a^\alpha\frac{\partial \bar u^\beta}{\partial u^\alpha} \tag{65}$$

This sum is written in the Einstein notation as:

$$\sum_{\alpha=1}^{n} a^\alpha\frac{\partial \bar u^\beta}{\partial u^\alpha} = a^\alpha\frac{\partial \bar u^\beta}{\partial u^\alpha} \tag{66}$$

In the expression $\frac{\partial \bar u^\beta}{\partial u^\alpha}$ the index β is considered a superscript and the index α is considered a subscript.

SUMMATION CONVENTION  If in a product a letter figures twice, once as a superscript and once as a subscript, summation must be carried out from 1 to n with respect to this letter. The summation sign $\sum$ will be omitted.

For example

$$a^\alpha b_\alpha = \sum_{\alpha=1}^{n} a^\alpha b_\alpha = a^1 b_1 + a^2 b_2 + \cdots + a^n b_n \tag{67}$$

¹ The term tensor analysis was coined by Einstein in 1916.

Applying the Einstein summation convention to the definition of 1:st order covariant tensors:

$$\bar b_\beta = b_1\frac{\partial u^1}{\partial \bar u^\beta} + \cdots + b_n\frac{\partial u^n}{\partial \bar u^\beta} = \sum_{\gamma=1}^{n} b_\gamma\frac{\partial u^\gamma}{\partial \bar u^\beta} = b_\gamma\frac{\partial u^\gamma}{\partial \bar u^\beta} \tag{68}$$

Equation (20) in section 2 can be written

$$\mathbf v = a^\beta\mathbf r_\beta \tag{69}$$

Rewriting equation (56) with the Einstein summation notation (and of course generalized to Euclidean n-space)

$$\bar{\mathbf r}_\beta = \mathbf r_\alpha\frac{\partial u^\alpha}{\partial \bar u^\beta} \tag{70}$$

An index with respect to which summation must be carried out is called a summation index or dummy index. The other indices are said to be free indices. Dummy indices may be changed during computations without warning, for example

$$a^\alpha b_\alpha = a^i b_i = a^1 b_1 + a^2 b_2 + \cdots + a^n b_n \tag{71}$$

$$a^\alpha b_\alpha + a^\gamma c_\gamma = a^\alpha(b_\alpha + c_\alpha) \tag{72}$$
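The summation convention maps directly onto numpy.einsum, where an index letter repeated in the subscript string is summed over. A small sketch with made-up arrays (J is just a stand-in for the Jacobian $\partial\bar u^\beta/\partial u^\alpha$):

```python
import numpy as np

n = 3
a = np.array([1.0, 2.0, 3.0])                 # plays the role of a^alpha
b = np.array([2.0, 3.0, 4.0])                 # plays the role of b_alpha
J = np.random.default_rng(0).random((n, n))   # stand-in for d ubar^beta / d u^alpha

# a^alpha b_alpha: the repeated letter 'a' is summed, as in equation (67)
print(np.einsum('a,a->', a, b), a @ b)

# abar^beta = a^alpha d ubar^beta / d u^alpha, as in equation (66)
print(np.einsum('ba,a->b', J, a), J @ a)
```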

Problem 15  Why are the coordinates $(u^1, u^2, \ldots, u^n)$ written with superscripts?

Solution 15  One could be led to believe that the n-tuple $(u^1, u^2, \ldots, u^n)$ transforms contravariantly under coordinate changes because the indices of the coordinate variables are written with superscripts. This is not true. It is the vector $(du^1, du^2, \ldots, du^n)$ that transforms contravariantly. In expressions such as $du^\alpha$, $\frac{\partial u^\alpha}{\partial u^\beta}$ and $\frac{\partial \mathbf r}{\partial u^\beta}$, the index α is considered a superscript and β a subscript. This convention, together with the Einstein summation convention, simplifies the tensor notation.

4 Higher order tensors


Now that first order tensors have been generalized to Euclidean n-space, can the tensor concept be generalized further? Yes, it can, and the way first order tensors are generalized further is inspired by the definition of work in physics. The following problems illustrate this.

Problem 16  Let $a^\alpha$ be the contravariant components of an arbitrary vector $\mathbf v$, which means $\bar a^\gamma = a^\alpha\frac{\partial \bar u^\gamma}{\partial u^\alpha}$. Consider the expression

$$I = b_\alpha a^\alpha \tag{73}$$

Show that if I is invariant under coordinate change, then $b_\alpha$ are the covariant components of a vector $\mathbf w$, which means that $b_\alpha$ transforms covariantly.

Solution 16  Let $\bar a^\gamma$, $\bar b_\gamma$ be the components with respect to the coordinate system $\bar u^\alpha$. Since I is invariant

$$I = \bar b_\gamma\bar a^\gamma = b_\alpha a^\alpha \tag{74}$$

Inserting $\bar a^\gamma = \frac{\partial \bar u^\gamma}{\partial u^\alpha}a^\alpha$

$$\bar b_\gamma\frac{\partial \bar u^\gamma}{\partial u^\alpha}a^\alpha = b_\alpha a^\alpha \tag{75}$$

This expression is valid for an arbitrary $a^\alpha$. Thus it is possible to identify

$$\bar b_\gamma\frac{\partial \bar u^\gamma}{\partial u^\alpha} = b_\alpha \tag{76}$$

Multiply both sides by $\frac{\partial u^\alpha}{\partial \bar u^\beta}$ and sum with respect to α from 1 to n

$$\bar b_\gamma\frac{\partial \bar u^\gamma}{\partial u^\alpha}\frac{\partial u^\alpha}{\partial \bar u^\beta} = b_\alpha\frac{\partial u^\alpha}{\partial \bar u^\beta} \tag{77}$$

Here we use

$$\frac{\partial \bar u^\gamma}{\partial u^\alpha}\frac{\partial u^\alpha}{\partial \bar u^\beta} = \delta_\beta^\gamma \tag{78}$$

where $\delta_\beta^\gamma$ is the Kronecker delta, which is $= 1$ when β = γ and $= 0$ otherwise. Equation (78) follows from the fact that the variables $\bar u^1, \bar u^2, \ldots, \bar u^n$ are independent, together with the chain rule:

$$\delta_\beta^\gamma = \frac{\partial \bar u^\gamma}{\partial \bar u^\beta} = \frac{\partial \bar u^\gamma}{\partial u^1}\frac{\partial u^1}{\partial \bar u^\beta} + \frac{\partial \bar u^\gamma}{\partial u^2}\frac{\partial u^2}{\partial \bar u^\beta} + \cdots + \frac{\partial \bar u^\gamma}{\partial u^n}\frac{\partial u^n}{\partial \bar u^\beta} \;\;\text{(no summation over n here)}\; = \frac{\partial \bar u^\gamma}{\partial u^\alpha}\frac{\partial u^\alpha}{\partial \bar u^\beta} \tag{79}$$

This means

$$\bar b_\gamma\,\delta_\beta^\gamma = b_\alpha\frac{\partial u^\alpha}{\partial \bar u^\beta} \tag{80}$$

$$\bar b_\beta = b_\alpha\frac{\partial u^\alpha}{\partial \bar u^\beta} \tag{81}$$

Thus $b_\alpha$ transforms covariantly. This problem can be generalized, for example to define 2:nd order covariant tensors as in problem 17 and 2:nd order contravariant tensors as in problem 18.

Problem 17  Let $b^\alpha$ and $c^\beta$ be two arbitrary contravariant 1:st order tensors. Consider

$$I = a_{\alpha\beta}\,b^\alpha c^\beta \tag{82}$$

If I is invariant under coordinate change, determine the transformation behaviour of $a_{\alpha\beta}$.

Solution 17  Since I is invariant

$$I = \bar a_{\omega\lambda}\,\bar b^\omega\bar c^\lambda = a_{\alpha\beta}\,b^\alpha c^\beta \tag{83}$$

Inserting $\bar b^\omega = b^\alpha\frac{\partial \bar u^\omega}{\partial u^\alpha}$ and $\bar c^\lambda = c^\beta\frac{\partial \bar u^\lambda}{\partial u^\beta}$

$$I = \bar a_{\omega\lambda}\left(b^\alpha\frac{\partial \bar u^\omega}{\partial u^\alpha}\right)\left(c^\beta\frac{\partial \bar u^\lambda}{\partial u^\beta}\right) = a_{\alpha\beta}\,b^\alpha c^\beta \tag{84}$$

Since $b^\alpha$ and $c^\beta$ are arbitrary it is possible to identify

$$a_{\alpha\beta} = \bar a_{\omega\lambda}\,\frac{\partial \bar u^\omega}{\partial u^\alpha}\frac{\partial \bar u^\lambda}{\partial u^\beta} \tag{85}$$

Multiply both sides by $\frac{\partial u^\alpha}{\partial \bar u^\mu}\frac{\partial u^\beta}{\partial \bar u^\nu}$ and sum with respect to both α and β from 1 to n

$$a_{\alpha\beta}\,\frac{\partial u^\alpha}{\partial \bar u^\mu}\frac{\partial u^\beta}{\partial \bar u^\nu} = \bar a_{\omega\lambda}\,\frac{\partial \bar u^\omega}{\partial u^\alpha}\frac{\partial u^\alpha}{\partial \bar u^\mu}\frac{\partial \bar u^\lambda}{\partial u^\beta}\frac{\partial u^\beta}{\partial \bar u^\nu} = \bar a_{\omega\lambda}\,\delta_\mu^\omega\delta_\nu^\lambda \tag{86}$$

$$\bar a_{\mu\nu} = a_{\alpha\beta}\,\frac{\partial u^\alpha}{\partial \bar u^\mu}\frac{\partial u^\beta}{\partial \bar u^\nu} \tag{87}$$

If I is invariant under coordinate change, then $a_{\alpha\beta}$ defines a 2:nd order covariant tensor.

Definition of a covariant tensor of 2:nd order defined on Euclidean n-space  A covariant tensor of order 2 is a quantity whose $n^2$ components $a_{\alpha\beta}$ are transformed according to

$$\bar a_{\mu\nu} = a_{\alpha\beta}\,\frac{\partial u^\alpha}{\partial \bar u^\mu}\frac{\partial u^\beta}{\partial \bar u^\nu} \tag{88}$$

under change of coordinate system in Euclidean n-space.

Problem 18  Let $b_\alpha$ and $c_\beta$ be two arbitrary covariant 1:st order tensors. Consider

$$I = a^{\alpha\beta}\,b_\alpha c_\beta \tag{89}$$

If I is invariant under coordinate change, determine the transformation behaviour of $a^{\alpha\beta}$.

Solution 18  Since I is invariant

$$I = \bar a^{\omega\lambda}\,\bar b_\omega\bar c_\lambda = a^{\alpha\beta}\,b_\alpha c_\beta \tag{90}$$

Inserting $\bar b_\omega = b_\alpha\frac{\partial u^\alpha}{\partial \bar u^\omega}$ and $\bar c_\lambda = c_\beta\frac{\partial u^\beta}{\partial \bar u^\lambda}$

$$I = \bar a^{\omega\lambda}\left(b_\alpha\frac{\partial u^\alpha}{\partial \bar u^\omega}\right)\left(c_\beta\frac{\partial u^\beta}{\partial \bar u^\lambda}\right) = a^{\alpha\beta}\,b_\alpha c_\beta \tag{91}$$

Since $b_\alpha$ and $c_\beta$ are arbitrary it is possible to identify

$$a^{\alpha\beta} = \bar a^{\omega\lambda}\,\frac{\partial u^\alpha}{\partial \bar u^\omega}\frac{\partial u^\beta}{\partial \bar u^\lambda} \tag{92}$$

Multiply both sides by $\frac{\partial \bar u^\mu}{\partial u^\alpha}\frac{\partial \bar u^\nu}{\partial u^\beta}$ and sum with respect to both α and β from 1 to n

$$a^{\alpha\beta}\,\frac{\partial \bar u^\mu}{\partial u^\alpha}\frac{\partial \bar u^\nu}{\partial u^\beta} = \bar a^{\omega\lambda}\,\frac{\partial u^\alpha}{\partial \bar u^\omega}\frac{\partial \bar u^\mu}{\partial u^\alpha}\frac{\partial u^\beta}{\partial \bar u^\lambda}\frac{\partial \bar u^\nu}{\partial u^\beta} = \bar a^{\omega\lambda}\,\delta_\omega^\mu\delta_\lambda^\nu \tag{93}$$

$$\bar a^{\mu\nu} = a^{\alpha\beta}\,\frac{\partial \bar u^\mu}{\partial u^\alpha}\frac{\partial \bar u^\nu}{\partial u^\beta} \tag{94}$$

If I is invariant under coordinate change, then $a^{\alpha\beta}$ defines a 2:nd order contravariant tensor.

Definition of a contravariant tensor of 2:nd order defined on Euclidean n-space  A contravariant tensor of order 2 is a quantity whose $n^2$ components $a^{\alpha\beta}$ are transformed according to

$$\bar a^{\mu\nu} = a^{\alpha\beta}\,\frac{\partial \bar u^\mu}{\partial u^\alpha}\frac{\partial \bar u^\nu}{\partial u^\beta} \tag{95}$$

under change of coordinate system in Euclidean n-space.

Now, there is a third kind of tensor, the mixed 2:nd order tensor.

Problem 19  Let $b_\alpha$ be an arbitrary covariant 1:st order tensor and $c^\beta$ an arbitrary contravariant 1:st order tensor. Consider

$$I = a^\alpha{}_\beta\,b_\alpha c^\beta \tag{96}$$

If I is invariant under coordinate change, determine the transformation behaviour of $a^\alpha{}_\beta$.

Important!  Notice that the mixed tensor is written $a^\alpha{}_\beta$, and not with α and β right on top of each other. This is because in tensor notation it is not only whether an index is a superscript or a subscript that matters; the horizontal position of the index is also important. This has to do with the possibility of associating a new tensor with a given one through the metric tensor, by raising or lowering an index, and is explained later in this text.

Solution 19  Since I is invariant

$$I = \bar a^\omega{}_\lambda\,\bar b_\omega\bar c^\lambda = a^\alpha{}_\beta\,b_\alpha c^\beta \tag{97}$$

Inserting $\bar b_\omega = b_\alpha\frac{\partial u^\alpha}{\partial \bar u^\omega}$ and $\bar c^\lambda = c^\beta\frac{\partial \bar u^\lambda}{\partial u^\beta}$

$$I = \bar a^\omega{}_\lambda\left(b_\alpha\frac{\partial u^\alpha}{\partial \bar u^\omega}\right)\left(c^\beta\frac{\partial \bar u^\lambda}{\partial u^\beta}\right) = a^\alpha{}_\beta\,b_\alpha c^\beta \tag{98}$$

Since $b_\alpha$ and $c^\beta$ are arbitrary it is possible to identify

$$a^\alpha{}_\beta = \bar a^\omega{}_\lambda\,\frac{\partial u^\alpha}{\partial \bar u^\omega}\frac{\partial \bar u^\lambda}{\partial u^\beta} \tag{99}$$

Multiply both sides by $\frac{\partial \bar u^\mu}{\partial u^\alpha}\frac{\partial u^\beta}{\partial \bar u^\nu}$ and sum with respect to both α and β from 1 to n

$$a^\alpha{}_\beta\,\frac{\partial \bar u^\mu}{\partial u^\alpha}\frac{\partial u^\beta}{\partial \bar u^\nu} = \bar a^\omega{}_\lambda\,\frac{\partial u^\alpha}{\partial \bar u^\omega}\frac{\partial \bar u^\mu}{\partial u^\alpha}\frac{\partial \bar u^\lambda}{\partial u^\beta}\frac{\partial u^\beta}{\partial \bar u^\nu} = \bar a^\omega{}_\lambda\,\delta_\omega^\mu\delta_\nu^\lambda \tag{100}$$

$$\bar a^\mu{}_\nu = a^\alpha{}_\beta\,\frac{\partial \bar u^\mu}{\partial u^\alpha}\frac{\partial u^\beta}{\partial \bar u^\nu} \tag{101}$$

If I is invariant under coordinate change, then $a^\alpha{}_\beta$ defines a 2:nd order mixed tensor. It is covariant to 1:st order and contravariant to 1:st order.

Definition of a mixed tensor of 2:nd order defined on Euclidean n-space  A mixed tensor of order 2, contravariant to order 1 and covariant to order 1, is a quantity whose $n^2$ components $a^\alpha{}_\beta$ are transformed according to

$$\bar a^\mu{}_\nu = a^\alpha{}_\beta\,\frac{\partial \bar u^\mu}{\partial u^\alpha}\frac{\partial u^\beta}{\partial \bar u^\nu} \tag{102}$$

under change of coordinate system in Euclidean n-space.

Continuing in the same way, to define a tensor of general order, consider

$$I = c_{\nu_1\nu_2\cdots\nu_s}{}^{\alpha_1\alpha_2\cdots\alpha_r}\,a^{\nu_1}a^{\nu_2}\cdots a^{\nu_s}\,b_{\alpha_1}b_{\alpha_2}\cdots b_{\alpha_r} \tag{103}$$

(with arbitrary contravariant components $a^\nu$ and arbitrary covariant components $b_\alpha$, as in the problems above). If I is invariant under coordinate change, then $c_{\nu_1\nu_2\cdots\nu_s}{}^{\alpha_1\alpha_2\cdots\alpha_r}$ defines a mixed tensor, contravariant to order r and covariant to order s, and it transforms as

$$\bar c_{\mu_1\mu_2\cdots\mu_s}{}^{\beta_1\beta_2\cdots\beta_r} = c_{\nu_1\nu_2\cdots\nu_s}{}^{\alpha_1\alpha_2\cdots\alpha_r}\,\frac{\partial u^{\nu_1}}{\partial \bar u^{\mu_1}}\frac{\partial u^{\nu_2}}{\partial \bar u^{\mu_2}}\cdots\frac{\partial u^{\nu_s}}{\partial \bar u^{\mu_s}}\,\frac{\partial \bar u^{\beta_1}}{\partial u^{\alpha_1}}\frac{\partial \bar u^{\beta_2}}{\partial u^{\alpha_2}}\cdots\frac{\partial \bar u^{\beta_r}}{\partial u^{\alpha_r}} \tag{104}$$

Definition of a general order mixed tensor defined on Euclidean n-space  A mixed tensor, contravariant to order r and covariant to order s, is a quantity whose $n^{r+s}$ components $c_{\nu_1\nu_2\cdots\nu_s}{}^{\alpha_1\alpha_2\cdots\alpha_r}$ are transformed according to

$$\bar c_{\mu_1\mu_2\cdots\mu_s}{}^{\beta_1\beta_2\cdots\beta_r} = c_{\nu_1\nu_2\cdots\nu_s}{}^{\alpha_1\alpha_2\cdots\alpha_r}\,\frac{\partial u^{\nu_1}}{\partial \bar u^{\mu_1}}\cdots\frac{\partial u^{\nu_s}}{\partial \bar u^{\mu_s}}\,\frac{\partial \bar u^{\beta_1}}{\partial u^{\alpha_1}}\cdots\frac{\partial \bar u^{\beta_r}}{\partial u^{\alpha_r}} \tag{105}$$

under change of coordinate system in Euclidean n-space.
Problem 20 What’s the definition of a zero order tensor?

Solution 20  A zero order tensor is defined as a scalar. Why is this so? For example, a 2:nd order tensor has $n^2$ components and a 1:st order tensor has $n^1 = n$ components, so a zero order tensor should have $n^0 = 1$ component.

Now that the tensor concept has been introduced, the rest of this text is devoted to explaining why tensors are written $c_{\nu_1\nu_2\cdots\nu_s}{}^{\alpha_1\alpha_2\cdots\alpha_r}$, with the upper and lower indices staggered, and not $c^{\alpha_1\alpha_2\cdots\alpha_r}_{\nu_1\nu_2\cdots\nu_s}$ with the indices directly on top of each other.

4.1 The connection with vectors and matrices

Here it is convenient to give a first motivation for why a 2:nd order mixed tensor is written $a^\alpha{}_\beta$ instead of $a^\alpha_\beta$ (with the indices stacked). When 1:st order tensors are viewed as vectors and 2:nd order tensors are viewed as matrices, the convention is to let the first index α in $a^\alpha{}_\beta$ denote the row number and the second index β denote the column number of the matrix.

CONVENTION First order tensor: The index denotes the row in a column vector.
Second order tensor: The first index denotes the row and the second index denotes the
column in a matrix.

For example, if $a^\alpha{}_\beta$ is defined on the Euclidean plane and $a^1{}_1 = 1$, $a^1{}_2 = 2$, $a^2{}_1 = 3$, $a^2{}_2 = 4$, then this can be written in matrix notation

$$(a^\alpha{}_\beta) = \begin{pmatrix}1 & 2\\3 & 4\end{pmatrix} \tag{106}$$

If $b_\beta$ is defined on Euclidean 4-space and $b_1 = 1$, $b_2 = 3$, $b_3 = 5$, $b_4 = 7$, then this is written as a column matrix

$$(b_\beta) = \begin{pmatrix}1\\3\\5\\7\end{pmatrix} \tag{107}$$
If the tensor notation were $a^\alpha_\beta$, with α and β right on top of each other, then with the above convention it would not be possible to decide which index denotes the row and which the column of the matrix. Maybe one could instead choose the convention that the lower index always denotes the row and the upper index always denotes the column. But then, which index would denote the row in a tensor such as $a^{\alpha\beta}$, where both indices are superscripts?

4.2 The Kronecker delta and symmetric tensors


In equation (78) the Kronecker delta $\delta_\mu^\nu$ was only introduced as a symbol that takes the value 1 when µ = ν and 0 otherwise. But the Kronecker delta is more than a symbol. In a given coordinate system $u^\alpha$ it is defined as

$$\delta_\mu^\nu = \frac{\partial u^\nu}{\partial u^\mu} \tag{108}$$

and it is a mixed second order tensor, contravariant to order 1 and covariant to order 1. If the lower index µ denotes the row and ν the column, then $\delta_\mu^\nu$ can be written

$$(\delta_\mu^\nu) = \begin{pmatrix}1 & 0 & \cdots & 0\\0 & 1 & \cdots & 0\\\vdots & \vdots & \ddots & \vdots\\0 & 0 & \cdots & 1\end{pmatrix} \tag{109}$$

Notice that the indices of the Kronecker delta tensor are written right on top of each other, and not staggered as $\delta_\mu{}^\nu$ or $\delta^\nu{}_\mu$. This is because one can interchange the two index positions, $\delta_\mu^\nu = \delta_\nu^\mu$. A tensor with this property is called a symmetric tensor, and for such a tensor it does not matter which index denotes the row and which denotes the column. Since it is a mixed tensor it transforms as

$$\bar\delta_\mu^\nu = \delta_\beta^\alpha\,\frac{\partial \bar u^\nu}{\partial u^\alpha}\frac{\partial u^\beta}{\partial \bar u^\mu} = \frac{\partial \bar u^\nu}{\partial u^\alpha}\frac{\partial u^\alpha}{\partial \bar u^\mu} = \frac{\partial \bar u^\nu}{\partial \bar u^\mu} = \delta_\mu^\nu \tag{110}$$
Thus the Kronecker delta tensor has the same components in all coordinate systems.
In Cartesian coordinate systems where there is no difference between contravariant and
covariant components it is usually written with both indices subscript δµν .

4.3 Matrix multiplication and contraction


When doing calculations with 1:st and 2:nd order tensors, it sometimes simplifies cal-
culations to take advantage of matrix multiplication. The following problem illustrates
this.

Problem 21  Let $A_{\alpha\beta}$ and $B^{\alpha\beta}$ be two 2:nd order tensors given by

$$(A_{\alpha\beta}) = \begin{pmatrix}1 & 2\\3 & 4\end{pmatrix} \tag{111}$$

$$(B^{\alpha\beta}) = \begin{pmatrix}4 & 3\\2 & 1\end{pmatrix} \tag{112}$$

Calculate $A_{\alpha\gamma}B^{\gamma\beta}$!

Solution 21  One way of solving this problem is to write out what the expression $A_{\alpha\gamma}B^{\gamma\beta}$ is shorthand notation for and calculate each case individually:

$$\begin{aligned}
A_{11}B^{11} + A_{12}B^{21} &\quad (\alpha = 1, \beta = 1)\\
A_{11}B^{12} + A_{12}B^{22} &\quad (\alpha = 1, \beta = 2)\\
A_{21}B^{11} + A_{22}B^{21} &\quad (\alpha = 2, \beta = 1)\\
A_{21}B^{12} + A_{22}B^{22} &\quad (\alpha = 2, \beta = 2)
\end{aligned} \tag{113}$$

Here an alternative solution is given. In this problem it is advantageous to make use of matrix multiplication. Assume there is a tensor $C_\alpha{}^\beta = A_{\alpha\gamma}B^{\gamma\beta}$; for the moment write its indices right on top of each other, $C_\alpha^\beta$, because it is not yet decided which index denotes the row and which the column. Writing out what the expression $C_\alpha^\beta = A_{\alpha\gamma}B^{\gamma\beta}$ is shorthand notation for:

$$\begin{aligned}
C_1^1 &= A_{11}B^{11} + A_{12}B^{21}\\
C_1^2 &= A_{11}B^{12} + A_{12}B^{22}\\
C_2^1 &= A_{21}B^{11} + A_{22}B^{21}\\
C_2^2 &= A_{21}B^{12} + A_{22}B^{22}
\end{aligned} \tag{114}$$

Rearranging

$$\begin{aligned}
C_1^1 &= A_{11}B^{11} + A_{12}B^{21}\\
C_2^1 &= A_{21}B^{11} + A_{22}B^{21}\\
C_1^2 &= A_{11}B^{12} + A_{12}B^{22}\\
C_2^2 &= A_{21}B^{12} + A_{22}B^{22}
\end{aligned} \tag{115}$$

This can be written

$$\begin{pmatrix}C_1^1\\C_2^1\end{pmatrix} = \begin{pmatrix}A_{11} & A_{12}\\A_{21} & A_{22}\end{pmatrix}\begin{pmatrix}B^{11}\\B^{21}\end{pmatrix} \tag{116}$$

$$\begin{pmatrix}C_1^2\\C_2^2\end{pmatrix} = \begin{pmatrix}A_{11} & A_{12}\\A_{21} & A_{22}\end{pmatrix}\begin{pmatrix}B^{12}\\B^{22}\end{pmatrix} \tag{117}$$

Or, even more compactly,

$$\begin{pmatrix}C_1^1 & C_1^2\\C_2^1 & C_2^2\end{pmatrix} = \begin{pmatrix}A_{11} & A_{12}\\A_{21} & A_{22}\end{pmatrix}\begin{pmatrix}B^{11} & B^{12}\\B^{21} & B^{22}\end{pmatrix} \tag{118}$$

Notice that in the matrices with the A:s and the B:s, α denotes the row and β the column of $A_{\alpha\beta}$ and $B^{\alpha\beta}$. Considering the matrix with the C:s, we let α denote the row and β the column of C, and from now on write it $C_\alpha{}^\beta$. Thus
$$\begin{pmatrix}C_1{}^1 & C_1{}^2\\C_2{}^1 & C_2{}^2\end{pmatrix} = \begin{pmatrix}A_{11} & A_{12}\\A_{21} & A_{22}\end{pmatrix}\begin{pmatrix}B^{11} & B^{12}\\B^{21} & B^{22}\end{pmatrix} \tag{119}$$

Or in tensor notation $C_\alpha{}^\beta = A_{\alpha\gamma}B^{\gamma\beta}$. Returning now to the original problem, the solution is given by

$$(C_\alpha{}^\beta) = \begin{pmatrix}1 & 2\\3 & 4\end{pmatrix}\begin{pmatrix}4 & 3\\2 & 1\end{pmatrix} = \begin{pmatrix}8 & 5\\20 & 13\end{pmatrix} \tag{120}$$

Notice that the method used in this problem can be generalized. If instead

$$(A_{\alpha\beta}) = \begin{pmatrix}A_{11} & A_{12} & \cdots & A_{1n}\\A_{21} & A_{22} & \cdots & A_{2n}\\\vdots & \vdots & \ddots & \vdots\\A_{n1} & A_{n2} & \cdots & A_{nn}\end{pmatrix} \tag{121}$$

$$(B^{\alpha\beta}) = \begin{pmatrix}B^{11} & B^{12} & \cdots & B^{1n}\\B^{21} & B^{22} & \cdots & B^{2n}\\\vdots & \vdots & \ddots & \vdots\\B^{n1} & B^{n2} & \cdots & B^{nn}\end{pmatrix} \tag{122}$$

then $C_\alpha{}^\beta = A_{\alpha\gamma}B^{\gamma\beta}$ is given by

$$(C_\alpha{}^\beta) = \begin{pmatrix}A_{11} & A_{12} & \cdots & A_{1n}\\A_{21} & A_{22} & \cdots & A_{2n}\\\vdots & \vdots & \ddots & \vdots\\A_{n1} & A_{n2} & \cdots & A_{nn}\end{pmatrix}\begin{pmatrix}B^{11} & B^{12} & \cdots & B^{1n}\\B^{21} & B^{22} & \cdots & B^{2n}\\\vdots & \vdots & \ddots & \vdots\\B^{n1} & B^{n2} & \cdots & B^{nn}\end{pmatrix} \tag{123}$$

If we set two indices, one contravariant and one covariant, equal to each other in a mixed tensor and sum with respect to this pair of indices, we obtain a tensor whose order is one less contravariant and one less covariant. This process is called contraction of the given tensor. Contraction of a mixed tensor of second order, for example $a^\alpha{}_\beta$, gives a scalar

$$a^\alpha{}_\alpha = a^1{}_1 + a^2{}_2 + \cdots + a^n{}_n \tag{124}$$

Rule to memorize  If two 2:nd order tensors are multiplied with each other (for example $A_{\alpha\gamma}B^{\gamma\beta}$ or $A^{\alpha\gamma}B_{\gamma\beta}$), and the column index of the tensor with matrix representation A is contracted with the row index of the tensor with matrix representation B, then the resulting tensor can be calculated by the matrix multiplication AB.
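The rule can be checked with the data of Problem 21. The sketch below (not part of the original text) computes $A_{\alpha\gamma}B^{\gamma\beta}$ both as an ordinary matrix product and with numpy.einsum, and also evaluates the contraction (124) of the resulting mixed tensor as a trace.

```python
import numpy as np

# The matrices of Problem 21
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])    # A_{alpha gamma}: first index = row, second = column
B = np.array([[4.0, 3.0],
              [2.0, 1.0]])    # B^{gamma beta}: first index = row, second = column

C = A @ B                                  # C_alpha^beta = A_{alpha gamma} B^{gamma beta}
C_einsum = np.einsum('ag,gb->ab', A, B)    # the same contraction written index by index

print(C)                                   # [[ 8.  5.]
                                           #  [20. 13.]]
print(np.allclose(C, C_einsum))            # True

# Contraction of the mixed tensor C_alpha^beta, equation (124): a trace
print(np.einsum('aa->', C), np.trace(C))
```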

4.4 The metric tensor

Problem 22  Given the contravariant components $a^\beta$ and the vectors $\mathbf r_\beta$, determine the covariant components $a_\alpha$.

Solution 22  Using the definitions of contravariant and covariant components, equations (2) and (56), $a_\alpha = \mathbf r_\alpha\cdot\mathbf v$ and $\mathbf v = a^\beta\mathbf r_\beta$, so

$$a_\alpha = \mathbf r_\alpha\cdot\mathbf v = \mathbf r_\alpha\cdot(a^\beta\mathbf r_\beta) = (\mathbf r_\alpha\cdot\mathbf r_\beta)\,a^\beta \tag{125}$$

Here it is convenient to introduce the metric tensor $g_{\alpha\beta}$, which is defined as:

$$g_{\alpha\beta} = \mathbf r_\alpha\cdot\mathbf r_\beta \tag{126}$$

Thus

$$a_\alpha = g_{\alpha\beta}\,a^\beta \tag{127}$$

Problem 23  Show that the metric tensor $g_{\alpha\beta}$ is a 2:nd order covariant tensor.

Solution 23  According to the definition of the metric tensor $g_{\alpha\beta}$,

$$\begin{aligned}
\bar g_{\mu\nu} &= \bar{\mathbf r}_\mu\cdot\bar{\mathbf r}_\nu\\
&= \left(\mathbf r_\alpha\frac{\partial u^\alpha}{\partial \bar u^\mu}\right)\cdot\left(\mathbf r_\beta\frac{\partial u^\beta}{\partial \bar u^\nu}\right)\\
&= (\mathbf r_\alpha\cdot\mathbf r_\beta)\,\frac{\partial u^\alpha}{\partial \bar u^\mu}\frac{\partial u^\beta}{\partial \bar u^\nu}\\
&= g_{\alpha\beta}\,\frac{\partial u^\alpha}{\partial \bar u^\mu}\frac{\partial u^\beta}{\partial \bar u^\nu}
\end{aligned} \tag{128}$$

which shows that the metric tensor $g_{\alpha\beta}$ is indeed a 2:nd order covariant tensor.

Problem 24  In problem 22 the covariant and contravariant components are related through the metric tensor as $a_\alpha = g_{\alpha\beta}a^\beta$. When memorizing this formula, how shall one remember that it is $a_\alpha = g_{\alpha\beta}a^\beta$ and not $a_\alpha = g_{\beta\alpha}a^\beta$?

Solution 24  The metric tensor is by definition a symmetric tensor, $g_{\alpha\beta} = g_{\beta\alpha}$, and thus both expressions are correct.

Problem 25  Given the covariant components $a_\beta$ and the vectors $\mathbf r_\beta$, determine the contravariant components $a^\alpha$.

Solution 25  According to problem 22 the connection between contravariant and covariant components is given by $a_\alpha = g_{\alpha\beta}a^\beta$. Assume we are working in Euclidean 2-space. Since the column index β of $g_{\alpha\beta}$ is contracted with the row index β of $a^\beta$, the relation $a_\alpha = g_{\alpha\beta}a^\beta$ can be written in matrix notation as

$$\begin{pmatrix}a_1\\a_2\end{pmatrix} = \begin{pmatrix}g_{11} & g_{12}\\g_{21} & g_{22}\end{pmatrix}\begin{pmatrix}a^1\\a^2\end{pmatrix} \tag{129}$$

Problem 26 shows that the matrix $(g_{\alpha\beta})$ is invertible, and thus it is possible to multiply both sides with the inverse matrix $(g_{\alpha\beta})^{-1}$, which gives

$$\begin{pmatrix}a^1\\a^2\end{pmatrix} = \begin{pmatrix}g_{11} & g_{12}\\g_{21} & g_{22}\end{pmatrix}^{-1}\begin{pmatrix}a_1\\a_2\end{pmatrix} \tag{130}$$

There is a tensor $g^{\alpha\beta}$ which is defined as the inverse of the metric tensor:

$$(g^{\alpha\beta}) = (g_{\alpha\beta})^{-1} \tag{131}$$

Here

$$(g^{\alpha\beta}) = (g_{\alpha\beta})^{-1} = \begin{pmatrix}g_{11} & g_{12}\\g_{21} & g_{22}\end{pmatrix}^{-1} = \frac{1}{g_{11}g_{22} - g_{21}g_{12}}\begin{pmatrix}g_{22} & -g_{12}\\-g_{21} & g_{11}\end{pmatrix} \tag{132}$$

In the differential geometry literature one often finds the notation g for the determinant of the matrix $(g_{\alpha\beta})$. Here this means that $g = g_{11}g_{22} - g_{21}g_{12}$, and one can write

$$(g^{\alpha\beta}) = \frac{1}{g}\begin{pmatrix}g_{22} & -g_{12}\\-g_{21} & g_{11}\end{pmatrix} \tag{133}$$

which means that $g^{11} = \frac{g_{22}}{g}$, $g^{12} = -\frac{g_{12}}{g}$, $g^{21} = -\frac{g_{21}}{g}$, $g^{22} = \frac{g_{11}}{g}$. With this notation it is possible to write

$$\begin{pmatrix}a^1\\a^2\end{pmatrix} = \begin{pmatrix}g^{11} & g^{12}\\g^{21} & g^{22}\end{pmatrix}\begin{pmatrix}a_1\\a_2\end{pmatrix} \tag{134}$$

Writing this in tensor notation

$$a^\alpha = g^{\alpha\beta}a_\beta \tag{135}$$

Equation (135) is valid not only in Euclidean 2-space but also in Euclidean n-space.
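As a numerical illustration (the code is not part of the original text), the metric of the coordinate axes from Problem 1, $\mathbf x_1 = (1, 0)$ and $\mathbf x_2 = (1, 2)$, reproduces the lowering and raising of an index in equations (127) and (135); it recovers exactly the covariant components found in Problem 1b.

```python
import numpy as np

# Basis vectors ("coordinate axes") taken from Problem 1
r1 = np.array([1.0, 0.0])
r2 = np.array([1.0, 2.0])

g = np.array([[r1 @ r1, r1 @ r2],
              [r2 @ r1, r2 @ r2]])   # g_{alpha beta} = r_alpha . r_beta
g_inv = np.linalg.inv(g)             # g^{alpha beta}

a_up = np.array([0.0, 1.0])          # contravariant components of v from Problem 1a
a_down = g @ a_up                    # lowering: a_alpha = g_{alpha beta} a^beta

print(a_down)                        # [1. 5.], the covariant components of Problem 1b
print(g_inv @ a_down)                # raising gives back [0. 1.]
```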

Problem 26  In Euclidean n-space the vectors $\mathbf r_1, \mathbf r_2, \ldots, \mathbf r_n$ are linearly independent. Show that the n × n matrix $(g_{\alpha\beta})$ is invertible, where

$$(g_{\alpha\beta}) = \begin{pmatrix}g_{11} & g_{12} & \cdots & g_{1n}\\g_{21} & g_{22} & \cdots & g_{2n}\\\vdots & \vdots & \ddots & \vdots\\g_{n1} & g_{n2} & \cdots & g_{nn}\end{pmatrix} \tag{136}$$

and $g_{\alpha\beta} = \mathbf r_\alpha\cdot\mathbf r_\beta$.

Solution 26  The proof is divided into two steps. The first step proves the assertion when $g_{\alpha\beta}$ is defined on the Euclidean plane. The second step proves the assertion when $g_{\alpha\beta}$ is defined on Euclidean n-space.

Step 1: When defined on the Euclidean plane the metric has 4 components $g_{11}$, $g_{12}$, $g_{21}$, $g_{22}$, written in matrix notation as:

$$(g_{\alpha\beta}) = \begin{pmatrix}g_{11} & g_{12}\\g_{21} & g_{22}\end{pmatrix} \tag{137}$$

The matrix $(g_{\alpha\beta})$ is invertible if and only if its column vectors are linearly independent. Assuming that they are linearly dependent will lead to a contradiction. If they are linearly dependent, there is a real number $k \neq 0$ such that

$$\begin{pmatrix}g_{11}\\g_{21}\end{pmatrix} = k\begin{pmatrix}g_{12}\\g_{22}\end{pmatrix} \tag{138}$$

Notice that $g_{12} = \mathbf r_1\cdot\mathbf r_2 = g_{21}$. Notice also that $|g_{12}| < \sqrt{g_{11}g_{22}}$. That this is so can be seen from the definition of the scalar product:

$$g_{12} = \mathbf r_1\cdot\mathbf r_2 = |\mathbf r_1||\mathbf r_2|\cos\theta \tag{139}$$

where $0 \leq \theta \leq \pi$ is the angle between the two vectors $\mathbf r_1$ and $\mathbf r_2$. Since $\mathbf r_1$ and $\mathbf r_2$ are linearly independent, $\theta \neq 0$ and $\theta \neq \pi$, which means that $|\cos\theta| < 1$ and

$$|g_{12}| < |\mathbf r_1||\mathbf r_2| = \sqrt{|\mathbf r_1|^2|\mathbf r_2|^2} = \sqrt{g_{11}g_{22}} \tag{140}$$

Now using equation (138), that is $g_{11} = kg_{12}$ and $g_{21} = kg_{22}$,

$$|g_{12}| < \sqrt{g_{11}g_{22}} = \sqrt{(kg_{12})\left(\frac{g_{21}}{k}\right)} = \sqrt{g_{12}g_{21}} = \sqrt{g_{12}^2} = |g_{12}| \tag{141}$$

Thus we arrive at the conclusion $|g_{12}| < |g_{12}|$, which is obviously not true. Thus the two column vectors of the matrix $(g_{\alpha\beta})$ are linearly independent and the invertibility of $(g_{\alpha\beta})$ follows.

Step 2: Now to prove the general case when $g_{\alpha\beta}$ is defined on Euclidean n-space. If two column vectors of the matrix $(g_{\alpha\beta})$, say columns α and β with $\beta \neq \alpha$, are linearly dependent, there is a real number $k \neq 0$ such that (here γ runs over the rows, $1 \leq \gamma \leq n$)

$$\begin{pmatrix}g_{1\alpha}\\\vdots\\g_{\gamma\alpha}\\\vdots\\g_{n\alpha}\end{pmatrix} = k\begin{pmatrix}g_{1\beta}\\\vdots\\g_{\gamma\beta}\\\vdots\\g_{n\beta}\end{pmatrix} \tag{142}$$

Row α and row β of this equality mean

$$g_{\alpha\alpha} = kg_{\alpha\beta} \tag{143}$$

$$g_{\beta\alpha} = kg_{\beta\beta} \tag{144}$$

Using $g_{\alpha\beta} = g_{\beta\alpha}$ and $|g_{\alpha\beta}| < \sqrt{g_{\alpha\alpha}g_{\beta\beta}}$ (which holds since $\mathbf r_\alpha$ and $\mathbf r_\beta$ are linearly independent, exactly as in step 1),

$$|g_{\alpha\beta}| < \sqrt{g_{\alpha\alpha}g_{\beta\beta}} = \sqrt{(kg_{\alpha\beta})\left(\frac{g_{\beta\alpha}}{k}\right)} = \sqrt{g_{\alpha\beta}g_{\beta\alpha}} = |g_{\alpha\beta}| \tag{145}$$

Thus we arrive at the contradiction $|g_{\alpha\beta}| < |g_{\alpha\beta}|$, which means that the column vectors of the matrix $(g_{\alpha\beta})$ must be linearly independent, and the matrix $(g_{\alpha\beta})$ is therefore invertible.

Problem 27  Given the contravariant components $a^\alpha$ of a vector $\mathbf v$ and the vectors $\mathbf r_\alpha$, the vector $\mathbf v$ can be written

$$\mathbf v = a^\alpha\mathbf r_\alpha \tag{146}$$

But if the covariant components $a_\beta$ of the vector $\mathbf v$ were given instead of the contravariant components $a^\alpha$, the vector $\mathbf v$ would still be completely specified, and therefore there should exist vectors $\mathbf r^\beta$ such that it is possible to write

$$\mathbf v = a_\beta\mathbf r^\beta \tag{147}$$

Determine the vectors $\mathbf r^\beta$!

Solution 27  Since the vectors $\mathbf r_\beta$ are given, the metric tensor $g_{\alpha\beta} = \mathbf r_\alpha\cdot\mathbf r_\beta$ is also given. This in turn implies that the tensor $(g^{\alpha\beta}) = (g_{\alpha\beta})^{-1}$ is also given. The relationship between the contravariant components $a^\alpha$ and the covariant components $a_\beta$ of a vector is given by

$$a^\alpha = g^{\alpha\beta}a_\beta \tag{148}$$

Write the contravariant components of the vector $\mathbf r^\gamma$ as $a^{\alpha\gamma}$ and the covariant components of this vector as $a_\beta{}^\gamma$. This means

$$a^{\alpha\gamma} = g^{\alpha\beta}a_\beta{}^\gamma \tag{149}$$

Since $a^{\alpha\gamma}$ are the contravariant components it is possible to write

$$\mathbf r^\gamma = a^{\alpha\gamma}\mathbf r_\alpha \tag{150}$$

But

$$\mathbf r^\gamma = \delta_\beta^\gamma\,\mathbf r^\beta \tag{151}$$

so, comparing with (147), the covariant components of $\mathbf r^\gamma$ are $a_\beta{}^\gamma = \delta_\beta^\gamma$. Therefore

$$a^{\alpha\gamma} = g^{\alpha\beta}a_\beta{}^\gamma = g^{\alpha\beta}\delta_\beta^\gamma = g^{\alpha\gamma} \tag{152}$$

$$\mathbf r^\gamma = g^{\alpha\gamma}\mathbf r_\alpha \tag{153}$$
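A short sketch (illustrative only, again using the basis from Problem 1) computes the reciprocal basis $\mathbf r^\gamma = g^{\alpha\gamma}\mathbf r_\alpha$ of equation (153) and checks the standard identity $\mathbf r^\alpha\cdot\mathbf r_\beta = \delta^\alpha_\beta$ (which follows from $g^{\alpha\mu}g_{\mu\beta} = \delta^\alpha_\beta$) as well as $\mathbf r^\alpha\cdot\mathbf r^\beta = g^{\alpha\beta}$ from Problem 28 below.

```python
import numpy as np

r1 = np.array([1.0, 0.0])
r2 = np.array([1.0, 2.0])
R = np.column_stack([r1, r2])      # columns are r_1, r_2

g = R.T @ R                        # g_{alpha beta} = r_alpha . r_beta
g_inv = np.linalg.inv(g)           # g^{alpha beta}

# Reciprocal basis, equation (153): column gamma of R_up is r^gamma = g^{alpha gamma} r_alpha
R_up = R @ g_inv

print(R_up.T @ R)                           # identity matrix: r^alpha . r_beta = delta
print(np.allclose(R_up.T @ R_up, g_inv))    # True: r^alpha . r^beta = g^{alpha beta}
```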

Problem 28  Since

$$g_{\alpha\beta} = \mathbf r_\alpha\cdot\mathbf r_\beta \tag{154}$$

one could suspect that

$$g^{\alpha\beta} = \mathbf r^\alpha\cdot\mathbf r^\beta \tag{155}$$

Show that this is so!

Solution 28  Starting with the expression $\mathbf r^\alpha\cdot\mathbf r^\beta$,

$$\mathbf r^\alpha\cdot\mathbf r^\beta = (g^{\alpha\mu}\mathbf r_\mu)\cdot(g^{\beta\nu}\mathbf r_\nu) = g^{\alpha\mu}g^{\beta\nu}g_{\mu\nu} \tag{156}$$

Here we shall use $g^{\alpha\mu}g_{\mu\nu} = \delta_\nu^\alpha$, a relationship between $g^{\alpha\beta}$ and $g_{\alpha\beta}$ that is used a lot in tensor calculations. It follows from $(g^{\alpha\beta}) = (g_{\alpha\beta})^{-1}$, which means that

$$\begin{pmatrix}g^{11} & g^{12} & \cdots & g^{1n}\\g^{21} & g^{22} & \cdots & g^{2n}\\\vdots & \vdots & \ddots & \vdots\\g^{n1} & g^{n2} & \cdots & g^{nn}\end{pmatrix}\begin{pmatrix}g_{11} & g_{12} & \cdots & g_{1n}\\g_{21} & g_{22} & \cdots & g_{2n}\\\vdots & \vdots & \ddots & \vdots\\g_{n1} & g_{n2} & \cdots & g_{nn}\end{pmatrix} = \begin{pmatrix}1 & 0 & \cdots & 0\\0 & 1 & \cdots & 0\\\vdots & \vdots & \ddots & \vdots\\0 & 0 & \cdots & 1\end{pmatrix} \tag{157}$$

Continuing,

$$\mathbf r^\alpha\cdot\mathbf r^\beta = g^{\alpha\mu}g_{\mu\nu}g^{\beta\nu} = \delta_\nu^\alpha g^{\beta\nu} = g^{\beta\alpha} \tag{158}$$

Since $(g_{\alpha\beta})$ is a symmetric matrix, the inverse matrix $(g^{\alpha\beta})$ is also symmetric, which means $g^{\alpha\beta} = g^{\beta\alpha}$. Thus $\mathbf r^\alpha\cdot\mathbf r^\beta = g^{\alpha\beta}$, as suspected.

Problem 29  The vectors $\mathbf r_\beta$ are linearly independent. A natural question thus arises: are the vectors $\mathbf r^\gamma$ linearly independent?

Solution 29  Yes, they are linearly independent: since $\mathbf r^\gamma = g^{\alpha\gamma}\mathbf r_\alpha$ and the matrix $(g^{\alpha\gamma})$ is invertible, a nontrivial linear relation among the vectors $\mathbf r^\gamma$ would give a nontrivial linear relation among the linearly independent vectors $\mathbf r_\alpha$, which is impossible.

4.5 Raising and lowering an index

We have seen that

$$a^\alpha = g^{\alpha\beta}a_\beta, \qquad \mathbf r^\alpha = g^{\alpha\beta}\mathbf r_\beta \tag{159}$$

$$a_\alpha = g_{\alpha\beta}a^\beta, \qquad \mathbf r_\alpha = g_{\alpha\beta}\mathbf r^\beta \tag{160}$$

Here we show by an example that in general $a_\mu{}^\nu \neq a^\mu{}_\nu$, so the horizontal position of the indices matters. Lowering the first index and raising the second index of $a^\alpha{}_\beta$ gives

$$a_\mu{}^\nu = g_{\mu\alpha}\,a^{\alpha\nu} = g_{\mu\alpha}\,a^\alpha{}_\beta\,g^{\beta\nu} \tag{161}$$

In matrix form, $(a^{\alpha\nu}) = (a^\alpha{}_\beta)(g^{\beta\nu})$ and $(a_\mu{}^\nu) = (g_{\mu\alpha})(a^{\alpha\nu})$. Take for example

$$(g_{\alpha\beta}) = \begin{pmatrix}1 & 1\\1 & 2\end{pmatrix} \tag{162}$$

$$(g^{\alpha\beta}) = \begin{pmatrix}2 & -1\\-1 & 1\end{pmatrix} \tag{163}$$

$$(a^\alpha{}_\beta) = \begin{pmatrix}1 & 2\\3 & 4\end{pmatrix} \tag{164}$$

$$(a^{\alpha\beta}) = \begin{pmatrix}1 & 2\\3 & 4\end{pmatrix}\begin{pmatrix}2 & -1\\-1 & 1\end{pmatrix} = \begin{pmatrix}0 & 1\\2 & 1\end{pmatrix} \tag{165}$$

$$(a_\mu{}^\nu) = \begin{pmatrix}1 & 1\\1 & 2\end{pmatrix}\begin{pmatrix}0 & 1\\2 & 1\end{pmatrix} = \begin{pmatrix}2 & 2\\4 & 3\end{pmatrix} \tag{166}$$

so $(a_\mu{}^\nu) \neq (a^\mu{}_\nu)$ in this example.
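The computation above is easy to reproduce numerically; the sketch below is just a restatement of equations (162)-(166), confirming that $a_\mu{}^\nu$ and $a^\mu{}_\nu$ differ in this example.

```python
import numpy as np

g = np.array([[1.0, 1.0],
              [1.0, 2.0]])        # g_{alpha beta}, equation (162)
g_inv = np.linalg.inv(g)          # g^{alpha beta} = [[2, -1], [-1, 1]], equation (163)
a_mixed = np.array([[1.0, 2.0],
                    [3.0, 4.0]])  # a^alpha_beta, equation (164)

a_upup = a_mixed @ g_inv          # a^{alpha beta} = a^alpha_gamma g^{gamma beta}
a_lowup = g @ a_upup              # a_mu^nu = g_{mu alpha} a^{alpha nu}

print(a_upup)                     # [[0. 1.], [2. 1.]], equation (165)
print(a_lowup)                    # [[2. 2.], [4. 3.]], equation (166)
print(np.allclose(a_lowup, a_mixed))   # False: a_mu^nu differs from a^mu_nu
```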

4.6 Summary

Definition of a tensor of arbitrary order on Euclidean n-space
Let r, s and n be integers ≥ 1.

• A zero order tensor is defined as a scalar.

• A contravariant tensor of order r is a quantity whose $n^r$ components $a^{\alpha_1\alpha_2\cdots\alpha_r}$ are transformed according to

$$\bar a^{\beta_1\beta_2\cdots\beta_r} = a^{\alpha_1\alpha_2\cdots\alpha_r}\,\frac{\partial \bar u^{\beta_1}}{\partial u^{\alpha_1}}\frac{\partial \bar u^{\beta_2}}{\partial u^{\alpha_2}}\cdots\frac{\partial \bar u^{\beta_r}}{\partial u^{\alpha_r}} \tag{167}$$

under change of coordinate system in Euclidean n-space.

• A covariant tensor of order s is a quantity whose $n^s$ components $b_{\nu_1\nu_2\cdots\nu_s}$ are transformed according to

$$\bar b_{\mu_1\mu_2\cdots\mu_s} = b_{\nu_1\nu_2\cdots\nu_s}\,\frac{\partial u^{\nu_1}}{\partial \bar u^{\mu_1}}\frac{\partial u^{\nu_2}}{\partial \bar u^{\mu_2}}\cdots\frac{\partial u^{\nu_s}}{\partial \bar u^{\mu_s}} \tag{168}$$

under change of coordinate system in Euclidean n-space.

• A mixed tensor, contravariant to order r and covariant to order s, is a quantity whose $n^{r+s}$ components $c_{\nu_1\nu_2\cdots\nu_s}{}^{\alpha_1\alpha_2\cdots\alpha_r}$ are transformed according to

$$\bar c_{\mu_1\mu_2\cdots\mu_s}{}^{\beta_1\beta_2\cdots\beta_r} = c_{\nu_1\nu_2\cdots\nu_s}{}^{\alpha_1\alpha_2\cdots\alpha_r}\,\frac{\partial u^{\nu_1}}{\partial \bar u^{\mu_1}}\cdots\frac{\partial u^{\nu_s}}{\partial \bar u^{\mu_s}}\,\frac{\partial \bar u^{\beta_1}}{\partial u^{\alpha_1}}\cdots\frac{\partial \bar u^{\beta_r}}{\partial u^{\alpha_r}} \tag{169}$$
