$$(\;)_{,m} \equiv \frac{\partial(\;)}{\partial x_m}; \qquad a_{i,j} \equiv \frac{\partial a_i}{\partial x_j}; \qquad C_{ij,kl} \equiv \frac{\partial^2 C_{ij}}{\partial x_k\,\partial x_l}; \qquad \text{etc.} \tag{A.1}$$
If $i$ remains a free index, differentiation of a tensor with respect to $x_i$ produces a tensor of order
one higher. For example,
$$A_{j,i} = \frac{\partial A_j}{\partial x_i} \tag{A.2}$$
If $i$ is a dummy index, differentiation of a tensor with respect to $x_i$ produces a tensor of order
one lower. For example,
$$V_{m,m} = \frac{\partial V_m}{\partial x_m} = \frac{\partial V_1}{\partial x_1} + \frac{\partial V_2}{\partial x_2} + \cdots + \frac{\partial V_N}{\partial x_N} \tag{A.3}$$
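As a quick numerical illustration of equation (A.3), the contraction $V_{m,m}$ is the trace of the Jacobian $\partial V_m/\partial x_i$. The following NumPy sketch (the field and evaluation point are illustrative, not from the text) computes the Jacobian by central differences and sums the repeated index:

```python
import numpy as np

def jacobian(V, x, h=1e-6):
    """Numerical Jacobian J[m, i] = dV_m/dx_i by central differences."""
    n = len(x)
    J = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = h
        J[:, i] = (V(x + dx) - V(x - dx)) / (2.0 * h)
    return J

# Hypothetical linear field V = (2 x_1, 3 x_2, 5 x_3); its divergence is 10.
V = lambda x: np.array([2.0 * x[0], 3.0 * x[1], 5.0 * x[2]])
J = jacobian(V, np.array([1.0, 2.0, 3.0]))
div_V = np.trace(J)  # V_{m,m}: the repeated index m is summed
```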
A.4 Coordinate Systems
The definition of geometric shapes of bodies is facilitated by the use of a coordinate system. With respect to a particular coordinate system, a vector may be defined by specifying the scalar components of that vector in that system.
A rectangular Cartesian coordinate (RCC) system is represented by three mutually perpendicular
axes in the manner shown in Figure A.1.
[Figure A.1: Rectangular Cartesian Coordinate System — axes $x_1$, $x_2$, $x_3$ with unit base vectors $\hat{e}_1$, $\hat{e}_2$, $\hat{e}_3$.]
Any vector in the RCC system may be expressed as a linear combination of three arbitrary, nonzero, non-coplanar vectors called the base vectors. Base vectors are, by hypothesis, linearly independent. A set of base vectors in a given coordinate system is said to constitute a basis for that system. The most frequent choice of base vectors for the RCC system is the set of unit vectors $\hat{e}_1$, $\hat{e}_2$, $\hat{e}_3$, directed parallel to the $x_1$, $x_2$ and $x_3$ coordinate axes, respectively.
Remark
1. The summation convention is very often employed in connection with the representation of vectors and tensors by indexed base vectors written in symbolic notation. In Euclidean space any vector is completely specified by its three components. The range on indices is thus 3 (i.e., N = 3). A point with coordinates $(q_1, q_2, q_3)$ is thus located by a position vector $\mathbf{x}$, where
$$\mathbf{x} = q_1\,\hat{e}_1 + q_2\,\hat{e}_2 + q_3\,\hat{e}_3 \tag{A.4}$$
In abbreviated form this is written as
$$\mathbf{x} = q_i\,\hat{e}_i \tag{A.5}$$
where $i$ is a summed index (i.e., the summation convention applies even though there is no repeated index on the same kernel).
The base vectors constitute a right-handed unit vector triad, or right orthogonal triad, that satisfy the following relations:
$$\hat{e}_i \cdot \hat{e}_j = \delta_{ij} \tag{A.6}$$
and
$$\hat{e}_i \times \hat{e}_j = \varepsilon_{ijk}\,\hat{e}_k \tag{A.7}$$
A set of base vectors satisfying the above conditions is often called an orthonormal basis.
In equation (A.6), $\delta_{ij}$ denotes the Kronecker delta (a second-order tensor typically denoted by $\mathbf{I}$), defined by
$$\delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases} \tag{A.8}$$
In equation (A.7), $\varepsilon_{ijk}$ is the permutation symbol or alternating tensor (a third-order tensor), defined in the following manner:
$$\varepsilon_{ijk} = \begin{cases} 1 & \text{if } i, j, k \text{ are an even permutation of } 1,2,3 \\ -1 & \text{if } i, j, k \text{ are an odd permutation of } 1,2,3 \\ 0 & \text{if } i, j, k \text{ are not a permutation of } 1,2,3 \end{cases} \tag{A.9}$$
The even permutations of 1,2,3 are 1,2,3; 2,3,1; and 3,1,2; the odd permutations are 3,2,1; 2,1,3; and 1,3,2. The indices fail to be a permutation if two or more of them have the same value.
Remarks
1. The Kronecker delta is sometimes called the substitution operator since, for example,
$$\delta_{ij}\,b_j = \delta_{i1}\,b_1 + \delta_{i2}\,b_2 + \delta_{i3}\,b_3 = b_i \tag{A.10}$$
$$\delta_{ij}\,C_{jk} = C_{ik} \tag{A.11}$$
and so on. From its definition we note that $\delta_{ii} = 3$.
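The substitution property is easy to check numerically with `numpy.einsum`, which implements exactly this index-summation notation. The arrays below are illustrative, not from the text:

```python
import numpy as np

delta = np.eye(3)                     # Kronecker delta as the 3x3 identity
b = np.array([4.0, 5.0, 6.0])
C = np.arange(9.0).reshape(3, 3)

# delta_ij b_j = b_i : the delta "substitutes" i for the summed index j
bi = np.einsum('ij,j->i', delta, b)
# delta_ij C_jk = C_ik
Cik = np.einsum('ij,jk->ik', delta, C)
trace_delta = np.einsum('ii', delta)  # delta_ii = 3
```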
2. In light of the above discussion, the scalar or dot product of two vectors $\mathbf{a}$ and $\mathbf{b}$ is written as
$$\mathbf{a} \cdot \mathbf{b} = (a_i\,\hat{e}_i) \cdot (b_j\,\hat{e}_j) = a_i\,b_j\,\delta_{ij} = a_i\,b_i \tag{A.12}$$
In the special case when $\mathbf{a} = \mathbf{b}$,
$$\mathbf{a} \cdot \mathbf{a} = a_k\,a_k = (a_1)^2 + (a_2)^2 + (a_3)^2 \tag{A.13}$$
The magnitude of a vector is thus computed as
$$|\mathbf{a}| = (\mathbf{a} \cdot \mathbf{a})^{1/2} = (a_k\,a_k)^{1/2} \tag{A.14}$$
3. The vector or cross product of two vectors $\mathbf{a}$ and $\mathbf{b}$ is written as
$$\mathbf{a} \times \mathbf{b} = (a_i\,\hat{e}_i) \times (b_j\,\hat{e}_j) = a_i\,b_j\,\varepsilon_{ijk}\,\hat{e}_k \tag{A.15}$$
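Equation (A.15) can be verified numerically by building $\varepsilon_{ijk}$ from its definition (A.9) and comparing against `numpy.cross`; the two vectors below are illustrative, not from the text:

```python
import numpy as np

# Permutation symbol epsilon_ijk, built from definition (A.9)
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0   # even permutations
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0  # odd permutations

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# (a x b)_k = a_i b_j epsilon_ijk, as in equation (A.15)
axb = np.einsum('i,j,ijk->k', a, b, eps)
```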
4. The determinant of a square matrix $\mathbf{A}$ is
$$\det \mathbf{A} = |\mathbf{A}| = \begin{vmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{vmatrix} \tag{A.16}$$
$$= A_{11}A_{22}A_{33} + A_{12}A_{23}A_{31} + A_{13}A_{21}A_{32} - A_{31}A_{22}A_{13} - A_{32}A_{23}A_{11} - A_{33}A_{21}A_{12} = \varepsilon_{ijk}\,A_{1i}\,A_{2j}\,A_{3k}$$
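The index form $\det \mathbf{A} = \varepsilon_{ijk}\,A_{1i}\,A_{2j}\,A_{3k}$ is likewise easy to check against `numpy.linalg.det`. The matrix below is illustrative, not from the text:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

A = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 1.0],
              [1.0, 2.0, 5.0]])

# det A = epsilon_ijk A_1i A_2j A_3k, the last form of (A.16)
detA = np.einsum('ijk,i,j,k->', eps, A[0], A[1], A[2])
```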
5. It can also be shown that
$$\varepsilon_{ijk}\,\det \mathbf{A} = \begin{vmatrix} A_{i1} & A_{i2} & A_{i3} \\ A_{j1} & A_{j2} & A_{j3} \\ A_{k1} & A_{k2} & A_{k3} \end{vmatrix} = \begin{vmatrix} A_{1i} & A_{1j} & A_{1k} \\ A_{2i} & A_{2j} & A_{2k} \\ A_{3i} & A_{3j} & A_{3k} \end{vmatrix}$$
$$\varepsilon_{ijk}\,\varepsilon_{rst}\,\det \mathbf{A} = \begin{vmatrix} A_{ir} & A_{is} & A_{it} \\ A_{jr} & A_{js} & A_{jt} \\ A_{kr} & A_{ks} & A_{kt} \end{vmatrix}$$
$$\varepsilon_{ijk}\,\varepsilon_{rst} = \begin{vmatrix} \delta_{ir} & \delta_{is} & \delta_{it} \\ \delta_{jr} & \delta_{js} & \delta_{jt} \\ \delta_{kr} & \delta_{ks} & \delta_{kt} \end{vmatrix}$$
$$\varepsilon_{ijk}\,\varepsilon_{ist} = \delta_{js}\,\delta_{kt} - \delta_{jt}\,\delta_{ks}$$
$$\varepsilon_{ijk}\,\varepsilon_{ijr} = 2\,\delta_{kr}$$
$$\varepsilon_{ijk}\,\varepsilon_{ijk} = 6$$
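The epsilon–delta identities above can be confirmed by brute force over all index values with `numpy.einsum`; a minimal sketch:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
d = np.eye(3)

# epsilon_ijk epsilon_ist = delta_js delta_kt - delta_jt delta_ks
lhs = np.einsum('ijk,ist->jkst', eps, eps)
rhs = np.einsum('js,kt->jkst', d, d) - np.einsum('jt,ks->jkst', d, d)

double = np.einsum('ijk,ijr->kr', eps, eps)  # should equal 2 delta_kr
full = np.einsum('ijk,ijk->', eps, eps)      # should equal 6
```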
A.5 Transformation Laws for Cartesian Tensors
Define a point P in space referred to two rectangular Cartesian coordinate systems. The base vectors for one coordinate system are unprimed, while for the second they are primed. The origins of both coordinate systems are assumed to coincide. The position vector to this point is given by
$$\mathbf{x} = x_i\,\hat{e}_i = x'_j\,\hat{e}'_j \tag{A.17}$$
To obtain a relation between the two coordinate systems, form the scalar product of the above
equation with either set of base vectors; viz.,
$$\hat{e}'_k \cdot (x_i\,\hat{e}_i) = \hat{e}'_k \cdot \left( x'_j\,\hat{e}'_j \right) \tag{A.18}$$
Upon expansion,
$$x_i\,(\hat{e}'_k \cdot \hat{e}_i) = x'_j\,\delta_{kj} = x'_k \tag{A.19}$$
Since $\hat{e}_i$ and $\hat{e}'_k$ are unit vectors, it follows from the definition of the scalar product that
$$\hat{e}'_k \cdot \hat{e}_i = (1)(1)\cos(\hat{e}'_k, \hat{e}_i) \equiv R_{ki} \tag{A.20}$$
The $R_{ki}$ are computed by taking (pairwise) the cosines of the angles between the $x'_k$ and $x_i$ axes. For a prescribed pair of coordinate axes, the elements of $R_{ki}$ are thus constants that can easily be computed.
From equation (A.19) it follows that the coordinate transformation for first-order tensors (vectors) is thus
$$x'_k = R_{ki}\,x_i \tag{A.21}$$
where the free index is the first one appearing on $\mathbf{R}$.
We next seek the inverse transformation. Beginning again with equation (A.17), we write
$$\hat{e}_k \cdot (x_i\,\hat{e}_i) = \hat{e}_k \cdot \left( x'_j\,\hat{e}'_j \right) \tag{A.22}$$
Thus,
$$x_i\,\delta_{ki} = x'_j\,\hat{e}_k \cdot \hat{e}'_j \tag{A.23}$$
or
$$x_k = R_{jk}\,x'_j \tag{A.24}$$
The free index is now the second one appearing on R.
Remark
1. In both of the above transformations, the second index on $\mathbf{R}$ is associated with the unprimed system.
In order to gain insight into the direction cosines $R_{ij}$, we differentiate equation (A.21) with respect to $x_i$, giving (with due change of dummy indices)
$$\frac{\partial x'_m}{\partial x_i} = R_{mj}\,\frac{\partial x_j}{\partial x_i} = R_{mj}\,\delta_{ji} = R_{mi} \tag{A.25}$$
We next differentiate equation (A.24) with respect to $x'_m$, giving
$$\frac{\partial x_k}{\partial x'_m} = R_{jk}\,\frac{\partial x'_j}{\partial x'_m} = R_{jk}\,\delta_{jm} = R_{mk} \tag{A.26}$$
Using the chain rule, it follows that
$$\frac{\partial x_k}{\partial x_i} = \delta_{ki} = \frac{\partial x_k}{\partial x'_m}\,\frac{\partial x'_m}{\partial x_i} = R_{mk}\,R_{mi} \tag{A.27}$$
In direct notation this is written as
$$\mathbf{I} = \mathbf{R}^T\mathbf{R} \tag{A.28}$$
implying that $\mathbf{R}$ is an orthogonal tensor (i.e., $\mathbf{R}^{-1} = \mathbf{R}^T$). Linear transformations such as those given by equations (A.21) and (A.24), whose direction cosines satisfy the above equation, are thus called orthogonal transformations.
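As a concrete sketch (the rotation angle is hypothetical, not from the text), consider primed axes obtained by rotating the unprimed axes 30° about $x_3$; the direction-cosine matrix then satisfies (A.21), (A.24) and (A.28):

```python
import numpy as np

# R_ki = cos(x'_k, x_i) for a 30-degree rotation about x_3 (illustrative).
th = np.pi / 6.0
R = np.array([[ np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [ 0.0,        0.0,        1.0]])

x = np.array([1.0, 2.0, 3.0])
xp = R @ x          # x'_k = R_ki x_i, equation (A.21)
xback = R.T @ xp    # x_k = R_jk x'_j, equation (A.24)
I_check = R.T @ R   # should be the identity, equation (A.28)
```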
The transformation rules for second-order Cartesian tensors are derived in the following manner. Let $\mathbf{S}$ be a second-order Cartesian tensor, and let
$$\mathbf{u} = \mathbf{S}\mathbf{v} \tag{A.29}$$
in the unprimed coordinates. Similarly, in primed coordinates let
$$\mathbf{u}' = \mathbf{S}'\mathbf{v}' \tag{A.30}$$
We next desire to relate $\mathbf{S}'$ to $\mathbf{S}$. Substituting $\mathbf{u}' = \mathbf{R}\mathbf{u}$ and $\mathbf{v}' = \mathbf{R}\mathbf{v}$ into equation (A.30) gives
$$\mathbf{R}\mathbf{u} = \mathbf{S}'\mathbf{R}\mathbf{v} \tag{A.31}$$
But from equation (A.29),
$$\mathbf{R}\mathbf{u} = \mathbf{R}\mathbf{S}\mathbf{v} \tag{A.32}$$
implying that
$$\mathbf{R}\mathbf{S}\mathbf{v} = \mathbf{S}'\mathbf{R}\mathbf{v} \tag{A.33}$$
Since $\mathbf{v}$ is an arbitrary vector, and since $\mathbf{R}$ is an orthogonal tensor, it follows that
$$\mathbf{S}' = \mathbf{R}\mathbf{S}\mathbf{R}^T \quad \text{or} \quad S'_{ij} = R_{ik}\,R_{jl}\,S_{kl} \tag{A.34}$$
In a similar manner,
$$\mathbf{S} = \mathbf{R}^T\mathbf{S}'\mathbf{R} \quad \text{or} \quad S_{ij} = R_{mi}\,R_{nj}\,S'_{mn} \tag{A.35}$$
The transformation rules for higher-order tensors are obtained in a similar manner. For example, for tensors of rank three,
$$A'_{ijk} = R_{il}\,R_{jm}\,R_{kn}\,A_{lmn} \tag{A.36}$$
and
$$A_{ijk} = R_{li}\,R_{mj}\,R_{nk}\,A'_{lmn} \tag{A.37}$$
Finally, the fourth-order Cartesian tensor $\mathbf{C}$ transforms according to the following relations:
$$C'_{ijkl} = R_{ip}\,R_{jq}\,R_{kr}\,R_{ls}\,C_{pqrs} \tag{A.38}$$
and
$$C_{ijkl} = R_{pi}\,R_{qj}\,R_{rk}\,R_{sl}\,C'_{pqrs} \tag{A.39}$$
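These transformation rules map directly onto `numpy.einsum` contractions. The sketch below uses a random orthogonal $\mathbf{R}$ built by QR factorization (an illustrative construction, not taken from the text) and checks that the forward and inverse rules round-trip:

```python
import numpy as np

rng = np.random.default_rng(0)
# Random orthogonal R via QR factorization (illustrative construction).
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
S = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3, 3, 3))

Sp = np.einsum('ik,jl,kl->ij', R, R, S)                       # (A.34)
S_back = np.einsum('mi,nj,mn->ij', R, R, Sp)                  # (A.35)
Cp = np.einsum('ip,jq,kr,ls,pqrs->ijkl', R, R, R, R, C)       # (A.38)
C_back = np.einsum('pi,qj,rk,sl,pqrs->ijkl', R, R, R, R, Cp)  # (A.39)
```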
A.6 Principal Values and Principal Directions
In the present discussion, only symmetric second-order tensors with real components are considered. For every symmetric tensor $\mathbf{A}$, defined at some point in space, there is associated with each direction (specified by the unit normal $\mathbf{n}$) at the point a vector given by the inner product
$$\mathbf{v} = \mathbf{A}\mathbf{n} \tag{A.40}$$
This is shown schematically in Figure A.2.
[Figure A.2: Normal Direction Associated with the Vector v — the unit normal $\mathbf{n}$ and the vector $\mathbf{v} = \mathbf{A}\mathbf{n}$ at a point.]
Remark
1. $\mathbf{A}$ may be viewed as a linear vector operator that produces the vector $\mathbf{v}$ conjugate to the direction $\mathbf{n}$.
If $\mathbf{v}$ is parallel to $\mathbf{n}$, the above inner product may be expressed as a scalar multiple of $\mathbf{n}$; viz.,
$$\mathbf{v} = \mathbf{A}\mathbf{n} = \lambda\,\mathbf{n} \quad \text{or} \quad A_{ij}\,n_j = \lambda\,n_i \tag{A.41}$$
The direction $n_i$ is called a principal direction, principal axis or eigenvector of $\mathbf{A}$. Substituting the relationship $n_i = \delta_{ij}\,n_j$ into equation (A.41) leads to the following system of homogeneous equations:
$$(\mathbf{A} - \lambda\mathbf{I})\,\mathbf{n} = \mathbf{0} \quad \text{or} \quad (A_{ij} - \lambda\,\delta_{ij})\,n_j = 0 \tag{A.42}$$
For a non-trivial solution, the determinant of the coefficients must be zero; viz.,
$$\det(\mathbf{A} - \lambda\mathbf{I}) = 0 \quad \text{or} \quad \left| A_{ij} - \lambda\,\delta_{ij} \right| = 0 \tag{A.43}$$
This is called the characteristic equation of $\mathbf{A}$. In light of the symmetry of $\mathbf{A}$, the expansion of equation (A.43) gives
$$\begin{vmatrix} (A_{11} - \lambda) & A_{12} & A_{13} \\ A_{12} & (A_{22} - \lambda) & A_{23} \\ A_{13} & A_{23} & (A_{33} - \lambda) \end{vmatrix} = 0 \tag{A.44}$$
The evaluation of this determinant leads to a cubic polynomial in $\lambda$, known as the characteristic polynomial of $\mathbf{A}$; viz.,
$$\lambda^3 - \mathcal{I}_1\,\lambda^2 + \mathcal{I}_2\,\lambda - \mathcal{I}_3 = 0 \tag{A.45}$$
where
$$\mathcal{I}_1 = \operatorname{tr}(\mathbf{A}) = A_{11} + A_{22} + A_{33} = A_{kk} \tag{A.46}$$
$$\mathcal{I}_2 = \tfrac{1}{2}\left( A_{ii}\,A_{jj} - A_{ij}\,A_{ij} \right) \tag{A.47}$$
$$\mathcal{I}_3 = \det(\mathbf{A}) \tag{A.48}$$
The scalar coefficients $\mathcal{I}_1$, $\mathcal{I}_2$ and $\mathcal{I}_3$ are called the first, second and third invariants, respectively, derived from the characteristic equation of $\mathbf{A}$.
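A quick numerical check: for a symmetric tensor, each eigenvalue must satisfy the characteristic polynomial (A.45) built from the invariants (A.46)–(A.48). The matrix below is a hypothetical example:

```python
import numpy as np

# Hypothetical symmetric tensor with real components
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

I1 = np.trace(A)                                 # (A.46)
I2 = 0.5 * (np.trace(A) ** 2 - np.trace(A @ A))  # (A.47), using symmetry
I3 = np.linalg.det(A)                            # (A.48)

lams = np.linalg.eigvalsh(A)
residuals = lams**3 - I1 * lams**2 + I2 * lams - I3  # should vanish, (A.45)
```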
The three roots $\lambda^{(i)}$, $i = 1, 2, 3$, of the characteristic polynomial are called the principal values or eigenvalues of $\mathbf{A}$. Associated with each eigenvalue is an eigenvector $\mathbf{n}^{(i)}$. For a symmetric tensor with real components, the principal values are real. If the three principal values are distinct, the three principal directions are mutually orthogonal. When referred to principal axes, $\mathbf{A}$ assumes a diagonal form; viz.,
$$\mathbf{A} = \begin{bmatrix} \lambda^{(1)} & 0 & 0 \\ 0 & \lambda^{(2)} & 0 \\ 0 & 0 & \lambda^{(3)} \end{bmatrix} \tag{A.49}$$
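The diagonal form (A.49) can be exhibited numerically with `numpy.linalg.eigh`, whose eigenvector columns are the unit principal directions. The symmetric tensor below is a hypothetical example:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])  # hypothetical symmetric tensor

lams, N = np.linalg.eigh(A)      # columns of N are unit eigenvectors n^(i)
A_principal = N.T @ A @ N        # A referred to its principal axes
ortho = N.T @ N                  # principal directions are orthonormal
```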
Remark
1. Eigenvalues and eigenvectors have a useful geometric interpretation in two- and three-dimensional space. If $\lambda$ is an eigenvalue of $\mathbf{A}$ corresponding to $\mathbf{v}$, then $\mathbf{A}\mathbf{v} = \lambda\mathbf{v}$, so that depending on the value of $\lambda$, multiplication by $\mathbf{A}$ dilates $\mathbf{v}$ (if $\lambda > 1$), contracts $\mathbf{v}$ (if $0 < \lambda < 1$), or reverses the direction of $\mathbf{v}$ (if $\lambda < 0$).
Example 1: Invariants of First-Order Tensors
Consider a vector $\mathbf{v}$. If the coordinate axes are rotated, the components of $\mathbf{v}$ will change. However, the length (magnitude) of $\mathbf{v}$ remains unchanged. As such, the length is said to be invariant. In fact, a vector (first-order tensor) has only one invariant, its length.
Example 2: Invariants of Second-Order Tensors
A second-order tensor possesses three invariants. Denoting the tensor by $\mathbf{A}$, its invariants are (these differ from the ones derived from the characteristic equation of $\mathbf{A}$)
$$I_1 = \operatorname{tr}(\mathbf{A}) = A_{11} + A_{22} + A_{33} = A_{kk} \tag{A.50}$$
$$I_2 = \tfrac{1}{2}\operatorname{tr}\left(\mathbf{A}^2\right) = \tfrac{1}{2}\,A_{ik}\,A_{ki} \tag{A.51}$$
$$I_3 = \tfrac{1}{3}\operatorname{tr}\left(\mathbf{A}^3\right) = \tfrac{1}{3}\,A_{ik}\,A_{kj}\,A_{ji} \tag{A.52}$$
Any function of the invariants is also an invariant. To verify that the first invariant is unchanged under coordinate transformation, recall that
$$A'_{ij} = R_{ik}\,R_{jl}\,A_{kl} \tag{A.53}$$
Thus,
$$A'_{mm} = R_{mk}\,R_{ml}\,A_{kl} = \delta_{kl}\,A_{kl} = A_{kk} \tag{A.54}$$
For the second invariant,
$$A'_{ik}\,A'_{ki} = (R_{il}\,R_{km}\,A_{lm})\,(R_{kn}\,R_{ip}\,A_{np}) = R_{il}\,R_{ip}\,A_{lm}\,R_{km}\,R_{kn}\,A_{np} = \delta_{lp}\,A_{lm}\,\delta_{mn}\,A_{np} = A_{pm}\,A_{mp} \tag{A.55}$$
Finally, for the third invariant,
A
ik
A
km
A
mi
= (R
il
R
kp
A
lp
) (R
kn
R
mq
A
nq
) (R
ms
R
it
A
st
)
= R
il
R
it
A
lp
R
kp
R
kn
A
nq
R
mq
R
ms
A
st
=
lt
A
lp
pn
A
nq
qs
A
st
= A
tp
A
pq
A
qt
(A.56)
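The invariance of all three traces under rotation can be confirmed numerically; the sketch below uses a random orthogonal matrix from QR factorization (an illustrative construction, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # illustrative orthogonal R
A = rng.standard_normal((3, 3))
Ap = R @ A @ R.T                                  # A'_ij = R_ik R_jl A_kl

# tr(A), tr(A^2) and tr(A^3) are unchanged by the rotation
traces = [np.trace(np.linalg.matrix_power(A, n)) for n in (1, 2, 3)]
traces_p = [np.trace(np.linalg.matrix_power(Ap, n)) for n in (1, 2, 3)]
```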
A.7 Tensor Calculus
Several important differential operators are summarized below.
Gradient Operator
The linear differential operator
$$\nabla = \frac{\partial}{\partial x_1}\,\hat{e}_1 + \frac{\partial}{\partial x_2}\,\hat{e}_2 + \frac{\partial}{\partial x_3}\,\hat{e}_3 = \hat{e}_i\,\frac{\partial}{\partial x_i} \tag{A.57}$$
is called the gradient or del operator.
Gradient of a Scalar Field
Let $\phi(x_1, x_2, x_3)$ be a scalar field. The gradient of $\phi$ is the vector with components
$$\nabla\phi = \operatorname{grad}\phi = \hat{e}_i\,\frac{\partial \phi}{\partial x_i} = \hat{e}_i\,\phi_{,i} \tag{A.58}$$
If $\mathbf{n} = n_i\,\hat{e}_i$ is a unit vector, the scalar operator
$$\mathbf{n} \cdot \nabla = n_i\,\frac{\partial}{\partial x_i} \tag{A.59}$$
is called the directional derivative operator in the direction $\mathbf{n}$.
Divergence of a Vector Field
Let $\mathbf{v}(x_1, x_2, x_3)$ be a vector field. The scalar quantity
$$\nabla \cdot \mathbf{v} = \operatorname{div}\mathbf{v} = \frac{\partial v_i}{\partial x_i} = v_{i,i} \tag{A.60}$$
is called the divergence of $\mathbf{v}$.
Curl of a Vector Field
Let $\mathbf{u}(x_1, x_2, x_3)$ be a vector field. The vector quantity
$$\nabla \times \mathbf{u} = \operatorname{curl}\mathbf{u} = \varepsilon_{ijk}\,\frac{\partial u_k}{\partial x_j}\,\hat{e}_i = \varepsilon_{ijk}\,u_{k,j}\,\hat{e}_i \tag{A.61}$$
is called the curl of $\mathbf{u}$.
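The index formula for the curl can be exercised on a field whose gradient is known in closed form. The field below is a hypothetical example (a rigid rotation about $x_3$, whose curl is $(0, 0, 2)$), with its constant Jacobian written out analytically:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# Hypothetical field u = (-x_2, x_1, 0); its gradient u_{k,j} is constant:
J = np.array([[0.0, -1.0, 0.0],   # J[k, j] = du_k / dx_j
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])

curl_u = np.einsum('ijk,kj->i', eps, J)  # (curl u)_i = epsilon_ijk u_{k,j}
```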
Remark
1. When using $u_{k,j}$ for $\partial u_k/\partial x_j$, the indices are reversed in order as compared to the definition of the vector (cross) product; that is,
$$\nabla \times \mathbf{v} = \varepsilon_{ijk}\,v_{k,j}\,\hat{e}_i \tag{A.62}$$
whereas $\mathbf{u} \times \mathbf{v} = \varepsilon_{kij}\,u_i\,v_j\,\hat{e}_k$.
The Laplacian Operator
The Laplacian operator is defined by
$$\nabla^2(\;) = \operatorname{div}\operatorname{grad}(\;) = \nabla \cdot \nabla(\;) = \frac{\partial^2(\;)}{\partial x_i\,\partial x_i} = (\;)_{,ii} \tag{A.63}$$
Let $\phi(x_1, x_2, x_3)$ be a scalar field. The Laplacian of $\phi$ is then
$$\nabla^2\phi = \left( \frac{\partial}{\partial x_i}\,\hat{e}_i \right) \cdot \left( \phi_{,j}\,\hat{e}_j \right) = \frac{\partial^2 \phi}{\partial x_j\,\partial x_i}\,(\hat{e}_i \cdot \hat{e}_j) = \phi_{,ji}\,\delta_{ij} = \phi_{,ii} \tag{A.64}$$
Let $\mathbf{v}(x_1, x_2, x_3)$ be a vector field. The Laplacian of $\mathbf{v}$ is the following vector quantity:
$$\nabla^2\mathbf{v} = \frac{\partial^2 v_k}{\partial x_i\,\partial x_i}\,\hat{e}_k = v_{k,ii}\,\hat{e}_k \tag{A.65}$$
Remark
1. An alternate statement of the Laplacian of a vector is
$$\nabla^2\mathbf{v} = \nabla(\nabla \cdot \mathbf{v}) - \nabla \times (\nabla \times \mathbf{v}) \tag{A.66}$$