
FEATURE ARTICLE

FINITE ELEMENT ANALYSIS IN SOLID MECHANICS: ISSUES AND TRENDS

BY NADER G. ZAMANI
The area of Finite Element Analysis (FEA) has become a standard tool in the numerical solution of the field equations governing physical problems. These field equations arise in diverse areas such as solid mechanics, fluid dynamics, heat transfer, electrostatics, and electromagnetism. Generally speaking, these are coupled, nonlinear, and time dependent partial differential equations which describe some form of conservation law. Depending on the nature of the field, the conservation of momentum, energy, and charge are normally taken into account.

The concepts behind these field equations have been known to the science community since the early 1800s. These concepts are attributed to prominent physicists and mathematicians such as Euler, Lagrange, Laplace, Bernoulli, and Fourier, to name a few. Some of the approximate numerical schemes which form the basis of the FEA approach are due to well known scientists such as Rayleigh, Ritz, and Galerkin. The breakthrough, however, came about with the development of high speed digital computers. At that point, the early numerical schemes were altered and modified, making them efficient, accurate, and feasible for implementation on computers.

SUMMARY

This expository paper discusses the area of Finite Element Analysis (FEA) as it pertains to the subject of solid mechanics. FEA as a computational tool has evolved rapidly in the past fifty years and continues to do so with technological advances in the computer industry. The paper briefly presents a historical background together with the current status of the field and the future trends.

The first comprehensive numerical solution which embraced FEA concepts in its modern form is attributed to Courant [1], who used piecewise linear polynomials on a triangular mesh to solve the Laplace equation. Although this work set the wheel in motion, it was not until the early 1960s that serious development in FEA started. It is not surprising that this activity coincided with the development of high speed computers in the private sector. One of the early projects along this path was the development of the NASTRAN program, which was created by NASA for structural analysis [2]. This code still exists, has gone through continuous updating and improvement, and is probably the most widely used FEA software for structures. Due to the NASA connection, the NASTRAN code is in the public domain and the source code can be acquired at no cost to the user.

APPROPRIATE ELEMENT SELECTION


To avoid total generality, the rest of this expository article focuses on the FEA formulation for solid mechanics applications. Other areas such as fluids, heat transfer, and electromagnetism follow the same track. In structural analysis, the primary variable of interest is the displacement vector. Once the displacements are determined, strains can be computed, and based on the material response, the stresses are evaluated. For the sake of illustration, assume linear behavior in both the kinematics and the constitutive law. Depending on the topological nature of the structure, the three most common elements are solid, shell, and beam elements, which are symbolically displayed in Fig. 1. The number of nodes is one of the factors determining the accuracy of the results [3].
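As a minimal sketch of this linear chain from displacements to strains to stresses (not tied to any particular FEA package), the following Python fragment evaluates a plane-stress elasticity matrix and recovers stresses from a representative strain vector. In an actual element routine the strain would come from the shape-function derivatives and the nodal displacements; the material constants here are assumed values.

```python
import numpy as np

def plane_stress_D(E, nu):
    """Plane-stress elasticity matrix for a linear, isotropic material."""
    c = E / (1.0 - nu**2)
    return c * np.array([[1.0, nu,  0.0],
                         [nu,  1.0, 0.0],
                         [0.0, 0.0, (1.0 - nu) / 2.0]])

# Representative strain vector [eps_xx, eps_yy, gamma_xy]; in an element this
# would be computed from the nodal displacements via the B matrix.
eps = np.array([1.0e-4, -0.3e-4, 0.5e-4])
D = plane_stress_D(E=200e9, nu=0.3)   # steel-like properties (assumed values)
sigma = D @ eps                       # stresses follow from the constitutive law
print(sigma)
```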

Fig. 1  (a) solid, (b) shell, (c) beam elements

If the topology is one dimensional (or a composite of one-dimensional parts), such as the frame of a building or a communications tower, beam elements have to be used. On the other hand, if the topology is two-dimensional, such as a pressure vessel or an aircraft wing, shell elements covering surfaces are the most appropriate. Finally, for a bulky object with no specific topological characteristics, solid elements are commonly used. In principle, every structure can be modeled with solid elements, but the demands on resources make it impractical even with today's computing power. All these elements have to be modified in one form or another to be able to handle special situations such as material incompressibility, material plasticity, and viscoelasticity.

N.G. Zamani <zamani@uwindsor.ca>, Department of Mechanical, Automotive and Materials Engineering, University of Windsor, Windsor, Ontario, Canada

LINEAR VS. NONLINEAR RESPONSE


In a finite element analysis, there are three sources of nonlinearity. These are labeled as geometric, material, and
contact [4]. For the special case of strictly linear problems, the details of the code may vary, but all FEA codes give essentially the same results, provided that the same elements are used and the same numerical integration algorithm is employed. The minor differences are due only to code implementation.
A geometric nonlinearity refers to the case of large displacements, large rotations, and large strains. These are considered to be mild nonlinearities which can easily be handled with a good iterative solver. All nonlinearities require an iterative approach for the numerical solution. Such algorithms are variations of the Newton-Raphson method or its secant implementation. For a schematic of this behavior, see Fig. 2(a).
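To make the iteration idea concrete, here is a minimal single-degree-of-freedom sketch of a Newton-Raphson equilibrium iteration. The cubic hardening spring used as the internal force is a hypothetical example, not taken from the article; in a full FEA code the scalar tangent stiffness becomes a tangent stiffness matrix and each correction requires a linear solve.

```python
def newton_raphson(f_ext, f_int, k_tangent, u0=0.0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration for the equilibrium R(u) = f_ext - f_int(u) = 0."""
    u = u0
    for _ in range(max_iter):
        residual = f_ext - f_int(u)
        if abs(residual) < tol:
            return u
        u += residual / k_tangent(u)   # linearized correction: K_t * du = R
    raise RuntimeError("no convergence")

# Hypothetical hardening spring: f_int = k1*u + k3*u**3, tangent = k1 + 3*k3*u**2
k1, k3 = 100.0, 50.0
u = newton_raphson(f_ext=25.0,
                   f_int=lambda u: k1 * u + k3 * u**3,
                   k_tangent=lambda u: k1 + 3 * k3 * u**2)
print(u)
```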
The material response is also known as the constitutive law. This represents the relationship between the stress and the strain (or force and deflection). Most materials display a linear response only in a very narrow range. To give the reader a better idea, consider the stretching of a rubber band. For small forces, the relationship between the applied force and the resulting stretch is linear. However, this linearity is very quickly lost and a rather complicated path is traversed. This is an example of a nonlinear elastic response. The situation in plasticity is considerably more complicated but falls into the category of nonlinear constitutive response. Material nonlinearity is also considered to be a mild nonlinearity. For a schematic of this behavior, see Fig. 2(b).
Presently, the majority of commercial FEA codes are capable of handling both geometric and material nonlinearities. The degree of their performance (in terms of efficiency and accuracy) varies from code to code. Furthermore, some codes have a vast database of material properties, which users may find preferable.

Fig. 2  (a) nonlinear geometry, (b) nonlinear material, (c) nonlinear contact

The most severe type of nonlinearity is generally due to contact conditions. Basically, any type of metal forming application, such as forging, stamping, and casting, requires contact calculations; see Fig. 2(c). The mathematical tools for handling contact algorithms involve Lagrange multipliers and/or constrained optimization. A poor formulation often leads to lack of convergence and other numerical difficulties.

STATIC VS. DYNAMIC RESPONSE


Technically speaking, all loads are dynamic (time dependent)
in a real world environment. The main issue is whether the
inertia effect (mass times acceleration) is significant compared
to other loads. In a nutshell, this has to do with the duration of
the applied load compared to the natural periods (inverse of the natural frequencies) of the structure [5]. For example, an impact load on many occasions leads to a substantial inertia effect. In terms of dynamic analysis, commercial FEA packages give the user several options for carrying out the calculations. This is schematically represented by Fig. 3. For linear problems, modal superposition is generally available. The user can select the number of modes and thereby control the accuracy of the results.
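A minimal sketch of the modal superposition idea is given below: the generalized eigenproblem is solved, the modal basis is truncated to a user-selected number of modes, and the response (static, for simplicity) is reconstructed from that basis. The 3-DOF matrices are assumed toy values, used only to show how the truncation controls accuracy.

```python
import numpy as np
from scipy.linalg import eigh

# Small assumed mass and stiffness matrices (3 DOF) for illustration only.
M = np.diag([2.0, 2.0, 1.0])
K = np.array([[ 400., -200.,    0.],
              [-200.,  400., -200.],
              [   0., -200.,  200.]])

# Generalized eigenproblem K*phi = omega^2 * M * phi; eigh mass-normalizes the modes.
omega2, phi = eigh(K, M)

F = np.array([0.0, 0.0, 10.0])           # applied load vector
n_modes = 2                              # user-selected truncation controls accuracy
x_modal = sum((phi[:, i] @ F) / omega2[i] * phi[:, i] for i in range(n_modes))
x_exact = np.linalg.solve(K, F)          # full solution for comparison
print(x_modal, x_exact)
```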
One can also use full time history analysis by numerically integrating the equations of motion in time. The term full time history analysis refers to the fact that the governing field equation is a time dependent differential equation. Therefore, the unknowns are also time dependent. This system of differential equations has to be solved numerically by approximate integration in time. Clearly, in nonlinear problems, this is the default approach. A variety of integration routines is available, but the most common ones are the central difference and Newmark methods. The former is conditionally stable whereas the latter is unconditionally stable.
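The following sketch, using assumed toy matrices, shows a Newmark step-by-step integration of [M]{ẍ} + [C]{ẋ} + [K]{x} = {F(t)} with the average-acceleration parameters (β = 1/4, γ = 1/2), the unconditionally stable choice mentioned above. Note that each step requires the solution of a linear system with an effective stiffness, which is the implicit character discussed in the next section.

```python
import numpy as np

def newmark(M, C, K, F, x0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Newmark time integration; beta=1/4, gamma=1/2 is the unconditionally
    stable average-acceleration scheme for linear problems."""
    x, v = x0.copy(), v0.copy()
    a = np.linalg.solve(M, F(0.0) - C @ v - K @ x)      # initial acceleration
    a0, a1, a2 = 1/(beta*dt**2), gamma/(beta*dt), 1/(beta*dt)
    a3, a4, a5 = 1/(2*beta) - 1, gamma/beta - 1, dt/2*(gamma/beta - 2)
    K_eff = K + a0*M + a1*C                             # effective stiffness
    history = [x.copy()]
    for n in range(1, n_steps + 1):
        rhs = F(n*dt) + M @ (a0*x + a2*v + a3*a) + C @ (a1*x + a4*v + a5*a)
        x_new = np.linalg.solve(K_eff, rhs)             # implicit step: a system is solved
        a_new = a0*(x_new - x) - a2*v - a3*a
        v = v + dt*((1 - gamma)*a + gamma*a_new)
        x, a = x_new, a_new
        history.append(x.copy())
    return np.array(history)

# Assumed 2-DOF example driven by a constant load on the second DOF.
M = np.eye(2)
C = 0.02 * np.eye(2)
K = np.array([[ 40., -20.],
              [-20.,  20.]])
resp = newmark(M, C, K, F=lambda t: np.array([0.0, 1.0]),
               x0=np.zeros(2), v0=np.zeros(2), dt=0.01, n_steps=500)
print(resp[-1])
```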

Fig. 3  (a) Static response for a slowly varying load, (b) Dynamic response for a fast varying load

IMPLICIT VS. EXPLICIT FORMULATION


There are two finite element methodologies in solid mechanics.
These are known as the Explicit and Implicit methodologies [6].
The term Explicit refers to the fact that when numerical integration in time is carried out, a predicted quantity can be written directly in terms of past values without actually solving a system of algebraic equations. In the Implicit approach, by contrast, in order to calculate a predicted value one is bound to solve a system of equations, and therefore more computation is involved per step. Both are designed to solve (integrate) the equations of motion in time. The equation of motion for the linear case can be stated as [M]{ẍ} + [C]{ẋ} + [K]{x} = {F(t)}. Here [M], [C], and [K] are the mass, damping, and stiffness matrices respectively, {x} is the displacement vector, and {F(t)} is the vector of applied loads.
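The distinction can be seen on a first-order analogue (a simple decay equation rather than the structural equation of motion above): the explicit update writes the new value directly from the old one, while the implicit update requires solving an equation, trivially so here but a full system of equations in an FEA context.

```python
import numpy as np

# First-order analogue dx/dt = -k*x, used only to contrast the two update styles;
# in structural dynamics the same distinction separates central differencing from Newmark.
k, dt, n = 5.0, 0.01, 100
x_exp = x_imp = 1.0
for _ in range(n):
    x_exp = x_exp + dt * (-k * x_exp)      # explicit: new value follows directly from the past value
    x_imp = x_imp / (1.0 + dt * k)         # implicit: obtained by solving (1 + dt*k) * x_new = x_old
print(x_exp, x_imp, np.exp(-k * dt * n))   # both approximate the exact decay exp(-k*t)
```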
The methodology used depends on the nature of the application
being considered. Generally speaking, short duration events
such as metal forming, crashworthiness, and detonation require
explicit codes. In such codes, the mass matrix is approximated
to become diagonal, and the central differencing method is
used for time integration. The stiffness matrix is not stored in
its entirety at every time step and no iterations are carried out at
each time step. However, the conditional stability of central
differencing requires an extremely small time step selection.
There are very few explicit FEA codes and they require considerable computing resources.


The majority of the existing commercial FEA codes however
are based on an implicit formulation. This is not surprising as
the bulk of the design problems in engineering and product development can satisfactorily be handled with the implicit
FEA formulation. There is another important difference
between the implicit and explicit codes. In nonlinear problems,
implicit codes require substantial iteration steps. If the conditions are not realistic, the solution usually diverges and the user
is informed. However, in explicit calculations, since no iteration is involved, the software always arrives at a solution. The
difficulty is that this may not be the solution to the problem
under consideration.
It is worth mentioning that problems which ordinarily can be
solved with an implicit code can also be solved with an explicit one. However, the extremely small time step will dictate an
unreasonable solution time. The remedy is referred to as the
mass scaling option that is available in explicit codes. In this
option, the density of the material is artificially changed to
result in an attainable run time. One should carefully check the
energy history to make sure that the results are not contaminated by non-physical effects.
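A back-of-the-envelope sketch of the stability limit and of mass scaling is given below. The one-dimensional wave-speed estimate and the material values are assumptions for illustration; explicit codes compute the stable time step element by element.

```python
import numpy as np

def critical_time_step(L_min, E, rho):
    """Courant-type stability estimate for explicit dynamics: dt <= L_min / c,
    with c = sqrt(E/rho) the one-dimensional dilatational wave speed."""
    c = np.sqrt(E / rho)
    return L_min / c

E, rho, L_min = 200e9, 7800.0, 1.0e-3          # steel-like values, 1 mm smallest element (assumed)
dt_crit = critical_time_step(L_min, E, rho)

# Mass scaling: to reach a target time step the density is artificially increased,
# since dt scales with sqrt(rho). The added mass must be checked against the energy history.
dt_target = 5.0 * dt_crit
rho_scaled = rho * (dt_target / dt_crit)**2
print(dt_crit, critical_time_step(L_min, E, rho_scaled))
```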

MESH ADEQUACY AND REFINEMENT


One of the most common questions when dealing with finite element analysis is how fine a mesh should be for the desired accuracy. The user should be reminded that one cannot provide an answer without performing a mesh convergence study. Basically, making a single run, regardless of how small the elements are, provides no information on the accuracy. The key is in making a sequence of runs with decreasing element size and comparing the differences in the results. Of course, the refinement should be performed in the critical regions and the comparison of the results should also be made at the critical locations. When the percentage change is to the user's satisfaction, the mesh is assumed to be satisfactory. It is well known that displacements converge faster than the stresses; however, the latter entities are more important. Therefore, the convergence should be based on the stress variable.
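The procedure can be summarized in a short driver loop such as the sketch below. Here run_fea is a hypothetical placeholder for whatever solver is being used, assumed to return the peak stress at the critical location; the element sizes and tolerance are illustrative.

```python
def run_fea(element_size):
    """Hypothetical wrapper around the actual FEA solver; assumed to return the
    peak (von Mises) stress at the critical location for a given element size."""
    raise NotImplementedError("call the actual FEA code here")

def converge_on_stress(sizes, rel_tol=0.02):
    """Run a sequence of meshes with decreasing element size and stop when the
    critical stress changes by less than rel_tol between consecutive runs."""
    previous = None
    for h in sizes:                          # e.g. [10.0, 5.0, 2.5, 1.25] (mm)
        stress = run_fea(h)
        if previous is not None and abs(stress - previous) / abs(previous) < rel_tol:
            return h, stress                 # change small enough: mesh judged adequate
        previous = stress
    raise RuntimeError("stress has not converged over the given sequence of meshes")
```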
The refinement strategy described above is known as h-refinement. There are two other strategies, known as the r and p methods [7]. In the p method, the mesh is fixed but the degree of the approximating polynomial is increased. Although the p strategy displays promising results in linear problems, it is not available in most commercial codes.
The final refinement strategy is the so-called r method.
There, the number of nodes (and elements) is fixed. However,
their locations are adaptively changed to reduce the error estimate. This method has also been implemented for the
Boundary Element Method for linear problems [8]. Currently,
most commercial FEA codes have an adaptive (automatic)
mesh refinement capability for solving linear problems.
Sophisticated error estimators are used to perform the refinement strategies [9].

SOURCES OF ERROR IN AN FEA CALCULATION
Understanding the sources of error in a finite element calculation is vital to obtaining good results [10]. In this section, these
sources are briefly described. The most obvious source is the
mathematical model that is expected to represent a physical
phenomenon. This source is beyond the control of the typical
user. Engineers and physicists are primarily responsible for arriving at an accurate model. The second source is the approximation of the physical domain with the finite element model. If the boundaries of the domain are curved surfaces, finite elements may only approximately represent this domain, as shown in Fig. 4(a). The use of higher order elements can reduce this error, and mesh refinement will reduce it further. The interpolation error is displayed in Fig. 4(b). The nature of the shape
functions dictates how well the finite element functional variation approximates the exact solution. Higher order elements
approximate the exact solution more accurately.
The error in numerical integration is also a critical factor. This is symbolically displayed in Fig. 4(c), where the area under a curve represented by an integral is approximated by the area of a trapezoid. There are, however, circumstances where some error is intentionally introduced in the integration process. This eliminates the possibility of unrealistically stiff structures [11].
The final source of error is mathematical round-off, which can dramatically affect the results. There are different reasons for this undesirable effect. Among the reasons are single precision calculations, extreme mesh transitions, and the presence of hard and soft regions [12].

Fig. 4  (a) physical domain approximated by the finite element domain, (b) exact solution approximated by the finite element solution, (c) area under curve approximated by area under line

OPTIMIZATION
The primary role of a commercial finite element package is to perform analysis. However, the ability to perform analysis naturally leads to the idea of optimization. In this situation, the objective function, constraints, and design variables are defined first. A sequence of analyses is then performed which systematically updates the design variables such that the objective function is optimized [13]. The optimization calculations can be based on gradient methods or more recent approaches such as genetic algorithms [14]. Most recent commercial codes have an optimization module. To give a concrete example of how optimization is used, consider the design of a loaded part to have minimum weight, where the von Mises stress is to remain below the yield strength of the material.
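A minimal sketch of such a weight minimization is shown below for a hypothetical tension rod, using a generic gradient-based optimizer. In a real CAE workflow the stress would come from an FEA run rather than a closed-form expression, and all numerical values here are assumed.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed data for a hypothetical tension rod: axial load P, length L, density rho,
# yield strength sigma_y. The design variable is the rod radius r.
P, L, rho, sigma_y = 50e3, 1.0, 7800.0, 250e6

weight = lambda r: rho * L * np.pi * r[0]**2                   # objective: mass of the rod
stress_margin = lambda r: sigma_y - P / (np.pi * r[0]**2)      # constraint: stress <= yield

result = minimize(weight, x0=[0.05], method="SLSQP",
                  bounds=[(1e-4, 0.5)],
                  constraints=[{"type": "ineq", "fun": stress_margin}])
print(result.x[0], np.sqrt(P / (np.pi * sigma_y)))             # optimizer vs analytical optimum
```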

VECTORIZATION FOR MULTIPROCESSING


Mainframe supercomputers appeared in the market about twenty-five years ago. This prompted the FEA software companies to revise and reexamine their codes to run efficiently on these machines. The effort mainly consisted of vectorizing their codes to utilize the multiprocessor nature of the supercomputers. Multiprocessing capability has recently been introduced in the personal computer (PC) market. Currently, the major FEA software packages have separate installations which allow them to use a number of processors. Naturally, the licensing cost for such versions is higher than for a single processor version.

PRE AND POSTPROCESSING CAPABILITIES


It was not very long ago that commercial FEA software relied solely on third party pre- and post-processors. The finite element software companies primarily put their efforts into the solver module. This caused a great deal of inconvenience for the user, who needed to invest additional time to train in separate software. This was particularly troublesome when FEA software was marketed to operate on personal computers. The solution was achieved by two approaches. Some FE software was modified to have a proprietary pre- and post-processor written from scratch to handle these needs, while others incorporated third party codes and integrated them with the solver module. This allowed the user to seamlessly perform the pre- and post-processing and run the finite element analysis within a single environment. Currently, all commercial FEA software packages have their own pre- and post-processors. They also have the flexibility of transferring data to and from third party software.
Mesh generation still remains a challenging issue in a preprocessor. This is particularly the case when the geometry under consideration is complicated. As an example, one can visualize the meshing of a full automobile engine block. Creating a free mesh using tetrahedral elements is now feasible regardless of the complexity of the geometry. However, if an all-hexahedral mesh is needed, the situation is not completely satisfactory. Mesh generation remains an active research area in applied mathematics.

FEA AND CAD SOFTWARE INTEGRATION


A large number of general purpose commercial FEA software packages have been developed and made available since the early 1970s. This is also the case with CAD packages, which are widely used in industry. Clearly, the spectrum of CAD software is rather wide, depending on their capabilities. These packages are traditionally used by so-called designers, who are not formally trained in physics or engineering. Their experience is gained by on-the-job training, and they usually act as the interface between the production (fabrication) and engineering divisions.
The global trend is the elimination of such positions and their replacement with qualified engineers or physicists. This has prompted the integration of FEA modules in standalone CAD packages. The number of CAD and FEA software packages has dwindled in the past decade, and now there are a handful of fully integrated CAD/FEA packages which are referred to as CAE software. In this context, CAE also embraces Computational Fluid Dynamics (CFD) modules. Therefore, the analysis capabilities are seamlessly integrated with CAD features. The cycle does not end at this stage; in most cases these packages are directly linked to the Computer Aided Manufacturing (CAM) area, which is the end of the product development cycle. It is expected that this trend will continue, with only a few fully integrated CAE software packages handling the entire design process.

CLOSING REMARKS
One of the important points raised in this expository article is that not all FEA software packages are the same. The user should clearly identify the needs of his or her analysis. The decision should also factor in the CAD requirements. The cost of the software is directly linked to the capabilities acquired. Another factor which should be seriously taken into account is the type and level of technical support available for the CAE software. One should not assume that the software's documentation is sufficient and well enough written for an average reader. This can be a major issue, as training courses can be extremely expensive, and in some cases not even available. Online searches and sharing information with other users can be of great value in deciding which software fits the user's needs.

REFERENCES
1. R. Courant, Bull. Am. Math. Soc., 49, 1 (1943).
2. R.H. MacNeal, NASTRAN Theoretical Manual, NASA-SP, 221, 3 (1976).
3. R.D. Cook, Finite Element Modelling for Stress Analysis, John Wiley and Sons (1995).
4. K.J. Bathe, Finite Element Procedures, Prentice Hall (1996).
5. T.J. Hughes, Linear Static and Dynamic Finite Element Analysis, Prentice Hall (1987).
6. T. Belytschko, W.K. Liu, and B. Moran, Nonlinear Finite Elements for Continua and Structures, John Wiley and Sons (2000).
7. J.N. Reddy, Introduction to the Finite Element Method, McGraw Hill (2006).
8. N.G. Zamani and W. Sun, IJNME, 44, 3 (1991).
9. B. Szabo and I. Babuska, Finite Element Analysis, John Wiley and Sons (1991).
10. P.G. Ciarlet, The Finite Element Method for Elliptic Problems, North Holland (1978).
11. O.C. Zienkiewicz and R.L. Taylor, The Finite Element Method, McGraw Hill (1988).
12. G. Strang and G. Fix, An Analysis of the Finite Element Method, Prentice Hall (1973).
13. S. Moaveni, Finite Element Analysis, Theory and Applications with ANSYS, Prentice Hall (2008).
14. G. Lindfield and J. Penny, Numerical Methods Using MATLAB, Prentice Hall (2000).

