ACKNOWLEDGMENT
I would like to sincerely thank Mr. SHADAB and Mr. MUKUL, who
gave me the golden opportunity to work on this wonderful seminar on the
topic of ADAPTIVE CONTROL, which also helped me do a great deal of
research and learn about many new things.
I would also like to sincerely thank my guide, Mr. D.D. SHARMA, for his
invaluable guidance, constant assistance, support, and constructive suggestions for
the betterment of this technical seminar.
Finally, I wish to thank all my friends for helping me, directly or indirectly, in
completing this work successfully.
ADAPTIVE CONTROL
CONTENTS
ABSTRACT
INTRODUCTION
NEED FOR ADAPTIVE CONTROL
ADAPTIVE CONTROLLER
MODEL-REFERENCE ADAPTIVE CONTROL (MRAC)
SELF-TUNING CONTROLLERS (STC)
ADAPTIVE CONTROLLER: THE DESIGN
ERROR DYNAMICS
CONCEPT OF STABILITY
LOCAL AND GLOBAL STABILITY
PARAMETER ESTIMATION
CONCLUSION
REFERENCES
ABSTRACT
In this modern era, everyone, whether an individual or an
industry, wants their daily tasks to be problem-free while
requiring minimal effort, for example, efficient production
of goods with less capital investment. In other words,
things that used to be controlled manually are increasingly
becoming automatic. But with this convenience comes a
bag full of uncertain events. Consider the case of an
airline pilot: it would be a benefit if the pilot could take a
break and leave control to an on-line controller (for
example, a computer), so that the chances of a crash
could be minimized. That sounds appealing, but the
aerodynamic coefficients of the aircraft vary so greatly
with flight condition (Mach number and altitude) that an
ordinary fixed baseline controller would fail to suffice for
all conditions. This is where the idea of Adaptive Control
comes in.
INTRODUCTION
Many dynamic systems to be controlled have constant or
slowly-varying uncertain parameters. For instance, robot
manipulators may carry large objects with unknown
inertial parameters. Power systems may be subjected to
large variations in loading conditions. Fire-fighting aircraft
may experience considerable mass changes as they load
and unload large quantities of water. Adaptive control is
an approach to the control of such systems. The basic
idea in adaptive control is to estimate the uncertain plant
parameters (or, equivalently, the corresponding controller
parameters) on-line based on the measured system
signals, and use the estimated parameters in the control
input computation. An adaptive control system can thus
be regarded as a control system with on-line parameter
estimation. Adaptive control systems, whether developed
for linear plants or for nonlinear plants, are inherently
nonlinear.
Research in adaptive control started in the early 1950s in
connection with the design of autopilots for high-performance
aircraft. Such aircraft operate over a wide range of speeds
and altitudes and thus experience large parameter
variations, so fixed-gain controllers could not suffice for
all of the conditions. Adaptive control was proposed as
a way of automatically adjusting the controller
parameters in the face of changing aircraft dynamics. But
interest in the subject soon diminished due to a lack of
theoretical insight and the crash of a test flight.
Later theoretical advances, together with the availability
of cheap computation, have led to many practical
applications, in areas such as robotic manipulation,
aircraft and rocket control, chemical processes, power
systems, ship steering, and bioengineering.
NEED FOR ADAPTIVE CONTROL

Robot manipulation
When conventional controllers with fixed parameters are
used and the load parameters are not accurately known,
robot motion can be either inaccurate or unstable.
Adaptive control, on the other hand, allows robots to
move loads of unknown parameters with high speed and
high accuracy.
Ship steering
On long courses, ships are usually put under
automatic steering. However, the dynamic characteristics
of a ship strongly depend on many uncertain parameters,
such as water depth, ship loading, and wind and wave
conditions (Fig.2). Adaptive control can be used to
achieve good control performance under varying
operating conditions, as well as to avoid energy loss due
to excessive rudder motion.
Aircraft control
The dynamic behavior of an aircraft depends
on its altitude, speed, and configuration. Some of its
parameters may vary by a ratio of 10 to 50 over the
course of a given flight. As mentioned earlier, adaptive control
was originally developed to achieve consistent aircraft
performance over a large flight envelope.
Process control
In process control, adaptive
control systems are often used to handle time-varying
unknown parameters. In order for the analysis results to
be applicable to these practical cases, the time-varying
plant parameters must vary considerably slower than the
parameter adaptation. Fortunately, this is often satisfied
in practice. Note that fast parameter variations may also
indicate that the modeling is inadequate and that the
dynamics causing the parameter changes should be
additionally modeled.
ADAPTIVE CONTROLLER
An adaptive controller differs from an ordinary controller
in that the controller parameters are variable, and there
is a mechanism for adjusting these parameters online
based on signals in the system. There are two main
approaches for constructing adaptive controllers. One is
the so-called model-reference adaptive control method,
and the other is the so-called self-tuning method.
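The idea of a controller whose parameters are adjusted online from measured signals can be sketched for a scalar plant. The sketch below is a minimal illustration, not a design from this seminar: the plant x' = a·x + u has an unknown constant a, the control law u = -k_hat·x has one adjustable gain, and a gradient-type law k_hat' = gamma·x² raises the gain until it dominates the unknown parameter. All numerical values are illustrative assumptions.

```python
# Minimal adaptive regulation sketch (illustrative values, not from the report):
# plant x' = a*x + u with unknown a; control u = -k_hat*x;
# adaptation law k_hat' = gamma * x**2 raises the gain until it stabilizes x.

a = 2.0        # true plant parameter (unknown to the controller), unstable
gamma = 2.0    # adaptation gain (design choice)
dt, T = 1e-3, 10.0

x, k_hat = 1.0, 0.0
for _ in range(int(T / dt)):
    u = -k_hat * x                  # control law with adjustable parameter
    x += dt * (a * x + u)           # plant (Euler integration)
    k_hat += dt * gamma * x * x     # online parameter adjustment

print(x, k_hat)  # x has been driven near 0; k_hat has grown past a
```

Note that the controller never learns a itself; it only adjusts its own gain until the closed loop behaves well, which is the essential difference from an ordinary fixed controller.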
MODEL-REFERENCE ADAPTIVE
CONTROL (MRAC)
Generally, a model-reference adaptive control system can
be schematically represented as being composed of four
parts: a plant containing unknown parameters, a
reference model for compactly specifying the desired
output of the control system, a feedback control law
containing adjustable parameters, and an adaptation
mechanism for updating the adjustable parameters.
The plant is assumed to have a known structure, although
the parameters are unknown. For linear plants, this
means that the number of poles and the number of zeros
are assumed to be known, but that the locations of these
poles and zeros are not.
The choice of the reference model is part
of the adaptive control system design. This choice has to
satisfy two requirements. On the one hand, it should
reflect the performance specification in the control tasks,
such as rise time, settling time, overshoot or frequency
domain characteristics. On the other hand, this ideal
behavior should be achievable for the adaptive control
system, i.e., there are some inherent constraints on the
structure of the reference model (e.g., its order and
relative degree) given the assumed structure of the plant
model.
The controller is usually parameterized by a number of
adjustable parameters (implying that one may obtain a
family of controllers by assigning various values to the
adjustable parameters). The controller should have
perfect tracking capacity in order to allow the possibility
of tracking convergence. That is, when the plant
parameters are exactly known, the corresponding
controller parameters should make the plant output
identical to that of the reference model. When the plant
parameters are not known, the adaptation mechanism
will adjust the controller parameters so that perfect
tracking is asymptotically achieved. If the control law is
linear in terms of the adjustable parameters, it is said to
be linearly parameterized. Existing adaptive control
designs normally require linear parametrization of the
controller in order to obtain adaptation mechanisms with
guaranteed stability and tracking convergence.
The adaptation mechanism is used to adjust the
parameters in the control law. In MRAC systems, the
adaptation law searches for parameters such that the
response of the plant under adaptive control becomes the
same as that of the reference model, i.e., the objective of
the adaptation is to make the tracking error converge to
zero. Clearly, the main difference from conventional
control lies in the existence of this mechanism. The main
issue in adaptation design is to synthesize an adaptation
mechanism which will guarantee that the control system
remains stable and the tracking error converges to zero
as the parameters are varied. Much formalism in
nonlinear control can be used to this end, such as
Lyapunov theory, hyperstability theory, and passivity
theory. Although the application of one formalism may be
more convenient than that of another, the results are
often equivalent.
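A classic textbook illustration of such an adaptation mechanism is the adjustment of a single feedforward gain by the gradient (MIT-rule) method: plant and reference model share known first-order dynamics but differ in an unknown gain, and the adaptation law drives the model-following error toward zero. The specific numbers below are illustrative assumptions, not values from this seminar.

```python
# MRAC sketch: adapt a feedforward gain theta so the plant output tracks
# a reference model (classic MIT-rule example; values are illustrative).
# Plant:  yp' = -yp + kp*theta*r   (kp unknown to the designer)
# Model:  ym' = -ym + km*r         (desired closed-loop behavior)
# MIT rule: theta' = -gamma * e * ym,  with tracking error e = yp - ym

kp, km = 2.0, 1.0      # unknown plant gain, reference-model gain
gamma = 1.0            # adaptation gain
dt, T = 1e-3, 20.0
r = 1.0                # constant reference command

yp = ym = theta = 0.0
for _ in range(int(T / dt)):
    e = yp - ym
    yp += dt * (-yp + kp * theta * r)
    ym += dt * (-ym + km * r)
    theta += dt * (-gamma * e * ym)   # gradient descent on e**2

print(e, theta)  # e ~ 0, theta ~ km/kp = 0.5
```

The adapted gain settles near km/kp, i.e., the value that makes the plant output identical to the model output, which is exactly the perfect-tracking condition discussed above.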
SELF-TUNING CONTROLLERS (STC)
Parameter estimation can be understood simply as the
process of finding a set of parameters that fits the
available input-output data from a plant. This is different
from parameter adaptation in MRAC systems, where the
parameters are adjusted so that the tracking errors
converge to zero. For linear plants, many techniques are
available to estimate the unknown parameters of the
plant. The most popular one is the least squares method
and its extensions. There are also many control
techniques for linear plants, such as pole placement, PID,
LQR (the linear quadratic regulator), minimum-variance
control, or H-infinity designs. By coupling different control
and estimation schemes, one can obtain a variety of
self-tuning regulators. The self-tuning method can also be
applied to some nonlinear systems without any
conceptual difference.
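As an illustration of the estimation half of a self-tuning scheme, recursive least squares can identify the parameters of a simple discrete-time plant y[k] = a·y[k-1] + b·u[k-1] from input-output data. The plant values, initial covariance, and input signal below are assumptions made for the demonstration.

```python
import random

# Recursive least squares (RLS) identifying y[k] = a*y[k-1] + b*u[k-1].
# True parameters are unknown to the estimator; values are illustrative.
a_true, b_true = 0.8, 0.5

random.seed(0)
theta = [0.0, 0.0]                    # estimates [a_hat, b_hat]
P = [[1000.0, 0.0], [0.0, 1000.0]]    # covariance, large = low confidence

y_prev, u_prev = 0.0, 0.0
for _ in range(200):
    y = a_true * y_prev + b_true * u_prev      # plant output (noise-free demo)
    phi = [y_prev, u_prev]                     # regressor vector

    # Gain K = P*phi / (1 + phi'*P*phi)
    Pphi = [P[0][0]*phi[0] + P[0][1]*phi[1],
            P[1][0]*phi[0] + P[1][1]*phi[1]]
    denom = 1.0 + phi[0]*Pphi[0] + phi[1]*Pphi[1]
    K = [Pphi[0]/denom, Pphi[1]/denom]

    # Correct the estimates with the prediction error
    err = y - (phi[0]*theta[0] + phi[1]*theta[1])
    theta = [theta[0] + K[0]*err, theta[1] + K[1]*err]

    # Covariance update P = P - K*(phi'*P)   (P stays symmetric)
    for i in range(2):
        for j in range(2):
            P[i][j] -= K[i] * Pphi[j]

    y_prev, u_prev = y, random.uniform(-1.0, 1.0)   # persistently exciting input

print(theta)  # close to [0.8, 0.5]
```

In an indirect self-tuning regulator, estimates like these would then be fed into a control design (e.g., pole placement) at each step; in a direct scheme the same recursion would estimate the controller gains instead.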
In the basic approach to self-tuning control, one estimates
the plant parameters and then computes the controller
parameters. Such a scheme is often called indirect
adaptive control, because of the need to translate the
estimated parameters into controller parameters. It is
possible to eliminate this part of the computation. To do
this, one notes that the control law parameters and plant
parameters are related to each other for a specific control
method. This implies that we may re-parameterize the
plant model using controller parameters (which are also
unknown, of course), and then use standard estimation
techniques on such a model. Since no translation is
needed in this scheme, it is called a direct adaptive
control scheme. In MRAC systems, one can similarly
consider direct and indirect ways of updating the
controller parameters.
ERROR DYNAMICS
Let x*(t) be the solution of x' = f(x), the nominal motion
trajectory corresponding to the initial condition x*(0) = x0.
If the initial condition is perturbed to x(0) = x0 + dx0,
define the error e(t) = x(t) - x*(t). The error then obeys
the dynamics

e' = f(x*(t) + e(t)) - f(x*(t)) = g(e, t),   e(0) = dx0
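This construction can be checked numerically: integrating the perturbed system directly and integrating the error equation e' = g(e, t) along the nominal trajectory must give the same e(t). The system f(x) = -x³ and the numbers below are illustrative choices, not taken from the seminar.

```python
# Numerical check of the error dynamics e' = f(x* + e) - f(x*) = g(e, t).
# Illustrative system: f(x) = -x**3, nominal x*(0) = 1, perturbation dx0 = 0.1.

def f(x):
    return -x**3

dt, T = 1e-3, 5.0
x_star, x, e = 1.0, 1.1, 0.1   # nominal state, perturbed state, error state

for _ in range(int(T / dt)):
    e += dt * (f(x_star + e) - f(x_star))   # error equation driven by x*(t)
    x_star += dt * f(x_star)                # nominal trajectory
    x += dt * f(x)                          # perturbed trajectory

print(abs((x - x_star) - e))  # the two computations of the error agree
```

The agreement confirms that studying stability of the perturbed motion is equivalent to studying stability of the equilibrium e = 0 of the (generally time-varying) error system.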
CONCEPT OF STABILITY
Nonlinear systems may have much more complex and
exotic behavior than linear systems; the mere notion of
stability is not enough to describe the essential features
of their motion.
Lyapunov Stability:
The equilibrium state x = 0 is said to be stable if, for any
R > 0, there exists r > 0 such that if ||x(0)|| < r, then
||x(t)|| < R for all t > 0. Otherwise, the equilibrium point is
unstable. Essentially, stability (also called stability in the
sense of Lyapunov, or Lyapunov stability) means that the
system trajectory can be kept arbitrarily close to the
origin by starting sufficiently close to it. More formally,
the definition states that the origin is stable if, given that
we do not want the state trajectory x(t) to get out of a
ball BR of arbitrarily specified radius R, a value r(R) can be
found such that starting the state from within the ball Br
at time 0 guarantees that the state will stay within the
ball BR thereafter.
Asymptotic Stability:
In many engineering applications, Lyapunov stability is
not enough. For example, when a satellite's attitude is
disturbed from its nominal position, we not only want the
satellite to maintain its attitude in a range determined by
the magnitude of the disturbance, i.e., Lyapunov stability,
but also require that the attitude gradually go back to its
original value. This type of engineering requirement is
captured by the concept of asymptotic stability.
Definition: An equilibrium point 0 is asymptotically
stable if it is stable, and if in addition there exists some
r > 0 such that ||x(0)|| < r implies that x(t) -> 0 as t -> infinity.

Exponential Stability:
An equilibrium point 0 is exponentially stable if there
exist two strictly positive numbers a and L such that, for
all t > 0,
||x(t)|| <= a ||x(0)|| e^(-Lt)
in some ball BR around the origin.
LOCAL AND GLOBAL STABILITY

A control law designed from a linearized model
guarantees the stability of the original physical system
only locally, i.e., in some neighborhood of the equilibrium
point.
In a mass-spring-damper system displaced from rest, the
motion energy is gradually dissipated by the damper until
the mass settles down, i.e., until x = 0.
Physically, it is easy to see that the mass must finally
settle down at the natural length of the spring, because it
is subjected to a non-zero spring force at any position
other than the natural length.
The direct method of Lyapunov is based on a
generalization of the concepts in the above mass-spring-damper
system to more complex systems. Faced with a
set of nonlinear differential equations, the basic
procedure of Lyapunov's direct method is to generate a
scalar "energy-like" function for the dynamic system, and
examine the time variation of that scalar function. In this
way, conclusions may be drawn on the stability of the set
of differential equations without using the difficult
stability definitions or requiring explicit knowledge of
solutions.
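For the mass-spring-damper system above, the natural "energy-like" function is V = (1/2)mv² + (1/2)kx², whose derivative along trajectories is V' = -cv² <= 0. A short numerical check, with assumed illustrative values m = 1, k = 1, c = 0.5, shows V decaying without solving the equations analytically:

```python
# Energy function V = 0.5*m*v**2 + 0.5*k*x**2 as a Lyapunov candidate
# for m*x'' + c*x' + k*x = 0; parameter values are illustrative.
m, c, k = 1.0, 0.5, 1.0
dt, T = 1e-3, 40.0

x, v = 1.0, 0.0
V0 = 0.5 * m * v**2 + 0.5 * k * x**2   # initial energy
V_max = V0
for _ in range(int(T / dt)):
    a = (-c * v - k * x) / m     # acceleration from the dynamics
    x += dt * v
    v += dt * a
    V = 0.5 * m * v**2 + 0.5 * k * x**2
    V_max = max(V_max, V)        # track whether V ever grows

print(V, V0)  # energy has been dissipated: V << V0
```

Watching this single scalar shrink is exactly the kind of conclusion the direct method draws, with no explicit knowledge of the solution x(t).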
A scalar continuous function V(x) is said to be locally
positive definite if V(0) = 0 and, in a ball BR,
x != 0 => V(x) > 0.
If V(0) = 0 and the above property holds over the whole
state space, then V(x) is said to be globally positive
definite.
If, in a ball BR, a positive definite function V(x) has
continuous partial derivatives, and if its time
derivative along any state trajectory of the system is
negative semi-definite, i.e.,
V'(x) <= 0,
then V(x) is said to be a Lyapunov function for the system.
PARAMETER ESTIMATION

1) Indirect
- estimate the plant parameters
- compute the controller parameters from the estimates
- relies on convergence of the estimated parameters to their true unknown values

2) Direct
- no plant parameter estimation
- estimate the controller parameters (gains) only

3) Both MRAC and STC can be designed using either the direct or the indirect approach.
CONCLUSION
REFERENCES

IEEE_Workshop_Slides_Lavretsky
J.-J. E. Slotine and W. Li, Applied Nonlinear Control, Prentice-Hall, New Jersey, 1991
S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd edition, Prentice-Hall, New Jersey, 1999
H. K. Khalil, Nonlinear Systems, 2nd edition, Prentice-Hall, New Jersey, 2002