
ADAPTIVE CONTROL

ACKNOWLEDGMENT
I would like to sincerely thank Mr. SHADAB and Mr. MUKUL, who gave me the golden opportunity to present this wonderful seminar on the topic ADAPTIVE CONTROL, which also helped me to do a great deal of research and to learn about many new things.
I would also like to sincerely thank my guide, Mr. D.D. SHARMA, for his invaluable guidance, constant assistance, support, and constructive suggestions for the betterment of this technical seminar.
I also wish to thank all my friends for helping me, directly or indirectly, in completing this work successfully.

VIPUL KUMAR SHARMA


12EE56


CONTENTS
ABSTRACT
INTRODUCTION
NEED FOR ADAPTIVE CONTROL
ADAPTIVE CONTROLLER
MODEL-REFERENCE ADAPTIVE CONTROL (MRAC)
SELF-TUNING CONTROLLERS (STC)
ADAPTIVE CONTROLLER: THE DESIGN
ERROR DYNAMICS
CONCEPT OF STABILITY
LOCAL AND GLOBAL STABILITY
PARAMETER ESTIMATION
CONCLUSION
REFERENCES


ABSTRACT
In this modern era, everyone (whether an individual or an industry) wants their daily tasks to be problem free with minimal effort, that is, efficient production of goods with less capital investment. In other words, things that could be manually controlled are increasingly becoming automatic. But with this facility comes a bag full of uncertain events. Take the case of an airline pilot: it would be a benefit if the pilot could take a break and leave the whole control to an on-line controller (for example, a computer), so that the chances of the aircraft crashing could be minimized. That sounds good, but the aerodynamic coefficients of the aircraft (which depend on Mach number and altitude) vary to such a great extent that an ordinary baseline controller would fail to suffice for all conditions. Here came the idea of Adaptive Control.


INTRODUCTION
Many dynamic systems to be controlled have constant or
slowly-varying uncertain parameters. For instance, robot
manipulators may carry large objects with unknown
inertial parameters. Power systems may be subjected to
large variations in loading conditions. Fire-fighting aircraft
may experience considerable mass changes as they load
and unload large quantities of water. Adaptive control is
an approach to the control of such systems. The basic
idea in adaptive control is to estimate the uncertain plant
parameters (or, equivalently, the corresponding controller
parameters) on-line based on the measured system
signals, and use the estimated parameters in the control
input computation. An adaptive control system can thus
be regarded as a control system with on-line parameter
estimation. Adaptive control systems, whether developed
for linear plants or for nonlinear plants, are inherently
nonlinear.
Research in adaptive control started in the early 1950s in connection with the design of autopilots for high-performance aircraft, which operate over a wide range of speeds and altitudes and thus experience large parameter variations. Fixed-gain controllers could not suffice for all of these conditions, so adaptive control was proposed as a way of automatically adjusting the controller parameters in the face of changing aircraft dynamics. But interest in the subject soon diminished due to a lack of theoretical insight and the crash of a test flight.
Later theoretical advances, together with the availability of cheap computation, have led to many practical applications, in areas such as robotic manipulation, aircraft and rocket control, chemical processes, power systems, ship steering, and bioengineering.

NEED FOR ADAPTIVE CONTROL


In some control tasks, such as those in robot
manipulation, the systems to be controlled have
parameter uncertainty at the beginning of the control
operation. Unless such parameter uncertainty is gradually
reduced on-line by an adaptation or estimation
mechanism, it may cause inaccuracy or instability for the
control systems. In many other tasks, such as those in power systems, the system may have well-known dynamics at the beginning, but experience unpredictable parameter variations as the control operation goes on. Without continuous "redesign" of the
controller, the initially appropriate controller design may
not be able to control the changing plant well. Generally,
the basic objective of adaptive control is to maintain
consistent performance of a system in the presence of
uncertainty or unknown variation in plant parameters.
Since such parameter uncertainty or variation occurs in
many practical problems, adaptive control is useful in
many industrial contexts. This includes:
Robot manipulation
Robots have to manipulate loads of various
sizes, weights, and mass distributions (fig.1). It is very
restrictive to assume that the inertial parameters of the
loads are well known before a robot picks them up and
moves them away. If controllers with constant gains are
used and the load parameters are not accurately known,
robot motion can be either inaccurate or unstable.
Adaptive control, on the other hand, allows robots to
move loads of unknown parameters with high speed and
high accuracy.

Figure 1 A robot carrying a load of uncertain mass properties

Ship steering
On long courses, ships are usually put under
automatic steering. However, the dynamic characteristics
of a ship strongly depend on many uncertain parameters,
such as water depth, ship loading, and wind and wave
conditions (Fig.2). Adaptive control can be used to
achieve good control performance under varying
operating conditions, as well as to avoid energy loss due
to excessive rudder motion.


Figure 2 A freight ship under various loadings and sea conditions

Aircraft control
The dynamic behavior of an aircraft depends
on its altitude, speed, and configuration. The ratio of variation of some parameters can lie between 10 and 50
in a given flight. As mentioned earlier, adaptive control
was originally developed to achieve consistent aircraft
performance over a large flight envelope.
Process control
Models for metallurgical and chemical processes are usually complex and also hard to obtain. The parameters characterizing the processes vary from batch to batch. Furthermore, the working conditions are usually time-varying (e.g., reactor characteristics vary during the reactor's life, the raw materials entering the process are never exactly the same, and atmospheric and climatic conditions also tend to change). In fact, process control is one of the most important and active application areas of adaptive control.
To gain insight into the behavior of adaptive control systems and also to avoid mathematical difficulties, we shall assume the unknown plant parameters are constant in analyzing the adaptive control designs. In practice, the adaptive
control systems are often used to handle time-varying
unknown parameters. In order for the analysis results to
be applicable to these practical cases, the time-varying
plant parameters must vary considerably slower than the
parameter adaptation. Fortunately, this is often satisfied
in practice. Note that fast parameter variations may also
indicate that the modeling is inadequate and that the
dynamics causing the parameter changes should be
additionally modeled.


ADAPTIVE CONTROLLER
An adaptive controller differs from an ordinary controller
in that the controller parameters are variable, and there
is a mechanism for adjusting these parameters online
based on signals in the system. There are two main
approaches for constructing adaptive controllers. One is
the so-called model-reference adaptive control method,
and the other is the so-called self-tuning method.

Robust control can also be used to deal with parameter uncertainty:
Thus, one may naturally wonder about the differences
and relations between the robust approach and the
adaptive approach. In principle, adaptive control is
superior to robust control in dealing with uncertainties in
constant or slowly-varying parameters. The basic reason
lies in the learning behavior of adaptive control systems:
an adaptive controller improves its performance as
adaptation goes on, while a robust controller simply
attempts to keep consistent performance. Another reason
is that an adaptive controller requires little or no a priori
information about the unknown parameters, while a
robust controller usually requires reasonable a priori
estimates of the parameter bounds.
Conversely, robust control has some desirable features which adaptive control does not have, such as its ability to deal with disturbances, quickly varying parameters, and unmodeled dynamics. This motivates robust adaptive controllers, in which uncertainty in constant or slowly-varying parameters is reduced by parameter adaptation and other sources of uncertainty are handled by robustification techniques.

MODEL-REFERENCE ADAPTIVE
CONTROL (MRAC)
Generally, a model-reference adaptive control system can be schematically represented as in Figure 3. It is composed of four
parts: a plant containing unknown parameters, a
reference model for compactly specifying the desired
output of the control system, a feedback control law
containing adjustable parameters, and an adaptation
mechanism for updating the adjustable parameters.
The plant is assumed to have a known structure, although
the parameters are unknown. For linear plants, this
means that the number of poles and the number of zeros
are assumed to be known, but that the locations of these
poles and zeros are not.

Figure 3 A Model Reference Adaptive Control System

For nonlinear plants, this implies that the structure of the


dynamic equations is known, but that some parameters
are not.
A reference model is used to specify the ideal
response of the adaptive control system to the external
command. Intuitively, it provides the ideal plant response
which the adaptation mechanism should seek in adjusting
the parameters. The choice of the reference model is part
of the adaptive control system design. This choice has to
satisfy two requirements. On the one hand, it should
reflect the performance specification in the control tasks,
such as rise time, settling time, overshoot or frequency
domain characteristics. On the other hand, this ideal
behavior should be achievable for the adaptive control
system, i.e., there are some inherent constraints on the
structure of the reference model (e.g., its order and
relative degree) given the assumed structure of the plant
model.
The controller is usually parameterized by a number of
adjustable parameters (implying that one may obtain a
family of controllers by assigning various values to the
adjustable parameters). The controller should have
perfect tracking capacity in order to allow the possibility
of tracking convergence. That is, when the plant
parameters are exactly known, the corresponding
controller parameters should make the plant output
identical to that of the reference model. When the plant
parameters are not known, the adaptation mechanism
will adjust the controller parameters so that perfect
tracking is asymptotically achieved. If the control law is
linear in terms of the adjustable parameters, it is said to
be linearly parameterized. Existing adaptive control
designs normally require linear parametrization of the
controller in order to obtain adaptation mechanisms with
guaranteed stability and tracking convergence.
The adaptation mechanism is used to adjust the parameters in the control law. In MRAC systems, the adaptation law searches for parameters such that the response of the plant under adaptive control becomes the same as that of the reference model, i.e., the objective of
the adaptation is to make the tracking error converge to
zero. Clearly, the main difference from conventional
control lies in the existence of this mechanism. The main
issue in adaptation design is to synthesize an adaptation
mechanism which will guarantee that the control system
remains stable and the tracking error converges to zero
as the parameters are varied. Much formalism in
nonlinear control can be used to this end, such as
Lyapunov theory, hyper-stability theory, and passivity
theory. Although the application of one formalism may be
more convenient than that of another, the results are
often equivalent.
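The MRAC scheme described above can be sketched for a first-order plant. All numerical values below (plant, reference model, adaptation gain, command signal) are illustrative assumptions, not taken from the text; the adaptation law is the standard Lyapunov-based rule for this scalar case, assuming the sign of the plant gain is known.

```python
import numpy as np

# Hypothetical first-order plant  y' = -a_p*y + b_p*u  with a_p, b_p unknown
a_p, b_p = 1.0, 2.0          # "true" values, hidden from the controller
a_m, b_m = 4.0, 4.0          # reference model  y_m' = -a_m*y_m + b_m*r

dt, T = 1e-3, 40.0
n = int(T / dt)
y = y_m = 0.0
th_r = th_y = 0.0            # adjustable controller gains: u = th_r*r + th_y*y
gamma = 2.0                  # adaptation gain (design choice)
errs = []

for k in range(n):
    t = k * dt
    r = 1.0 if int(t / 5) % 2 == 0 else -1.0   # square-wave command
    u = th_r * r + th_y * y
    e = y - y_m                                # tracking error
    # Lyapunov-based adaptation law (sign of b_p assumed known and positive)
    th_r += dt * (-gamma * e * r)
    th_y += dt * (-gamma * e * y)
    # Euler integration of plant and reference model
    y   += dt * (-a_p * y + b_p * u)
    y_m += dt * (-a_m * y_m + b_m * r)
    errs.append(abs(e))

print("mean |e| over first 5 s:", np.mean(errs[:int(5/dt)]))
print("mean |e| over last 5 s :", np.mean(errs[-int(5/dt):]))
```

The tracking error shrinks as adaptation proceeds, illustrating the "learning" behavior contrasted with robust control earlier.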


SELF-TUNING CONTROLLERS (STC)


In non-adaptive control design (e.g., pole placement), one
computes the parameters of the controllers from those of
the plant. If the plant parameters are not known, it is
intuitively reasonable to replace them by their estimated
values, as provided by a parameter estimator. A controller
thus obtained by coupling a controller with an online
(recursive) parameter estimator is called a self-tuning
controller. Figure 4 illustrates the schematic structure of
such an adaptive controller. Thus, a self-tuning controller
is a controller which performs simultaneous identification
of the unknown plant.
The operation of a self-tuning controller is as follows: at
each time instant, the estimator sends to the controller a
set of estimated plant parameters which is computed
based on the past plant input u and output y; the
computer finds the corresponding controller parameters,
and then computes a control input u based on the
controller parameters and measured signals; this control
input u causes a new plant output to be generated, and
the whole cycle of parameter and input updates is
repeated. Note that the controller parameters are
computed from the estimates of the plant parameters as
if they were the true plant parameters. This idea is often
called the certainty equivalence principle.

Figure 4 A Self Tuning Control System


Parameter estimation can be understood simply as the
process of finding a set of parameters that fits the
available input-output data from a plant. This is different
from parameter adaptation in MRAC systems, where the
parameters are adjusted so that the tracking errors
converge to zero. For linear plants, many techniques are
available to estimate the unknown parameters of the
plant. The most popular one is the least squares method
and its extensions. There are also many control
techniques for linear plants, such as pole-placement, PID,
LQR (linear quadratic control), minimum variance control,
or H∞ designs. By coupling different control and estimation schemes, one can obtain a variety of self-tuning regulators. The self-tuning method can also be
applied to some nonlinear systems without any
conceptual difference.
In the basic approach to self-tuning control, one estimates
the plant parameters and then computes the controller
parameters. Such a scheme is often called indirect
adaptive control, because of the need to translate the
estimated parameters into controller parameters. It is
possible to eliminate this part of the computation. To do
this, one notes that the control law parameters and plant
parameters are related to each other for a specific control
method. This implies that we may re-parameterize the
plant model using controller parameters (which are also
unknown, of course), and then use standard estimation
techniques on such a model. Since no translation is
needed in this scheme, it is called a direct adaptive
control scheme. In MRAC systems, one can similarly
consider direct and indirect ways of updating the
controller parameters.
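As a sketch of the indirect, certainty-equivalence route just described, the snippet below couples recursive least squares with pole placement on a hypothetical first-order discrete plant. The plant values and desired pole are illustrative assumptions.

```python
import numpy as np

# Hypothetical discrete plant  y[k+1] = a*y[k] + b*u[k]  (a, b unknown)
a_true, b_true = 0.9, 0.5
a_m = 0.5                      # desired closed-loop pole

# Recursive least squares (RLS) estimator state
theta = np.array([0.0, 0.1])   # estimates [a_hat, b_hat]
P = 1000.0 * np.eye(2)         # estimate covariance

y = 0.0
est_hist = []
for k in range(200):
    r = 1.0 if (k // 25) % 2 == 0 else -1.0     # square-wave command
    a_hat, b_hat = theta
    b_safe = b_hat if abs(b_hat) > 0.05 else 0.05   # avoid division by ~0
    # certainty equivalence: use estimates as if true, place pole at a_m
    u = ((a_m - a_hat) * y + (1 - a_m) * r) / b_safe
    y_next = a_true * y + b_true * u
    # RLS update with regressor phi = [y[k], u[k]]
    phi = np.array([y, u])
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (y_next - phi @ theta)
    P = P - np.outer(K, phi @ P)
    y = y_next
    est_hist.append(theta.copy())

print("final estimates [a_hat, b_hat]:", est_hist[-1])
```

Each cycle mirrors the operation described above: estimate, translate to controller parameters, apply the input, repeat.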


ADAPTIVE CONTROLLER: THE DESIGN
UNDERLYING DESIGN PROCESS:
To determine a controller satisfying some design criteria under a fixed environment and process.
To find a method of adjusting the controller when the characteristics of the process and its environment are unknown and changing.
DESIGN PROCESS:
1. Characterize the desired behavior of the closed
loop system.
2. Determine a suitable control law with adjustable
parameters.
3. Find a mechanism of adjusting the parameters.
4. Implement the control law.


ERROR DYNAMICS
Let x*(t) be the solution of x' = f(x), the nominal motion trajectory corresponding to the initial condition x*(0) = x0.
Perturb the initial condition: x(0) = x0 + δx0.
Study the stability of the motion error: e(t) = x(t) − x*(t).
The error dynamics (non-autonomous!):
e' = f(x*(t) + e(t)) − f(x*(t)) = g(e, t),   e(0) = δx0
Conclusion: instead of studying the stability of the nominal motion, study the stability of the error dynamics w.r.t. the origin.

Figure 5 Nominal and perturbed Motion
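This construction can be checked numerically: integrating the error dynamics e' = f(x_nom(t) + e) − f(x_nom(t)) alongside the nominal trajectory reproduces exactly the difference between perturbed and nominal motions. The particular dynamics f(x) = -x - x**3 is an illustrative assumption.

```python
import numpy as np

# Illustrative choice of dynamics (not from the text): x' = f(x)
def f(x):
    return -x - x**3

dt, n = 1e-3, 5000              # integrate for 5 seconds
x_nom, x_pert = 1.0, 1.2        # nominal x0 and perturbed x0 + dx0
e = x_pert - x_nom              # motion error, integrated via its own dynamics
e_direct = []
for _ in range(n):
    # error dynamics e' = f(x_nom(t) + e) - f(x_nom(t)): non-autonomous,
    # since the nominal trajectory x_nom(t) enters explicitly
    e += dt * (f(x_nom + e) - f(x_nom))
    x_nom += dt * f(x_nom)
    x_pert += dt * f(x_pert)
    e_direct.append(x_pert - x_nom)

print("|x_pert - x_nom| at t = 5:", abs(e_direct[-1]))
print("error-dynamics solution  :", abs(e))
```

The two computations agree, and the error converges toward the origin, which is exactly the stability question the section reduces the problem to.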


CONCEPT OF STABILITY
Nonlinear systems may have much more complex and
exotic behavior than linear systems; the mere notion of
stability is not enough to describe the essential features
of their motion.

Figure 6 stable and unstable trajectories

Lyapunov Stability:
The equilibrium state x = 0 is said to be stable if, for any R > 0, there exists r > 0 such that if ||x(0)|| < r, then ||x(t)|| < R for all t > 0. Otherwise, the equilibrium point is unstable. Essentially, stability (also called stability in the sense of Lyapunov, or Lyapunov stability) means that the system trajectory can be kept arbitrarily close to the origin by starting sufficiently close to it. More formally, the definition states that the origin is stable if, given that we do not want the state trajectory x(t) to get out of a ball of arbitrarily specified radius BR, a value r(R) can be found such that starting the state from within the ball Br at time 0 guarantees that the state will stay within the ball BR thereafter.

Asymptotic Stability:
In many engineering applications, Lyapunov stability is
not enough. For example, when a satellite's attitude is
disturbed from its nominal position, we not only want the
satellite to maintain its attitude in a range determined by
the magnitude of the disturbance, i.e., Lyapunov stability,
but also require that the attitude gradually go back to its
original value. This type of engineering requirement is
captured by the concept of asymptotic stability.
Definition: An equilibrium point 0 is asymptotically stable if it is stable, and if in addition there exists some r > 0 such that ||x(0)|| < r implies that x(t) → 0 as t → ∞.

Exponential Stability:
An equilibrium point 0 is exponentially stable if there exist two strictly positive numbers α and λ such that
||x(t)|| ≤ α ||x(0)|| e^(−λt) for all t > 0
in some ball BR around the origin.


LOCAL AND GLOBAL STABILITY


The above definitions are formulated to characterize the
local behavior of systems, i.e., how the state evolves
after starting near the equilibrium point. Local properties
tell little about how the system will behave when the
initial state is some distance away from the equilibrium.
If asymptotic (or exponential) stability holds for any initial
states, the equilibrium point is said to be asymptotically
(or exponentially) stable in the large. It is also called
globally asymptotically (or exponentially) stable.
Linear time-invariant systems are either asymptotically
stable, or marginally stable, or unstable, as can be
seen from the modal decomposition of linear system
solutions; linear asymptotic stability is always global and
exponential, and linear instability always implies
exponential blow-up. This explains why the refined
notions of stability introduced here were not previously
encountered in the study of linear systems. They are
explicitly needed only for nonlinear systems.

Linearization and Local Stability:


Lyapunov's linearization method is concerned with the
local stability of a nonlinear system. It is a formalization
of the intuition that a nonlinear system should behave
similarly to its linearized approximation for small range
motions. Because all physical systems are inherently
nonlinear, Lyapunov's linearization method serves as the
fundamental justification of using linear control
techniques in practice, i.e., shows that stable design by
linear control guarantees the stability of the original
physical system locally.

Lyapunov's linearization method:
If the linearized system is strictly stable (i.e., if all eigenvalues of A are strictly in the left-half complex plane), then the equilibrium point is asymptotically stable (for the actual nonlinear system).
If the linearized system is unstable (i.e., if at least one eigenvalue of A is strictly in the right-half complex plane), then the equilibrium point is unstable (for the nonlinear system).
If the linearized system is marginally stable (i.e., all eigenvalues of A are in the left-half complex plane, but at least one of them is on the jω axis), then one cannot conclude anything from the linear approximation (the equilibrium point may be stable, asymptotically stable, or unstable for the nonlinear system).
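As a minimal sketch of the linearization method, consider a damped pendulum (an assumed example, not from the text). The eigenvalues of the Jacobian A at each equilibrium decide local stability by the rules above.

```python
import numpy as np

# Hypothetical damped pendulum  th'' + b*th' + sin(th) = 0, state x = (th, th')
b = 0.5

def jacobian(th_eq):
    # Linearization x' ~= A x about the equilibrium (th_eq, 0);
    # d/dth of -sin(th) is -cos(th)
    return np.array([[0.0, 1.0],
                     [-np.cos(th_eq), -b]])

for th_eq, name in [(0.0, "hanging (th = 0)"), (np.pi, "inverted (th = pi)")]:
    eig = np.linalg.eigvals(jacobian(th_eq))
    verdict = "stable" if np.all(eig.real < 0) else "unstable"
    print(name, "eigenvalues:", np.round(eig, 3), "->", verdict)
```

The hanging equilibrium gives eigenvalues strictly in the left-half plane (asymptotically stable); the inverted one has an eigenvalue in the right-half plane (unstable), matching physical intuition.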

Lyapunov's Direct Method:


The basic philosophy of Lyapunov's direct method is the
mathematical extension of a fundamental physical
observation: if the total energy of a mechanical (or
electrical) system is continuously dissipated, then the
system, whether linear or nonlinear, must eventually
settle down to an equilibrium point. Thus, we may
conclude the stability of a system by examining the
variation of a single scalar function.
For example, let us take a nonlinear mass-damper-spring problem.
Figure 7 A nonlinear mass-damper spring

Consider a nonlinear mass-damper-spring system of the form
m x'' + b x'|x'| + k0 x + k1 x^3 = 0
The total mechanical energy of the system is the sum of its kinetic energy and its potential energy:
V(x) = (1/2) m x'^2 + (1/2) k0 x^2 + (1/4) k1 x^4
Comparing the definitions of stability and mechanical energy, one can easily see some relations between the mechanical energy and the stability concepts described earlier:
zero energy corresponds to the equilibrium point (x = 0, x' = 0)
asymptotic stability implies the convergence of mechanical energy to zero
instability is related to the growth of mechanical energy
These relations indicate that the value of a scalar quantity, the mechanical energy, indirectly reflects the magnitude of the state vector; and furthermore, that the stability properties of the system can be characterized by the variation of the mechanical energy of the system.
The rate of energy variation during the system's motion is
V'(x) = m x' x'' + (k0 x + k1 x^3) x' = −b |x'|^3
This equation implies that the energy of the system, starting from some initial value, is continuously dissipated by the damper until the mass settles down, i.e., until x' = 0.
Physically, it is easy to see that the mass must finally
settle down at the natural length of the spring, because it
is subjected to a non-zero spring force at any position
other than the natural length.
The direct method of Lyapunov is based on a generalization of the concepts in the above mass-spring-damper system to more complex systems. Faced with a
set of nonlinear differential equations, the basic
procedure of Lyapunov's direct method is to generate a
scalar "energy-like" function for the dynamic system, and
examine the time variation of that scalar function. In this
way, conclusions may be drawn on the stability of the set
of differential equations without using the difficult
stability definitions or requiring explicit knowledge of
solutions.
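A short simulation illustrates the energy argument: integrating an assumed nonlinear mass-damper-spring model (illustrative parameter values) shows the mechanical energy decreasing along the trajectory, without ever solving the differential equation analytically.

```python
import numpy as np

# Assumed nonlinear mass-damper-spring model (illustrative parameters):
#   m x'' + b x'|x'| + k0 x + k1 x**3 = 0
m, b, k0, k1 = 1.0, 2.0, 1.0, 0.5

def V(x, v):
    # total mechanical energy: kinetic term plus potential of the nonlinear spring
    return 0.5 * m * v**2 + 0.5 * k0 * x**2 + 0.25 * k1 * x**4

dt, n = 1e-3, 20000             # integrate for 20 seconds
x, v = 1.5, 0.0                 # released from rest away from equilibrium
energy = [V(x, v)]
for _ in range(n):
    a = (-b * v * abs(v) - k0 * x - k1 * x**3) / m
    v += dt * a                 # semi-implicit Euler step
    x += dt * v
    energy.append(V(x, v))

print("V at t = 0 :", round(energy[0], 4))
print("V at t = 20:", round(energy[-1], 6))
```

The energy decays monotonically (up to small integration error), consistent with V' = −b|x'|^3 ≤ 0: examining one scalar function reveals the stability of the whole motion.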

Positive Definite Functions and Lyapunov


Functions:
The energy function above has two properties. The first is a property of the function itself: it is strictly positive unless both state variables x and x' are zero. The second is a property associated with the system dynamics: the function is monotonically decreasing as x and x' vary along the motion.
In Lyapunov's direct method, the first property is
formalized by the notion of positive definite functions,
and the second is formalized by the so-called Lyapunov
functions.

Positive definite function:


A scalar continuous function V(x) is said to be locally positive definite if V(0) = 0 and, in a ball BR,
x ≠ 0  =>  V(x) > 0
If V(0) = 0 and the above property holds over the whole state space, then V(x) is said to be globally positive definite.
For example, the function
V = (1/2) m l^2 θ'^2 + m g l (1 − cos θ)
which is the mechanical energy of a pendulum, is locally positive definite. The mechanical energy of the nonlinear mass-damper-spring system is globally positive definite. Note that, for that system, the kinetic energy (1/2) m x'^2 is not positive definite by itself, because it can equal zero for non-zero values of x.
Figure 8 typical shape of a positive definite function

If, in a ball BR, the function V(x) is positive definite and has continuous partial derivatives, and if its time derivative along any state trajectory of the system is negative semi-definite, i.e.,
V'(x) ≤ 0
then V(x) is said to be a Lyapunov function for the system.


PARAMETER ESTIMATION
1) Indirect
estimate plant parameters
compute controller parameters
relies on convergence of the estimated parameters to their true unknown values
2) Direct
no plant parameter estimation
estimate controller parameters (gains) only
3) Both MRAC and STC can be designed using either the direct or the indirect approach
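The indirect route can be sketched in two steps: fit the plant parameters by least squares from input-output data, then translate them into a controller gain. The plant, noise level, and desired pole below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: fit plant parameters from recorded input-output data.
# Hypothetical plant  y[k+1] = a*y[k] + b*u[k] + noise, with a = 0.8, b = 1.5
a_true, b_true = 0.8, 1.5
N = 100
u = rng.standard_normal(N)
y = np.zeros(N + 1)
for k in range(N):
    y[k + 1] = a_true * y[k] + b_true * u[k] + 0.01 * rng.standard_normal()

# Least squares: stack regressors [y[k], u[k]] and solve for [a, b]
Phi = np.column_stack([y[:-1], u])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
print("a_hat =", round(a_hat, 3), " b_hat =", round(b_hat, 3))

# Step 2: translate estimates into a controller, e.g. feedback
# u[k] = -K*y[k] placing the closed-loop pole at an assumed 0.3
K = (a_hat - 0.3) / b_hat
print("controller gain K =", round(K, 3))
```

A direct scheme would skip Step 2 by re-parameterizing the plant model in terms of K itself and estimating K directly.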

CONCLUSION

So it is clear that the techniques that make it possible to transform the idea of adaptive control into a reality are present. However, every technique comes with strict conditions that must be carefully dealt with, and that makes adaptive control extremely difficult to implement in practice. It is also possible that the control objective pursued with these techniques may turn out to be overly ambitious to implement; at such moments, mathematics and engineering get torn apart from each other. But even then it is often possible to provide the required conditions and modifications, thanks to the rich variety of available techniques.


REFERENCES
IEEE_Workshop_Slides_Lavretsky
J.-J. E. Slotine and W. Li, Applied Nonlinear Control, Prentice-Hall, New Jersey, 1991
S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd edition, Prentice-Hall, New Jersey, 1999
H. K. Khalil, Nonlinear Systems, 2nd edition, Prentice-Hall, New Jersey, 2002
