ADVANCED CONTROL THEORY

Dr. S Ushakumari
Professor
Dept. of Electrical Engineering
CET
REFERENCES

 Katsuhiko Ogata, Modern Control Engineering, 4th Edition, Pearson
Education, New Delhi, 2002.
 Norman S. Nise, Control Systems Engineering, 5th Edition, Wiley
Eastern, 2007.
 Nagrath I. J. and Gopal M., Control Systems Engineering, Wiley
Eastern, New Delhi.
 Gopal M., Modern Control System Theory, Wiley Eastern Ltd., New Delhi.
 Hassan K. Khalil, Nonlinear Systems, Prentice-Hall International (UK), 2002.
CLASSICAL CONTROL THEORY
 Classical control theory is a branch of control theory that
deals with the behaviour of dynamical systems with
inputs, and how their behaviour is modified by feedback,
using the Laplace transform as a basic tool to model such
systems.
 Classical control theory is based on the input-output
relationship, or transfer function.
 Classical control theory uses graphical tools such as the root
locus and the Bode plot.
 These methods do not use any knowledge of the interior
structure of the plant.
 It is limited to single-input single-output (SISO) systems.
DISADVANTAGES OF CLASSICAL
CONTROL SYSTEMS

Limitations:
 Applicable only to single-input single-output (SISO) systems
 Restricted to systems that are linear and time invariant
 Initial states are not taken into account
MODERN CONTROL THEORY
 The modern trend in engineering systems is toward greater
complexity, due mainly to the requirements of complex tasks
and good accuracy.

 Complex systems may have multiple inputs and multiple
outputs and may be time varying.
MODERN CONTROL THEORY Contd..

 Modern control theory is a new approach to the analysis and
design of complex control systems that has been developed
since around 1960.

 This new approach is based on the concept of state.
STATE OF A SYSTEM

 The state of a dynamic system is the smallest set of variables
(called state variables) such that knowledge of these variables
at t = t0, together with knowledge of the input for t >= t0,
completely determines the behaviour of the system for any
time t >= t0.

 Note that the concept of state is by no means limited to
physical systems. It is applicable to biological systems,
economic systems, social systems, and others.
STATE VARIABLES
The state variables of a dynamic
system are the variables making up the
smallest set of variables that determine the
state of the dynamic system.
 If at least n variables x1, x2, . . . , xn are needed to
completely describe the behaviour of a dynamic
system (so that once the input is given for t >= t0
and the initial state at t = t0 is specified, the future
state of the system is completely determined), then
such n variables are called a set of state variables.

 Note that state variables need not be physically
measurable or observable quantities. Variables that
do not represent physical quantities and those that
are neither measurable nor observable can be chosen
as state variables. Such freedom in choosing state
variables is an advantage of the state-space methods.
STATE VECTOR:
If n state variables are needed to
completely describe the behaviour of a given
system, then these n state variables can be
considered the n components of a vector x.
• Such a vector is called a state vector.
• A state vector is thus a vector that determines
uniquely the system state x(t) for any time t >= t0,
once the state at t = t0 is given and the input u(t)
for t >= t0 is specified.
 The state-space representation for a given system is not unique, except
that the number of state variables is the same for any of the different
state-space representations of the same system.

 The dynamic system must involve elements that memorize the values
of the input for t > t1.
STATE SPACE
The n-dimensional space whose coordinate
axes consist of the x1 axis, x2 axis, . . . , xn axis, where
x1, x2, . . . , xn are state variables, is called a state
space. Any state can be represented by a point in the
state space.

STATE-SPACE EQUATIONS
In state-space analysis we are concerned with
three types of variables that are involved in the
modeling of dynamic systems: input variables, output
variables, and state variables.
 Since integrators in a continuous-time control system serve as
memory devices, the outputs of such integrators can be
considered as the variables that define the internal state of
the dynamic system. Thus the outputs of integrators serve as
state variables.

 The number of state variables required to completely define the
dynamics of the system is equal to the number of integrators
involved in the system.

 Assume that a multiple-input-multiple-output system involves
n integrators. Assume also that there are r inputs u1(t), u2(t), . . . , ur(t)
and m outputs y1(t), y2(t), . . . , ym(t). Define the n outputs of the
integrators as state variables x1(t), x2(t), . . . , xn(t). Then the system
may be described by

ẋ(t) = f(x, u, t)                    State equation

The outputs y1(t), y2(t), . . . , ym(t) of the system may be given by

y(t) = g(x, u, t)                    Output equation
If the above equations are linearized about the operating state, then we
have the following linearized state equation and output equation:

ẋ(t) = A(t) x(t) + B(t) u(t)
y(t) = C(t) x(t) + D(t) u(t)

where
A(t) : state matrix
B(t) : input matrix
C(t) : output matrix
D(t) : direct transmission matrix
BLOCK DIAGRAM REPRESENTATION

 If the vector functions f and g do not involve time t explicitly, then the
system is called a time-invariant system.
EXAMPLE: Consider the mechanical system shown
in the figure. We assume that the system is linear. The
external force u(t) is the input to the system, and the
displacement y(t) of the mass is the output. The
displacement y(t) is measured from the equilibrium
position in the absence of the external force. This
system is a single-input-single-output system.

From the diagram, the system equation is

m ÿ(t) + b ẏ(t) + k y(t) = u(t)

This system is of second order. This means that the system involves
two integrators. Let us define state variables x1(t) and x2(t) as

x1(t) = y(t),   x2(t) = ẏ(t)

Then ẋ1 = x2 and ẋ2 = -(k/m) x1 - (b/m) x2 + (1/m) u.

In a vector-matrix form,

ẋ = [   0       1  ] x + [  0  ] u          State equation
    [ -k/m   -b/m  ]     [ 1/m ]

The output equation can be written as

y = [ 1   0 ] x                              Output equation

i.e., in the standard form ẋ = Ax + Bu, y = Cx + Du, with

A = [   0       1  ],   B = [  0  ],   C = [ 1   0 ],   D = 0
    [ -k/m   -b/m  ]        [ 1/m ]
 The following figure is a block diagram for the system. Notice that the
outputs of the integrators are state variables.
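
The state-space model above can be simulated numerically. A minimal sketch,
assuming illustrative parameter values m = 1, b = 0.5 and k = 2 (these values
are not taken from the figure) and using scipy:

# Simulation sketch for the mass-spring-damper state-space model
# (parameter values assumed for illustration).
import numpy as np
from scipy import signal

m, b, k = 1.0, 0.5, 2.0
A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])   # state matrix
B = np.array([[0.0],
              [1.0 / m]])          # input matrix
C = np.array([[1.0, 0.0]])         # output is the displacement x1 = y
D = np.array([[0.0]])

plant = signal.StateSpace(A, B, C, D)
t = np.linspace(0.0, 20.0, 500)
t_out, y_out = signal.step(plant, T=t)   # response to a unit-step force
print(y_out[-1])                          # settles near 1/k = 0.5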
CORRELATION BETWEEN TRANSFER
FUNCTIONS AND STATE-SPACE EQUATIONS.
 This section shows how to derive the transfer function of a
single-input-single-output system from the state-space equations.

Let us consider the system whose transfer function is given by

Y(s)/U(s) = G(s)

This system may be represented in state space by the following equations:

ẋ = Ax + Bu
y = Cx + Du

where x is the state vector, u is the input, and y is the output.

The Laplace transforms of these equations are given by

sX(s) - x(0) = AX(s) + BU(s)
Y(s) = CX(s) + DU(s)
Since the transfer function was previously defined as the ratio of the
Laplace transform of the output to the Laplace transform of the input
when the initial conditions were zero, we set x(0) = 0. Then we have

(sI - A) X(s) = B U(s),   or   X(s) = (sI - A)^(-1) B U(s)

Now Y(s) becomes (substituting X(s) in Y(s))

Y(s) = [C (sI - A)^(-1) B + D] U(s)

so that

Y(s)/U(s) = G(s) = C (sI - A)^(-1) B + D

This is the transfer-function expression of the system in terms of A,
B, C, and D.

Since (sI - A)^(-1) = adj(sI - A) / |sI - A|, G(s) can also be written as

G(s) = Q(s) / |sI - A|

where Q(s) is a polynomial in s. Hence |sI - A| is the characteristic
polynomial of G(s); in other words, the poles of G(s) are the eigenvalues
of matrix A.
EXAMPLE
By substituting A, B, C, and D into G(s), we obtain the transfer function
of the system. (When the input u and the output y are vectors, the m x r
matrix G(s) = C (sI - A)^(-1) B + D is called the transfer matrix.)
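
A minimal numerical sketch of G(s) = C(sI - A)^(-1)B + D, with A, B, C and D
assumed for illustration (they are not the matrices of the example above):

# Transfer function from a state-space model via scipy.signal.ss2tf.
import numpy as np
from scipy import signal

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # assumed state matrix
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

num, den = signal.ss2tf(A, B, C, D)
print(num)   # numerator coefficients of G(s)
print(den)   # denominator coefficients = det(sI - A) = s^2 + 3s + 2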
STATE-SPACE REPRESENTATION OF DYNAMIC
SYSTEMS
 A dynamic system consisting of a finite number of lumped
elements may be described by ordinary differential equations in
which time is the independent variable.
 By use of vector-matrix notation, an nth-order differential equation
may be expressed by a first order vector-matrix differential
equation.
 If n elements of the vector are a set of state variables, then the
vector-matrix differential equation is a state equation.

State-Space Representation of nth-Order Systems of Linear
Differential Equations in which the Forcing Function Does Not
Involve Derivative Terms
Consider the following nth-order system:

y^(n) + a1 y^(n-1) + . . . + a(n-1) ẏ + an y = u          (a)

Defining the state variables as x1 = y, x2 = ẏ, . . . , xn = y^(n-1),
Equation (a) can be written as the first-order system

ẋ1 = x2
ẋ2 = x3
  .
  .
ẋ(n-1) = xn
ẋn = -an x1 - a(n-1) x2 - . . . - a1 xn + u,        y = x1

which, in vector-matrix form ẋ = Ax + Bu, y = Cx, gives the transfer
function of the system as

Y(s)/U(s) = 1 / (s^n + a1 s^(n-1) + . . . + a(n-1) s + an)
TRANSFORMATION FROM TRANSFER FUNCTION TO
STATE SPACE.

• Consider the transfer function system

There are many (infinitely many) possible state-space representations for
this system. One possible state-space representation is the controllable
canonical form, described in the next section.
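
A minimal sketch of obtaining one such realization numerically, assuming the
transfer function G(s) = (s + 3)/(s^2 + 3s + 2) for illustration:

# One state-space realization of an assumed transfer function.
from scipy import signal

num = [1.0, 3.0]          # numerator:   s + 3
den = [1.0, 3.0, 2.0]     # denominator: s^2 + 3s + 2

A, B, C, D = signal.tf2ss(num, den)
print(A)                  # companion-form state matrix (one of infinitely many)
print(B, C, D)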
STATE-SPACE REPRESENTATIONS OF
TRANSFER-FUNCTION SYSTEMS
 Many techniques are available for obtaining state-space
representations of transfer-function systems. A few such methods
have already been presented. This section presents state-space
representations in the controllable, observable, diagonal, or
Jordan canonical form.

State-Space Representation in Canonical Forms.

Consider a system defined by

y^(n) + a1 y^(n-1) + . . . + an y = b0 u^(n) + b1 u^(n-1) + . . . + bn u

where u is the input and y is the output. This equation can also be
written as

Y(s)/U(s) = (b0 s^n + b1 s^(n-1) + . . . + bn) / (s^n + a1 s^(n-1) + . . . + an)
Controllable Canonical Form. The following state-space
representation is called a controllable canonical form:

     [  0        1        0      . . .     0   ]       [ 0 ]
     [  0        0        1      . . .     0   ]       [ 0 ]
ẋ =  [  .        .        .                .   ]  x +  [ . ]  u
     [  0        0        0      . . .     1   ]       [ 0 ]
     [ -an   -a(n-1)  -a(n-2)    . . .    -a1  ]       [ 1 ]

y = [ bn - an b0    b(n-1) - a(n-1) b0    . . .    b1 - a1 b0 ] x + b0 u
 The controllable canonical form is important in
discussing the pole-placement approach to
control system design.
Observable Canonical Form. The following state-space representation
is called an observable canonical form

 The observable canonical form is the transpose (dual) of the
controllable canonical form: A_o = A_c^T, B_o = C_c^T, C_o = B_c^T,
with the same direct term b0.
Diagonal Canonical Form.

Consider the transfer function system

Y(s)/U(s) = (b0 s^n + b1 s^(n-1) + . . . + bn) / ((s + p1)(s + p2) . . . (s + pn))

Here we consider the case where the denominator polynomial involves
only distinct roots. For the distinct-roots case, the equation can be
written in the partial-fraction-expanded form

Y(s)/U(s) = b0 + c1/(s + p1) + c2/(s + p2) + . . . + cn/(s + pn)

The diagonal canonical form of the state-space representation of this
system is given by

ẋ = diag(-p1, -p2, . . . , -pn) x + [ 1  1  . . .  1 ]^T u

y = [ c1   c2   . . .   cn ] x + b0 u
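
The residues and poles needed for the diagonal form can be computed from a
partial-fraction expansion. A minimal sketch, assuming
G(s) = (s + 3)/(s^2 + 3s + 2) for illustration:

# Partial-fraction expansion: the poles give the diagonal entries of A and
# the residues give the entries c1, c2 of the output matrix.
from scipy import signal

num = [1.0, 3.0]
den = [1.0, 3.0, 2.0]     # distinct roots at s = -1 and s = -2

c, p, k = signal.residue(num, den)
print(p)   # poles -1 and -2 (the -p1, -p2 of the notation above)
print(c)   # residues c1, c2
print(k)   # direct term b0 (empty here, since G(s) is strictly proper)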
Jordan Canonical Form.

Consider the case where the denominator polynomial of the transfer
function involves multiple roots. For this case, the preceding diagonal
canonical form must be modified into the Jordan canonical form.
Suppose, for example, that the pi's are different from one another,
except that the first three pi's are equal, or p1 = p2 = p3. Then the
factored form of Y(s)/U(s) becomes

Y(s)/U(s) = (b0 s^n + b1 s^(n-1) + . . . + bn) / ((s + p1)^3 (s + p4)(s + p5) . . . (s + pn))

Then the Jordan canonical form representation is given by a state
matrix containing a 3 x 3 Jordan block with -p1 on the diagonal and 1's
on the superdiagonal, with the remaining diagonal entries -p4, . . . , -pn;
the input matrix has entries 0, 0, 1 for the Jordan block and 1 for each
distinct pole, and the output equation is

y = [ c1   c2   . . .   cn ] x + b0 u

EXAMPLE:

Consider the system given by

Obtain state-space representations in the controllable canonical form,
observable canonical form, and diagonal canonical form.
STATE-TRANSITION MATRIX.

We can write the solution of the homogeneous state equation ẋ = Ax as

x(t) = Φ(t) x(0)

We see that the solution is simply a transformation of the initial
condition. Hence, the unique matrix Φ(t) is called the state-transition
matrix, where

Φ(t) = e^(At) = L^(-1)[(sI - A)^(-1)]
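
The state-transition matrix can be evaluated numerically as a matrix
exponential. A minimal sketch, with an assumed A matrix:

# Phi(t) = e^(At) computed with scipy's matrix exponential.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # assumed state matrix

def phi(t):
    """State-transition matrix Phi(t) = e^(At)."""
    return expm(A * t)

x0 = np.array([1.0, 0.0])         # initial state
print(phi(0.0))                    # Phi(0) = I
print(phi(1.0) @ x0)               # x(1) = Phi(1) x(0) for the homogeneous system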
PROPERTIES OF STATE-TRANSITION MATRICES
The important properties of the state-transition matrix Φ(t), for the
time-invariant system ẋ = Ax, are:

1. Φ(0) = I
2. Φ^(-1)(t) = Φ(-t)
3. x(0) = Φ(-t) x(t)
4. Φ(t2 - t1) Φ(t1 - t0) = Φ(t2 - t0)
5. [Φ(t)]^k = Φ(kt)
proof

EXAMPLE
Obtain the state-transition matrix of the following
system:
THE SOLUTION OF THE NONHOMOGENEOUS STATE
EQUATION  ẋ = Ax + Bu

The Laplace transform of this equation yields

X(s) = (sI - A)^(-1) x(0) + (sI - A)^(-1) B U(s)

The inverse Laplace transform of this last equation can be obtained by
use of the convolution integral as follows:

x(t) = e^(At) x(0) + ∫[0 to t] e^(A(t - τ)) B u(τ) dτ
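
The convolution-integral solution can be checked numerically against a
standard linear-system simulator. A minimal sketch, with A, B, x(0) and a
unit-step input assumed for illustration:

# x(t) = e^(At) x(0) + integral_0^t e^(A(t - tau)) B u(tau) dtau,
# evaluated with a trapezoidal rule and compared with scipy.signal.lsim.
import numpy as np
from scipy.linalg import expm
from scipy import signal

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
t = np.linspace(0.0, 5.0, 501)
u = np.ones_like(t)                              # unit-step input

def x_of(tk):
    """State at time tk from the convolution-integral solution."""
    taus, us = t[t <= tk], u[t <= tk]
    integrand = np.array([expm(A * (tk - tau)) @ (B[:, 0] * uk)
                          for tau, uk in zip(taus, us)])
    return expm(A * tk) @ x0 + np.trapz(integrand, taus, axis=0)

plant = signal.StateSpace(A, B, np.eye(2), np.zeros((2, 1)))
_, _, x_ref = signal.lsim(plant, U=u, T=t, X0=x0)   # reference solution

print(x_of(5.0))      # convolution-integral result at t = 5
print(x_ref[-1])      # should closely agree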
EXAMPLE

Obtain the time response of the following system:


CONTROLLABILITY:

A system is said to be controllable at time t0 if it is possible by means of
an unconstrained control vector to transfer the system from any initial
state x(t0) to any other state in a finite interval of time.

OBSERVABILITY:

A system is said to be observable at time t0 if, with the system in state
x(t0), it is possible to determine this state from the observation of the
output over a finite time interval.

 The concepts of controllability and observability were introduced by
Kalman. They play an important role in the design of control systems
in state space. In fact, the conditions of controllability and
observability may govern the existence of a complete solution to the
control system design problem.
 Although most physical systems are controllable and observable, the
corresponding mathematical models may not possess the property of
controllability and observability. It is therefore necessary to know the
conditions under which a system is controllable and observable.

COMPLETE STATE CONTROLLABILITY OF CONTINUOUS-
TIME SYSTEMS

Consider the continuous-time system

ẋ = Ax + Bu

 The system described by the equation is said to be state controllable at
t = t0 if it is possible to construct an unconstrained control signal that
will transfer an initial state to any final state in a finite time interval
t0 < t < t1.

 If every state is controllable, then the system is said to be completely
state controllable.

If the system is described by ẋ = Ax + Bu, the condition for complete
state controllability is that the matrix

[ B   AB   A^2 B   . . .   A^(n-1) B ]

be of rank n, or contain n linearly independent column vectors.

This matrix is commonly called the controllability matrix.
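
A minimal numerical sketch of this rank test, with A and B assumed for
illustration:

# Build the controllability matrix [B  AB ... A^(n-1)B] and check its rank.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # assumed state matrix (n = 2)
B = np.array([[0.0],
              [1.0]])             # assumed input matrix

n = A.shape[0]
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
print(ctrb)                                 # [B  AB]
print(np.linalg.matrix_rank(ctrb) == n)     # True -> completely state controllable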

EXAMPLE
CONDITION FOR COMPLETE STATE CONTROLLABILITY
IN THE S PLANE.

 The condition for complete state controllability can be stated in
terms of transfer functions or transfer matrices.

 It can be proved that a necessary and sufficient condition for
complete state controllability is that no cancellation occur in the
transfer function or transfer matrix.

 If cancellation occurs, the system cannot be controlled in the
direction of the cancelled mode.
OUTPUT CONTROLLABILITY.

In the practical design of a control system, we may want to control
the output rather than the state of the system. Complete state
controllability is neither necessary nor sufficient for controlling the
output of the system. For this reason, it is desirable to define separately
complete output controllability.

Consider the system described by

ẋ = Ax + Bu
y = Cx + Du

 The system described by these equations is said to be completely
output controllable if it is possible to construct an unconstrained
control vector u(t) that will transfer any given initial output y(t0) to
any final output y(t1) in a finite time interval t0 < t < t1.
It can be shown that the system is completely output controllable if and
only if the m x (n+1)r matrix

[ CB   CAB   CA^2 B   . . .   CA^(n-1) B   D ]

is of rank m.
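
A corresponding sketch of the output-controllability rank test, again with
assumed matrices:

# Build [CB  CAB ... CA^(n-1)B  D] and compare its rank with m.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

n, m = A.shape[0], C.shape[0]
blocks = [C @ np.linalg.matrix_power(A, i) @ B for i in range(n)] + [D]
out_ctrb = np.hstack(blocks)
print(np.linalg.matrix_rank(out_ctrb) == m)   # True -> completely output controllable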
Uncontrollable System. An uncontrollable system has a subsystem
that is physically disconnected from the input.

Stabilizability. For a partially controllable system, if the
uncontrollable modes are stable and the unstable modes are
controllable, the system is said to be stabilizable.
OBSERVABILITY

Consider the unforced system ẋ = Ax with output y = Cx. The system is
completely observable if and only if the n x nm matrix

[ C*   A*C*   . . .   (A*)^(n-1) C* ]

is of rank n or has n linearly independent column vectors, where *
denotes the conjugate transpose. This matrix is called the observability
matrix.
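
A minimal sketch of the observability rank test, using the equivalent
row-stacked form [C; CA; ...; CA^(n-1)] (its rank equals that of the matrix
above), with A and C assumed for illustration:

# Build the observability matrix and check its rank.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
print(np.linalg.matrix_rank(obsv) == n)   # True -> completely observable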

EXAMPLE
Since the rank of the controllability matrix is 2, the system is
completely state controllable.

For output controllability, let us find the rank of the matrix

The rank of this matrix is 1. Hence, the system is completely output
controllable.
To test the observability condition, examine the rank of
POLE PLACEMENT

 Assume that all state variables are measurable and are available for
feedback.
 Poles of the closed-loop system may be placed at any desired
locations by means of state feedback through an appropriate state
feedback gain matrix.
 The present design technique begins with a determination of the
desired closed-loop poles based on the transient-response and/or
frequency-response requirements, such as speed, damping ratio, or
bandwidth, as well as steady-state requirements.

 Let us assume that we decide that the desired closed-loop poles are to
be at s = μ1, s = μ2, . . . , s = μn. By choosing an appropriate gain
matrix for state feedback, it is possible to force the system to have
closed-loop poles at the desired locations, provided that the original
system is completely state controllable.
Consider a linear dynamic system in the state space

ẋ = Ax + Bu

and choose the control signal to be

u = -Kx

This means that the control signal u is determined by an
instantaneous state. Such a scheme is called state feedback. The 1 × n
matrix K is called the state feedback gain matrix.
Then the state equation becomes

ẋ(t) = (A - BK) x(t)

and its solution is

x(t) = e^((A - BK) t) x(0)

where x(0) is the initial state caused by external disturbances. The
stability and transient response characteristics are determined by the
eigenvalues of matrix A - BK.

• If matrix K is chosen properly, the matrix A - BK can be made an
asymptotically stable matrix, and for all x(0) ≠ 0 it is possible to
make x(t) approach 0 as t approaches infinity.

• The eigenvalues of matrix A - BK are called the regulator poles.

• If these regulator poles are placed in the left-half s plane, then x(t)
approaches 0 as t approaches infinity. The problem of placing the
regulator poles (closed-loop poles) at the desired locations is called
the pole-placement problem.
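
A minimal pole-placement sketch, with the plant matrices and the desired
closed-loop pole locations assumed for illustration, using
scipy.signal.place_poles:

# Compute a state feedback gain K that places the closed-loop poles.
import numpy as np
from scipy import signal

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
desired_poles = np.array([-4.0, -5.0])     # assumed desired regulator poles

result = signal.place_poles(A, B, desired_poles)
K = result.gain_matrix                      # 1 x n state feedback gain matrix
print(K)
print(np.linalg.eigvals(A - B @ K))         # close to the desired poles -4, -5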

Determination of Matrix K

a) Using Transformation Matrix T.


b) Using Direct Substitution Method.
c) Determination of Matrix K Using Ackermann's Formula
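
As an illustration of method (c), a minimal sketch of Ackermann's formula for
a single-input system, with the plant and the desired poles assumed:

# Ackermann's formula: K = [0 ... 0 1] M^(-1) phi(A), where M is the
# controllability matrix and phi(s) is the desired characteristic polynomial.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
desired_poles = [-4.0, -5.0]

n = A.shape[0]
M = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
alpha = np.poly(desired_poles)              # phi(s) = s^2 + 9s + 20
phi_A = sum(a * np.linalg.matrix_power(A, n - i) for i, a in enumerate(alpha))
e_last = np.zeros((1, n)); e_last[0, -1] = 1.0
K = e_last @ np.linalg.inv(M) @ phi_A
print(K)                                     # state feedback gain, here [18, 6]
print(np.linalg.eigvals(A - B @ K))          # closed-loop poles -> -4, -5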
DETERMINATION OF MATRIX K USING
TRANSFORMATION MATRIX T

Consider the system ẋ = Ax + Bu, and let the control signal be given by
u = -Kx.

Step 1: Check the controllability condition for the system. If the system
is completely state controllable, then use the following steps:

Step 2: Form the characteristic polynomial for matrix A, that is,

|sI - A| = s^n + a1 s^(n-1) + . . . + a(n-1) s + an

and determine the values of a1, a2, . . . , an.

Step 3: Determine the transformation matrix T that transforms the
system state equation into the controllable canonical form. (If the given
system equation is already in the controllable canonical form, then T =
I.) It is not necessary to write the state equation in the controllable
canonical form. All we need here is to find the matrix T. The
transformation matrix T is given by

T = MW

where M = [ B   AB   . . .   A^(n-1) B ] is the controllability matrix and
W is formed from the coefficients a1, a2, . . . , a(n-1) of the
characteristic polynomial of A:

     [ a(n-1)   a(n-2)   . . .   a1   1 ]
     [ a(n-2)   a(n-3)   . . .   1    0 ]
W =  [    .        .             .    . ]
     [   a1       1      . . .   0    0 ]
     [    1       0      . . .   0    0 ]

Step 4: Using the desired eigenvalues (desired closed-loop poles), write
the desired characteristic polynomial:

(s - μ1)(s - μ2) . . . (s - μn) = s^n + α1 s^(n-1) + . . . + α(n-1) s + αn

and determine the values of α1, α2, . . . , αn.

Step 5: The required state feedback gain matrix K can be determined as

K = [ αn - an   α(n-1) - a(n-1)   . . .   α2 - a2   α1 - a1 ] T^(-1)

DETERMINATION OF MATRIX K USING DIRECT
SUBSTITUTION METHOD

 If the system is of low order (n <= 3), direct substitution of matrix
K into the desired characteristic polynomial may be simpler. For
example, if n = 3, then write the state feedback gain matrix K as

K = [ k1   k2   k3 ]

and substitute it into the desired characteristic polynomial
|sI - A + BK| = (s - μ1)(s - μ2)(s - μ3).

 Since both sides of this characteristic equation are polynomials in
s, by equating the coefficients of like powers of s on both sides, it
is possible to determine the values of k1, k2, and k3, as shown in the
sketch below.
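
A minimal sketch of the direct-substitution method using symbolic algebra,
with a third-order plant and the desired poles assumed for illustration:

# Equate |sI - A + BK| with the desired characteristic polynomial and solve
# for k1, k2, k3 by matching coefficients of like powers of s.
import sympy as sp

s, k1, k2, k3 = sp.symbols('s k1 k2 k3')
A = sp.Matrix([[0, 1, 0],
               [0, 0, 1],
               [-1, -5, -6]])            # assumed plant (controllable canonical form)
B = sp.Matrix([[0], [0], [1]])
K = sp.Matrix([[k1, k2, k3]])

closed_loop = (s * sp.eye(3) - (A - B * K)).det()
desired = (s + 2) * (s + 3) * (s + 4)    # assumed desired poles at -2, -3, -4

eqs = sp.Poly(sp.expand(closed_loop - desired), s).all_coeffs()
print(sp.solve(eqs, [k1, k2, k3]))       # {k1: 23, k2: 21, k3: 3}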
Prepared by
Abhijith M
M4 GNC 01
