
Dedan Kimathi University of Technology

BSc Mechanical Engineering

EMG 2410 Control Engineering II

January 12, 2015


EMG 2410 Control Engineering II

Prerequisites

EMG 2403 Control Engineering I

Purpose

The aim of this course is to enable the student to:

1. understand the design of controllers using PIDs

2. understand state space representation and the design of state space controllers

3. understand the various nonlinearities in control systems

4. understand the concept of optimal control systems

Learning Outcomes

At the end of this course, the student should be able to:

1. Design and implement PI, PD and PID controllers

2. Analyse, design and implement controllers based on the state space approach

3. Analyse nonlinearities in control systems

4. Perform optimization of control systems

Course Outline

Controllers: Basic control actions, automatic controllers, actuators, and sensors. Design using
various control actions: Design specifications, controller configurations. Proportional (P) control
action, Derivative (D) control action, Integral (I) control action, Proportional plus Derivative (PD)
control action, Proportional plus Integral (PI) control action. Design with the PID controllers.

State-space: State variable feedback controller design; controllability, observability, eigenvalue
placement, observer design for linear systems.

Introduction to nonlinear control systems: Sources of nonlinearity, mathematical description of


nonlinear systems. Systems with random inputs.

Introduction to optimal and adaptive control: formulations, strategies, Lagrange multipliers, Riccati
equation, maximum principle.

Teaching Methodology
2 hour lectures and 1 hour tutorial per week, and at least five 3-hour laboratory sessions per
semester organized on a rotational basis.

Instructional Materials/Equipment

1. Control Engineering laboratories;

2. Computer laboratory;

3. Overhead projector;

Prescribed Text Books

1. Ogata K. (1996) Modern Control Engineering, Prentice Hall, 3rd Ed.

2. Shankar Sastry, Nonlinear Systems: Analysis, Stability, and Control, Springer, 1999 (SS)

3. Hassan K. Khalil, Nonlinear Systems, 3rd Ed., Prentice Hall, 2002 (HK)

4. Donald E. Kirk, Optimal Control Theory, Dover publications, 2004

5. Naidu D. S., Optimal Control Systems, CRC press, 2003

References

1. Gene F. (2005) Feedback Control of Dynamic Systems, Prentice Hall, 5th Ed.

2. Hans P. Geering, Optimal Control with Engineering Applications, Springer, 2007

3. Journal of Dynamic Systems, Measurement, and Control

Contents
1 P, PI, PD and PID Controllers 1
1.1 Proportional (P) Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Integral Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3 Derivative Control Action . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4 Proportional and Integral control (PI) . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5 Proportional and Derivative control . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.6 Tuning of PID Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.7 Electronic P, PI, PD and PID Controllers . . . . . . . . . . . . . . . . . . . . . . . . 23

2 State space representation 28


2.1 State Equations from Transfer Functions . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2 Similarity Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.3 Eigen Values and Eigen Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.4 Repeated Eigen values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.5 Laplace Transform Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.6 Transfer Function from state space equations . . . . . . . . . . . . . . . . . . . . . . 46
2.7 Time Solution of state equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.8 Controllability and Observability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
2.9 Pole Placement Design Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
2.10 State Observer Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

3 Non-Linear Control 72
3.1 Equilibrium Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.2 Linearization of non linear state space model and local stability . . . . . . . . . . . . 76
3.3 Describing functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.4 Phase plane Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.5 Lyapunov Stability Criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

4 Optimal Control 108


4.1 Performance Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.2 Solution to the Optimal Control problem . . . . . . . . . . . . . . . . . . . . . . . . 109
4.3 Linear Quadratic Optimal Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

PID Controllers Lecture Notes by A. M. Muhia

1 P, PI, PD and PID Controllers

1.1 Proportional (P) Control

The control action or signal u (t) is proportional to the error signal e (t) i.e

u (t) = kp e (t) (1)

where kp is the proportional gain constant

Figure 1: General Proportional Controller

The controller is an amplifier with a gain kp (the gain may be adjustable)


Proportional control is the basis of root-locus design
Taking the Laplace transform of the equation 1

U (s) = kp E (s) (2)

⇒ kp = U (s) / E (s)

The closed loop transfer function is given by

Y (s) / R (s) = kp G (s) / (1 + kp G (s))    (3)

The characteristic equation of the closed-loop system is

1 + kp G (s) = 0 (4)

The gain kp in equation 4 is varied to generate the root-locus.


Example 1.1
Construct the root-locus for the first order control system shown below
open loop transfer function


Figure 2: Example 1.1

G (s) = k / (T s)
characteristic equation

1 + G (s) H (s) = 0

1 + k / (T s) = 0

T s + k = 0

s = −k/T
As k is varied from zero to ∞, the locus commences at the open-loop pole s = 0 and terminates
at minus infinity on the real axis.
Example 1.2
Construct the root locus for the 2nd order control system below

Figure 3: Example 1.2

open loop transfer function

G (s) H (s) = k / (s (s + 4))


open loop poles: s = 0, −4

open loop zeros: none


characteristic equation

1 + G (s) H (s) = 0

1 + k / (s (s + 4)) = 0

s² + 4s + k = 0

for different values of k the roots can be obtained as

k      characteristic equation    roots
0      s² + 4s = 0                s = 0, −4
4      s² + 4s + 4 = 0            s = −2 ± j0
8      s² + 4s + 8 = 0            s = −2 ± j2
16     s² + 4s + 16 = 0           s = −2 ± j3.46

The loci commence at the open-loop poles s = 0, −4 when k = 0. At k = 4 they branch into the
complex plane.
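The table above can be checked numerically. A minimal sketch using numpy (an assumed tool, not part of the notes):

```python
import numpy as np

# Sketch: reproduce the root-locus table for s^2 + 4s + k = 0
# at the gains used in the example.
for k in [0, 4, 8, 16]:
    print(k, np.round(np.roots([1, 4, k]), 2))
```

As k grows past 4 the two real roots meet at s = −2 and split into a complex-conjugate pair, exactly as the table shows.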

Proportional Control of type 0, first order system

Consider the system below where k and c are constants

Figure 4: Proportional Control of type 0, first order system

The closed loop transfer function of the system is given by


Y (s) / R (s) = kp / (ks + c + kp)

⇒ Y (s) = [kp / (ks + c + kp)] R (s)

If the input is a unit step, then R (s) = 1/s and the steady state value of Y (s) is given by


Yss = lim s→0 s · [kp / (ks + c + kp)] · (1/s)

    = kp / (c + kp) < 1

    = 1 / (1 + c/kp)

It can be seen that Yss → 1 as kp → ∞


The step response of the closed loop system is as shown below

Figure 5: Response of Proportional Control of type 0, first order system

The difference between Yss and the ideal value is known as the steady state error ess
The steady state error can be reduced by increasing the gain kp and can be eliminated completely
by increasing kp to infinity
In practical control systems, we cannot increase kp indefinitely as this

• could make the system unstable

• may be expensive (a high gain amplifier which is expensive to design and consumes more
power)

• may not be physically possible; amplifiers have saturation limits and infinite gain is not
practically realizable

• amplifier noise
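The trade-off described above can be illustrated numerically. A minimal sketch with an illustrative value c = 3 (assumed, not from the notes):

```python
# Sketch: steady-state output kp/(c + kp) of the type 0, first order loop
# for increasing proportional gain; the error shrinks but never reaches zero.
c = 3.0
for kp in [1.0, 10.0, 100.0, 1000.0]:
    yss = kp / (c + kp)
    ess = 1 - yss  # steady state error
    print(kp, round(yss, 4), round(ess, 4))
```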


The time constant of a system gives an estimate of its speed of response.
The open loop transfer function of the plant is

1 / (ks + c)

and the open loop time constant is τ = k/c
for the closed loop system

τ = k / (c + kp)

since k , c , and kp are positive constants,

k / (c + kp) < k/c

therefore increasing kp reduces the time constant and this makes the system faster.

Proportional Control of a type 0, second order system

Consider the system shown below

Figure 6: Proportional Control of type 0, second order system

open loop transfer function

G (s) = kp / (s² + as + b)
closed loop transfer function

Y (s) / R (s) = G (s) / (1 + G (s))

             = kp / (s² + as + b + kp)

Steady state value of Y (s) for a unit step input is

Yss = lim s→0 s · [kp / (s² + as + b + kp)] · (1/s)


    = kp / (b + kp) < 1

Proportional Control of a type 1, second order system

Consider the system shown below

Figure 7: Proportional Control of type 1, second order system

The open loop transfer function is given by

G (s) = kp / (ks² + cs)
The closed loop transfer function is given by

Y (s) / R (s) = kp / (ks² + cs + kp)

Steady state value of Y (s) for a unit step input

Yss = lim s→0 s · [kp / (ks² + cs + kp)] · (1/s)

    = kp/kp = 1

Hence the set point is attained


SUMMARY
In general for proportional control

• The output has a non zero steady state error to step inputs if the plant is type 0.
The steady state error can be reduced by increasing the gain kp when possible

• Increasing the gain kp reduces the time constant making the system faster

Example 1.3
Consider the system below where k = 2 , c = 3. Determine


Figure 8: Example 1.3

1. The value of kp to give a maximum steady state error of 0.2 for a unit step input

2. The time constant of the resulting system

Solution

Yss = kp / (c + kp)

ess = 1 − Yss

    = 1 − kp / (c + kp)

    = c / (c + kp)

3 / (3 + kp) = 0.2

kp = 12

τ = k / (c + kp)

  = 2/15
Example 1.4
Consider the system below. Given that k = 10 , c = 5, determine kp such that the damping factor
of the closed loop system ξ = 0.7071 and compute the time constant τ of the resulting closed loop
system
closed loop transfer function

Y (s) / R (s) = kp / (ks² + cs + kp)

             = (kp/k) / (s² + (c/k)s + kp/k)    (5)


Figure 9: Example 1.4

The standard second order system is given by

Y (s) / R (s) = ωn² / (s² + 2ξωn s + ωn²)    (6)

where
ωn - undamped natural frequency
ξ - damping factor
comparing the characteristic equations of (5) and (6)
ωn² = kp/k    ⇒    ωn = √(kp/k)

2ξωn = c/k

ξ = (1/(2ωn)) (c/k)

solving for kp

0.7071 = (1/2) √(k/kp) × (c/k)

0.7071² = (1/4) (k/kp) (c²/k²)

0.7071² = c² / (4kkp)    but c = 5 , k = 10

kp = 25 / (40 × 0.7071²)

   = 1.25

Time constant τ = 1/(ξωn)


ωn = √(kp/k) = √(1.25/10)

   = 0.3536

τ = 1 / (0.7071 × 0.3536)

  = 4
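The arithmetic of Example 1.4 can be sketched in a few lines as a check (not part of the original notes):

```python
import math

# Sketch: recompute kp, wn and tau for Example 1.4 (k=10, c=5, xi=0.7071).
k, c, xi = 10.0, 5.0, 0.7071
kp = c**2 / (4 * k * xi**2)   # from xi = c / (2*sqrt(k*kp))
wn = math.sqrt(kp / k)        # undamped natural frequency
tau = 1 / (xi * wn)           # closed-loop time constant
print(round(kp, 3), round(wn, 4), round(tau, 2))
```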

1.2 Integral Control

For integral control, the control signal u (t) is given by

u (t) = ki ∫₀ᵗ e (t) dt    (1)

where ki is the integral gain constant


converting equation 1 to frequency domain

U (s) = (ki/s) E (s)

Integral Control of type 0, first order system

Consider the system shown below. It is assumed that k and c are positive constants

Figure 10: Integral Control of type 0, first order system

The open loop transfer function is

ki / (s (ks + c))

The closed loop transfer function is obtained as

Y (s) / R (s) = ki / (ks² + cs + ki)


             = (ki/k) / (s² + (c/k)s + ki/k)    (2)
The integral control action increases the type of system by 1 and the order of the system by 1
If the input is unit step input, the steady state value of Y (s) becomes

Yss = lim s→0 s · [(ki/k) / (s² + (c/k)s + ki/k)] · (1/s)

    = 1

The steady state error is zero


Comparing (2) with the general equation of 2nd order system

Y (s) / R (s) = ωn² / (s² + 2ξωn s + ωn²)

⇒ ωn = √(ki/k)

⇒ 2ξωn = c/k

ξ = (1/(2ωn)) (c/k)

  = c / (2√(ki k))

This implies that it is not possible to vary ωn and ξ independently. Hence the integral control action
on its own does not give the designer the flexibility of setting the values of ωn and ξ independently
since it has only one design parameter ki
SUMMARY
Integral control adds a pole at the origin and this increases the type of the system by 1
It improves the steady state performance of a system.

NB: When used with a type 0 system, the steady state error is eliminated.
Adding a pole at the origin pulls the root-locus to the right, thus lowering the stability margin of
the system. In some cases integral control could even make a system unstable.


Example 1.5

Type 1 second order system

Figure 11: Integral Control of type 1, second order system

open loop transfer function

G (s) = ki / (s² (ks + c)) = (ki/k) / (s² (s + c/k))

closed loop transfer function

= (ki/k) / (s³ + (c/k)s² + ki/k)

open loop poles

s² (s + c/k) = 0

s1 = 0    s2 = 0    s3 = −c/k

Figure 12: Root Locus for Integral Control of type 1, second order system


For these reasons integral control is normally not used on its own, but is combined with other laws.

1.3 Derivative Control Action

The control signal u (t) is given by

u (t) = kd (d/dt) e (t)
where kd is the derivative gain
When the slope of e (t) is large at the current time, the magnitude of u (t) will increase, i.e. the
derivative control law provides a large corrective action in anticipation, before the error gets large.
This is the main advantage of derivative control.
However it has some shortcomings.
If the error is constant, there is no corrective action taken even if the error is large, i.e. it cannot
bring a constant or slowly varying error to zero by itself; it allows drift.
In the frequency domain

U (s) = s kd E (s)

⇒ U (s) / E (s) = s kd
The gain increases with frequency and this has the effect of amplifying high frequency noise.
The gain curve has an infinite gain at infinite frequency for an ideal derivative controller. This is
physically unrealizable.
For these reasons the derivative control action is not used by itself in control systems; usually it
is combined with P or PI controls.

1.4 Proportional and Integral control (PI)

The control signal u (t) is given by

u (t) = kp e (t) + kI ∫₀ᵗ e (t) dt

      = kp e (t) + (kp/TI) ∫₀ᵗ e (t) dt

where kp/TI = kI

⇒ u (t) = kp [ e (t) + (1/TI) ∫₀ᵗ e (t) dt ]    (1)

kp - proportional gain


kI - Integral gain
TI - Integral time
Taking Laplace transform of equation (1)
 
U (s) = kp [ 1 + 1/(sTI) ] E (s)

PI control gives two design parameters kp and kI or (kp and TI ).


The controller adds a pole at the origin and a zero at s = −kI/kp to the open loop transfer function.
The type of the system is increased by one.

PI Control of type 0, first order system

Consider the system shown below

Figure 13: Proportional plus Integral Control of type 0, first order system

Open loop transfer function

G (s) = (kp s + ki) / (ks² + cs)
Closed loop transfer function

Y (s) / R (s) = (kp s + ki) / (ks² + (c + kp)s + ki)

             = ((kp/k)s + ki/k) / (s² + ((c + kp)/k)s + ki/k)    (2)
For a unit step input

Yss = lim s→0 s · [((kp/k)s + ki/k) / (s² + ((c + kp)/k)s + ki/k)] · (1/s)

    = 1


Hence the set point is attained


Comparing the characteristic equation of (2) and that of general second order system, then
ωn = √(ki/k)

ξ = (c + kp) / (2√(k ki))
This shows that it is possible to vary ωn and ξ independently
Example 1.6

Consider the system above with k = 1 , c = 1 and time constant τ = 0.2. Determine the PI
controller gains such that ξ = 0.7071

τ = 1/(ξωn)

ωn = 1/(ξτ) = 1/(0.7071 × 0.2)

   = 7.071

ωn = √(ki/k)

ωn² = ki/k

ki = ωn² k

   = 50

ξ = (c + kp) / (2√(k ki))

kp = 2ξ√(k ki) − c

   = 2 × 0.7071 × 7.071 − 1


=9
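The gain computations of Example 1.6 can be sketched as a numerical check (not in the original notes):

```python
import math

# Sketch: PI gains for Example 1.6 (k=1, c=1, tau=0.2, xi=0.7071).
k, c, tau, xi = 1.0, 1.0, 0.2, 0.7071
wn = 1 / (xi * tau)                   # natural frequency, ~7.071
ki = wn**2 * k                        # from wn^2 = ki/k, ~50
kp = 2 * xi * math.sqrt(k * ki) - c   # from xi = (c+kp)/(2*sqrt(k*ki)), ~9
print(round(wn, 3), round(ki, 1), round(kp, 2))
```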

In general the PI controller increases the type of the system by one and therefore improves the steady
state performance of the system.
However, it adds a pole at the origin which pulls the root locus to the right, thus degrading the
transient response of the system.
The additional pole at the origin also lowers the stability margin of the system and for this reason
it is not used with type 1 or higher systems, as such systems could become unstable.

1.5 Proportional and Derivative control

The control signal u (t) is given by

u (t) = kp e (t) + kD (d/dt) e (t)    (1)

Taking Laplace transform of (1)

U (s) = kp E (s) + s kD E (s)

      = (kp + s kD) E (s)

      = kp [1 + sTD] E (s)

where
kp TD = kD

PD Control of type 0, first order system

Consider the system below where k and c are positive constants

Figure 14: Proportional plus Derivative Control of type 0, first order system


The closed loop transfer function of the system is given by

Y (s) / R (s) = (s kD + kp) / ((kD + k)s + c + kp)

For a unit step input

Yss = kp / (c + kp) < 1
The output has a non-zero steady state error.

PD control does not improve the steady state performance of a system.
In general PD control increases the damping of the system and this reduces the overshoots. The
D component of the controller also makes the system faster: it reduces the settling time and
improves the transient response.
PD control adds a zero to the system, which pulls the root-locus to the left and improves the
stability margin of the system.

Proportional plus Integral plus Derivative (PID) Control


The control signal u (t) is given by

u (t) = kp e (t) + kI ∫₀ᵗ e (t) dt + kD (d/dt) e (t)    (2)

Taking Laplace transform of (2)

U (s) = kp E (s) + (kI/s) E (s) + s kD E (s)

      = E (s) [ kp + kI/s + s kD ]

      = kp E (s) [ 1 + 1/(sTI) + sTD ]

      = kp E (s) [ (s²TI TD + sTI + 1) / (sTI) ]
PID adds two zeros and a pole to the system.
It increases the system type by 1.
The PID controller is used so as to utilize the best properties of the PI and PD controllers. In
particular, it:
Improves the transient response by increasing damping, resulting in smaller overshoots and
improved speed of response


Improves the steady state performance


However, the PID controller requires more components to implement and is therefore the most expensive

Example 1.7
A PID controller is inserted in series with a system having a transfer function
G (s) = 10 / ((s + 1) (s + 2))
The system has unity feedback. Find the gain constants of the PID controller required to locate the
closed loop poles at s = −50 , s = −4 ± j5
Solution
The system is shown below

Figure 15: Example 1.7

closed loop transfer function

Y (s) / R (s) = 10kp (s²TI TD + sTI + 1) / (s³TI + (3TI + 10kp TI TD)s² + (2TI + 10kp TI)s + 10kp)

             = (10kp/TI)(s²TI TD + sTI + 1) / (s³ + (3 + 10kp TD)s² + (2 + 10kp)s + 10kp/TI)

Characteristic equation of closed loop system is

s³ + (3 + 10kp TD)s² + (2 + 10kp)s + 10kp/TI = 0    (3)

For closed loop poles s = −50 , s = −4 ± j5 the characteristic polynomial is

(s + 50) (s + 4 + j5) (s + 4 − j5)

= s³ + 58s² + 441s + 2050 = 0    (4)

Equating coefficients of equal powers of polynomials (3) and (4)


kp = 43.9

TI = 0.214

TD = 0.125
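The pole-placement result above can be verified by rooting the closed-loop characteristic polynomial with the computed gains. A sketch using numpy (an assumed tool, not part of the notes):

```python
import numpy as np

# Sketch: with kp=43.9, TI=0.214, TD=0.125, the characteristic polynomial
# s^3 + (3 + 10*kp*TD)s^2 + (2 + 10*kp)s + 10*kp/TI should have roots
# close to s = -50 and s = -4 +/- j5 (small offsets come from rounding
# the gains to three figures).
kp, TI, TD = 43.9, 0.214, 0.125
coeffs = [1, 3 + 10 * kp * TD, 2 + 10 * kp, 10 * kp / TI]
print(np.round(np.roots(coeffs), 2))
```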
Example
G (s) = 1 / ((s + 1) (s + 2))

Desired poles s = −2 ± j2√3

1.6 Tuning of PID Controllers

Ziegler-Nichols Rules

Tuning is the process of selecting the controller parameters to meet given performance
specifications. Ziegler and Nichols suggested rules for tuning PID controllers based on the
experimental step response or based on the value of kp that results in marginal stability when
only the proportional control action is used.

Ziegler-Nichols rules are used to select PID coefficients such that the step response of the resulting
system has a quarter decay ratio, which is satisfactory for many systems.

The concept of a quarter decay ratio is illustrated in the figure below

Figure 16: Quarter Decay Ratio

A quarter decay ratio means that


b/a = c/b = 1/4 etc
Ziegler and Nichols proposed two methods


The Ultimate Cycle method

The set-up below is used

Figure 17: Ultimate Cycle method

The proportional controller has an adjustable gain.


The gain of the proportional controller is increased until the plant exhibits sustained oscillations.
NB: If the system does not exhibit sustained oscillations, whatever value the proportional gain
may have, then this method does not apply.

The period of oscillation pu (known as ultimate period) and the gain that causes the sustained
oscillations ku (known as ultimate gain) are determined

Figure 18: Ultimate Cycle sustained oscillations

Ziegler and Nichols proposed the following coefficients for PID control

Type of controller    kp        TI         TD
P                     0.5ku     -          -
PI                    0.45ku    0.833pu    -
PD                    0.6ku     -          0.125pu
PID                   0.6ku     0.5pu      0.125pu


ku - ultimate gain
pu - ultimate period

The Process Reaction method

For the process reaction method, the open loop step response of the system is obtained

Figure 19: Process Reaction method

The open loop response is shown above. It is S-shaped.

NB: If an open loop step response is not S-shaped, this method does not apply
The parameters to be determined from the step response are the delay time L and the gradient at
the steepest point.

To obtain the delay time L a tangent is drawn at the steepest point of the step response and
extended to the time t axis as shown below

Figure 20: Process Reaction tuning

The gradient at the steepest point is given by

R = k/T

For this approach, Ziegler and Nichols proposed the following coefficients for PID control


Type of controller    kp          TI       TD
P                     1/(LR)      -        -
PI                    0.9/(LR)    L/0.3    -
PID                   1.2/(LR)    2L       0.5L
Example 1.8
Use the ultimate cycle method to determine the gains for P, PI and PID controllers for the following
plant
G (s) = 1 / ((s + 1) (s + 2) (s + 3)) = 1 / (s³ + 6s² + 11s + 6)
Solution
A proportional controller is first used
The closed loop transfer function for the given system with a proportional controller is

kp / (s³ + 6s² + 11s + 6 + kp)

The characteristic equation is

s³ + 6s² + 11s + 6 + kp = 0

Setting s = jω and kp = ku

(jω)³ + 6 (jω)² + 11 (jω) + 6 + ku = 0

−jω³ − 6ω² + 11jω + 6 + ku = 0

Equating imaginary parts to 0

−jω³ + 11jω = 0

ω² = 11

ω = ±√11

ultimate period pu = 2π/ω

pu = 2π/√11 = 1.89

Equating real parts


−6ω² + 6 + ku = 0

ku = 6ω² − 6

   = 6 × 11 − 6

   = 60

If P control is to be used with the given system then the gain kp should be set to

kp = 0.5ku

= 0.5 × 60

= 30

PI control

kp = 0.45ku

= 0.45 × 60

= 27

TI = 0.833 × pu

   = 0.833 × 1.89

   = 1.57

kI = kp/TI

   = 17.18

PID control

kp = 0.6ku = 36

TI = 0.5pu = 0.5 × 1.89 = 0.945

TD = 0.125 × 1.89 = 0.236

kI = kp/TI = 38.1

kD = TD kp = 8.52
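The ultimate-cycle numbers of this example can be reproduced directly; a minimal sketch (a check, not part of the notes):

```python
import math

# Sketch: ultimate gain and period for G(s) = 1/((s+1)(s+2)(s+3)).
# Substituting s = jw into s^3 + 6s^2 + 11s + 6 + kp = 0:
#   imaginary part: -w^3 + 11w = 0  ->  w = sqrt(11)
#   real part:      -6w^2 + 6 + ku = 0  ->  ku = 6w^2 - 6
w = math.sqrt(11)
ku = 6 * w**2 - 6        # ultimate gain, 60
pu = 2 * math.pi / w     # ultimate period, ~1.89
print(round(ku, 1), round(pu, 2))
print(round(0.5 * ku, 1))                         # P:  kp
print(round(0.45 * ku, 1), round(0.833 * pu, 2))  # PI: kp, TI
```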

Example
The following table gives the measured open loop response of a system to a unit step input.
Use Ziegler-Nichols rules to determine the controller gains for P, PI, and PID controllers for the
system

t (sec)     0.00  0.5   1.0   1.5   2.0   2.5   3.0   3.5   4.0   4.5   5.0   5.5   6.0
Amplitude   0.00  0.01  0.06  0.15  0.28  0.43  0.57  0.70  0.80  0.88  0.92  0.95  0.96
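One possible numerical treatment of this exercise (an assumed finite-difference approach, not prescribed by the notes): take the steepest slope between successive samples as R and extend that tangent back to the time axis to estimate the delay L.

```python
import numpy as np

# Sketch: estimate the process-reaction parameters from the tabulated data.
t = np.arange(0.0, 6.5, 0.5)
y = np.array([0.00, 0.01, 0.06, 0.15, 0.28, 0.43, 0.57,
              0.70, 0.80, 0.88, 0.92, 0.95, 0.96])
slopes = np.diff(y) / np.diff(t)
i = int(np.argmax(slopes))             # steepest segment
R = slopes[i]
t_mid = (t[i] + t[i + 1]) / 2
y_mid = (y[i] + y[i + 1]) / 2
L = t_mid - y_mid / R                  # tangent intercept with the time axis
print(round(R, 2), round(L, 2))
print(round(1 / (L * R), 2))           # P controller: kp = 1/(L*R)
```

With this data the steepest segment lies between t = 2.0 and t = 2.5, which fixes both R and L.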

1.7 Electronic P, PI, PD and PID Controllers


The electronic P, PI, PD and PID controllers are based on the op-amp circuit shown in Figure
21 below, where Z1 (s) and Z2 (s) are impedances

Figure 21: Basic Op-amp Circuit

V2 (s) / V1 (s) = −Z2 (s) / Z1 (s)

Proportional controller

V2 (s) / V1 (s) = R2/R1 = kp
NB: R2 could be variable , making it possible to vary kp

Integral controller

V2 (s) / V1 (s) = 1/(sCR1) = kI/s


Figure 22: Electronic Proportional Controller

Figure 23: Electronic Integral Controller

v2 (t) = (1/(R1 C)) ∫₀ᵗ v1 (t) dt

Derivative controller

V2 (s) / V1 (s) = sCR1 = s kD

or

v2 (t) = CR1 dv1 (t)/dt
If the differentiator is implemented as shown in Figure 24 above, the circuit amplifies noise and the
op-amp immediately gets saturated.
A practical differentiator is shown in Figure 25 below


Figure 24: Ideal Electronic Derivative Controller

Figure 25: Practical Electronic Derivative Controller

V2 (s) / V1 (s) = sCR2 / (1 + sCR1)

This differentiator has a limited gain, i.e. the gain levels off at high frequency, so it does not get
saturated like the ideal one

Proportional and Derivative PD controller

The circuit is as shown in Figure 26 below


Figure 26: Electronic Proportional plus Derivative Controller

V2 (s) / V1 (s) = (R2/R1) [1 + sCR1]

⇒ kp = R2/R1

TD = R1 C

kD = R2 C

Proportional and Integral (PI) controller

Figure 27: Electronic Proportional plus Integral Controller

 
V2 (s) / V1 (s) = (R2/R1) [1 + 1/(sCR2)]


⇒ kp = R2/R1

TI = R2 C

kI = 1/(R1 C)
Proportional plus Integral plus Derivative (PID) Control

Figure 28: Electronic Proportional plus Integral plus Derivative Controller

V2 (s) / V1 (s) = (1 + sC1 R1) (1 + sC2 R2) / (sC2 R1)

                = (C1 R1 + C2 R2)/(C2 R1) + 1/(sC2 R1) + sC1 R2

⇒ kp = (C1 R1 + C2 R2) / (C2 R1)

kI = 1/(C2 R1)

kD = C1 R2
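With assumed component values (purely illustrative, not from the notes), the PID gains realized by this single op-amp circuit follow directly from the formulas above:

```python
# Sketch: PID gains of the single op-amp circuit, using assumed
# (illustrative) component values.
R1, R2 = 100e3, 100e3    # ohms (assumed)
C1, C2 = 1e-6, 10e-6     # farads (assumed)

kp = (C1 * R1 + C2 * R2) / (C2 * R1)
kI = 1 / (C2 * R1)
kD = C1 * R2
print(round(kp, 3), round(kI, 3), round(kD, 3))  # approximately 1.1, 1.0, 0.1
```

Note how the three gains are coupled through shared components, which is why the parallel form described next is preferred when independent tuning is needed.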

Parallel form of the PID controller

The PID controller can also be implemented as shown in Figure 29 below


The P, I and D components are connected in parallel and the output of the respective parts summed
together.
This requires more components than the previous form but it gives the designer more flexibility as
the P, I and D gains can be set independently


Figure 29: Parallel Proportional plus Integral plus Derivative Controller

2 State space representation


The state space approach is a generalized time-domain method of modelling, analysing and designing
a wide range of control systems.
The approach can deal with

1. Multiple input, multiple output (MIMO) systems or multivariable systems

2. Nonlinear and time varying systems

3. Alternative controller design approaches

The analysis requires

1. Input variables

2. Output variables

3. State variables


Definitions

State: The state of a dynamic system is the smallest set of variables (state variables) such that
knowledge of these variables at time t = to , plus knowledge of the inputs for t ≥ to , determines
the behaviour of the system for any time t ≥ to .

State variables: These are the variables making up the smallest set of variables that determine the
state of the dynamic system. Let a dynamic system have n variables x1 , x2 , . . ., xn describing the
behavior of the system while the input is given for t ≥ to and the initial state at t = to is specified.
If the future state of the system is thereby determined, then such variables are a set of state variables.

State vector: The behavior of a given system is described by n state variables, which can be
considered the n components of a vector x. Such a vector is called a state vector. A state vector
thus determines uniquely the system state x (t) for any t ≥ to once the state at t = to is given
and the input for t ≥ to is specified.

State space: The n dimensional space whose coordinate axes consist of the x1 , x2 , . . ., xn axes is
called the state space. Any state can be represented by a point in the state space.
Consider a dynamic system with multiple inputs and multiple outputs (MIMO).
The state of the system is described by a set of first order differential equations in terms of the
state variables x1 , x2 , . . ., xn and input variables u1 , u2 , . . ., ur as

ẋ1 (t) = f1 (x1 , x2 , . . . , xn ; u1 , u2 , . . . , ur ; t)

ẋ2 (t) = f2 (x1 , x2 , . . . , xn ; u1 , u2 , . . . , ur ; t)    (1)

ẋn (t) = fn (x1 , x2 , . . . , xn ; u1 , u2 , . . . , ur ; t)

The output of the system y1 (t) , y2 (t) ,. . .,ym (t) may be a function of input variables, state
variables and time. This may be described as

y1 (t) = g1 (x1 , x2 , . . . , xn ; u1 , u2 , . . . , ur ; t)
y2 (t) = g2 (x1 , x2 , . . . , xn ; u1 , u2 , . . . , ur ; t)    (2)

ym (t) = gm (x1 , x2 , . . . , xn ; u1 , u2 , . . . , ur ; t)


Definitions of u (t) , x (t) and y (t) would be

x (t) = [x1 (t)  x2 (t)  · · ·  xn (t)]ᵀ    state vector

u (t) = [u1 (t)  u2 (t)  · · ·  ur (t)]ᵀ    input vector

y (t) = [y1 (t)  y2 (t)  · · ·  ym (t)]ᵀ    output vector

Using the above vectors, equations (1) and (2) become

ẋ (t) = f (x, u, t)    state equation    (3)

y (t) = g (x, u, t)    output equation    (4)

Time invariant systems

If the vector functions of f and g do not involve time, then the system is said to be a time invariant
system
Then
ẋ (t) = f (x, u)    (5)

y (t) = g (x, u)    (6)

If equations 5 and 6 are linear then

ẋ (t) = Ax (t) + Bu (t)    (7)

y (t) = Cx (t) + Du (t)    (8)

where A, B, C and D are constant matrices


Time varying systems

If the system is time varying, then equations 3 and 4 result to

ẋ (t) = A (t) x (t) + B (t) u (t) (9)

y (t) = C (t) x (t) + D (t) u (t) (10)

where
A (t) – state matrix
B (t) – input matrix
C (t) – output matrix
D (t) – direct transmission matrix
Example 2.1
Obtain the state space representation of the system described by

d³y/dt³ + 6 d²y/dt² + 11 dy/dt + 6y = 6u
where y is the output and u is the input to the system
Solution
Knowledge of y (0) , ẏ (0) and ÿ (0) together with the input u (t) for t ≥ 0 determines completely the
future behaviour of the system, so y (t) , ẏ (t) and ÿ (t) form a suitable set of state variables.
Defining the state variables as

x1 = y

x2 = ẏ

x3 = ÿ

Then

ẋ1 = ẏ = x2

ẋ2 = ÿ = x3

ẋ3 = d³y/dt³ = −6x1 − 11x2 − 6x3 + 6u

By use of vector matrix notation, then the 3 first order differential equations can be combined into


one as

[ẋ1]   [ 0    1    0 ] [x1]   [0]
[ẋ2] = [ 0    0    1 ] [x2] + [0] u
[ẋ3]   [−6  −11   −6 ] [x3]   [6]

The output y is given by

y = [1  0  0] [x1  x2  x3]ᵀ
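The realization above can be checked numerically: the eigenvalues of the state matrix are the roots of s³ + 6s² + 11s + 6 = (s + 1)(s + 2)(s + 3). A sketch using numpy (an assumed tool, not part of the notes):

```python
import numpy as np

# Sketch: state matrix from the example above; its eigenvalues should be
# the system poles -1, -2, -3.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
print(np.round(np.linalg.eigvals(A), 3))
```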

Example 2.2
Write the state variable formulation of the parallel RLC network shown below, where the input is
i (t) = I sin ωt


Solution
Applying KCL at node A then
i = iR + iC + iL

I sin ωt = v (t)/R + C dv (t)/dt + (1/L) ∫ v (t) dt

Differentiating with respect to t and rearranging yields

d²v/dt² + (1/RC) dv/dt + (1/LC) v = (ω/C) I cos ωt

Choosing

v (t) = x1 (t)

ẋ1 (t) = x2

ẋ2 (t) = −(1/LC) x1 − (1/RC) x2 + (ω/C) I cos ωt
The vector-matrix differential form of the state equation can be written as


" # " #" # " #


ẋ1 0 1 x1 0
= 1 1
+ ω
ẋ2 − LC − RC x2 C I cos ωt

and the output " #


h i x
1
v= 1 0
x2

NB
Usually in the circuit problem, the current through the inductor and voltage across the capacitor
are chosen as the state variables

2.1 State Equations from Transfer Functions

Consider a system defined by

y (n) + a1 y (n−1) + · · · + an−1 ẏ + an y = b0 u(n) + b1 u(n−1) + · · · + bn−1 u̇ + bn u    (1)

where u is the input and y is the output


The equation can be written as

Y (s) / U (s) = (b0 sⁿ + b1 sⁿ⁻¹ + · · · + bn−1 s + bn) / (sⁿ + a1 sⁿ⁻¹ + · · · + an−1 s + an)    (2)

The state space representation of the system defined by equations (1) and (2) can be presented
in controllable canonical form, observable canonical form and diagonal canonical form

Controllable canonical form

The controllable canonical form of the state space representation is given by


    
 
[ẋ1  ]   [  0     1      0    · · ·    0 ] [x1  ]   [0]
[ẋ2  ]   [  0     0      1    · · ·    0 ] [x2  ]   [0]
[  ⋮ ] = [  ⋮     ⋮      ⋮             ⋮ ] [  ⋮ ] + [⋮] u    (3)
[ẋn−1]   [  0     0      0    · · ·    1 ] [xn−1]   [0]
[ẋn  ]   [−an  −an−1  −an−2  · · ·  −a1 ] [xn  ]   [1]

and


 
y = [bn − an b0   bn−1 − an−1 b0   · · ·   b1 − a1 b0 ] x + b0 u    (4)

Example 1.3
Consider the system given by

Y (s) s+3
= 2
U (s) s + 3s + 2

Obtain the controllable canonical form of the state space representation


Solution
n = 2 , b0 = 0 , b1 = 1 , b2 = 3 , a1 = 3 , a2 = 2
" # " #" # " #
ẋ1 0 1 x1 0
= + u
ẋ2 −a2 −a1 x2 1
" #" # " #
0 1 x1 0
= + u
−2 −3 x2 1
" #
h i x
1
y = b2 − a2 b0 b1 − a1 b0 + b0 u
x2
" #
h i x
1
= 3 1
x2

Observable canonical form

For the transfer function

Y(s)/U(s) = (b0 sⁿ + b1 sⁿ⁻¹ + · · · + bn−1 s + bn) / (sⁿ + a1 sⁿ⁻¹ + · · · + an−1 s + an)

The observable canonical form of the state space representation is given by


      
[ẋ1; ẋ2; ...; ẋn−1; ẋn] = [0 0 ··· 0 −an; 1 0 ··· 0 −an−1; ...; 0 0 ··· 0 −a2; 0 0 ··· 1 −a1][x1; x2; ...; xn−1; xn] + [bn − an b0; bn−1 − an−1 b0; ...; b2 − a2 b0; b1 − a1 b0]u   (5)

and

 
y = [0 0 · · · 0 1][x1; x2; ...; xn] + b0 u   (6)

Example 1.4
Obtain the observable canonical form of the state space representation for the system with the
transfer function

Y (s) s+3
= 2
U (s) s + 3s + 2

Solution
n = 2 , b0 = 0 , b1 = 1 , b2 = 3 , a1 = 3 , a2 = 2
" # " #" # " #
ẋ1 0 −a2 x1 b2 − a2 b0
= + u
ẋ2 1 −a1 x2 b1 − a1 b0
" #" # " #
0 −2 x1 3
= + u
1 −3 x2 1
" #
h i x
1
y= 0 1 + b0 u
x2
" #
h i x
1
= 0 1
x2

Diagonal canonical form

If the denominator polynomial has distinct roots then the transfer function can be written as


Y(s)/U(s) = (b0 sⁿ + b1 sⁿ⁻¹ + · · · + bn−1 s + bn) / ((s + p1)(s + p2) · · · (s + pn−1)(s + pn))   (7)

= b0 + c1/(s + p1) + c2/(s + p2) + · · · + cn−1/(s + pn−1) + cn/(s + pn)   (8)

the diagonal canonical form of the state space representation of the system is given by

     
[ẋ1; ẋ2; ...; ẋn−1; ẋn] = [−p1 0 ··· 0 0; 0 −p2 ··· 0 0; ...; 0 0 ··· −pn−1 0; 0 0 ··· 0 −pn][x1; x2; ...; xn−1; xn] + [1; 1; ...; 1; 1]u   (9)

and

 
y = [c1 c2 · · · cn−1 cn][x1; x2; ...; xn] + b0 u   (10)

Example 1.5
Obtain the diagonal canonical form of the state space representation for the system with the transfer
function

Y (s) s+3
= 2
U (s) s + 3s + 2

Solution

(s + 3)/(s² + 3s + 2) = (s + 3)/((s + 1)(s + 2)) = c1/(s + 1) + c2/(s + 2)

b0 = 0 , c1 = 2 , c2 = −1 , p1 = 1 , p2 = 2
" # " #" # " #
ẋ1 −p1 0 x1 1
= + u
ẋ2 0 −p2 x2 1
" #" # " #
−1 0 x1 1
= + u
0 −2 x2 1

36
PID Controllers Lecture Notes by A. M. Muhia

" #
h i x
1
y = c1 c2 + b0 u
x2
" #
h i x
1
= 2 −1
x2

Jordan Canonical Form

If the denominator polynomial involves multiple roots then the state space representation can be
written in Jordan canonical form.
For example, if the pi ‘s are different from one another except that the first three are equal i.e.
p1 = p2 = p3 , then the factored form of the transfer function becomes

Y(s)/U(s) = (b0 sⁿ + b1 sⁿ⁻¹ + · · · + bn−1 s + bn) / ((s + p1)³(s + p4) · · · (s + pn))   (11)

The partial fraction expansion of the equation becomes

Y(s)/U(s) = b0 + c1/(s + p1)³ + c2/(s + p1)² + c3/(s + p1) + c4/(s + p4) + · · · + cn/(s + pn)   (12)

The state space representation of the system in Jordan canonical form becomes

 

[ẋ1; ẋ2; ẋ3; ẋ4; ...; ẋn] = [−p1 1 0 0 ··· 0; 0 −p1 1 0 ··· 0; 0 0 −p1 0 ··· 0; 0 0 0 −p4 ··· 0; ...; 0 0 0 0 ··· −pn][x1; x2; x3; x4; ...; xn] + [0; 0; 1; 1; ...; 1]u   (13)

and

y = [c1 c2 · · · cn][x1; x2; ...; xn] + b0 u   (14)

The characteristic equation of a given system remains invariant under different forms of state variable representation; the same is true of the transfer function. The choice of the states is not unique.


Example 1.6
Consider the system
...
y + 6ÿ + 11ẏ + 6y = 6u

where y is the output and u is the input. Obtain the state space representation of the system
Solution I
Let
x1 = y

x2 = ẏ

x3 = ÿ

ẋ1 = x2

ẋ2 = x3

ẋ3 = −6x1 − 11x2 − 6x3 + 6u

These can be written in the vector matrix differential equation form as

      
[ẋ1; ẋ2; ẋ3] = [0 1 0; 0 0 1; −6 −11 −6][x1; x2; x3] + [0; 0; 6]u

y = [1 0 0][x1; x2; x3]

The characteristic equation becomes

|sI − A| = 0

s³ + 6s² + 11s + 6 = 0

Solution II

...
y + 6ÿ + 11ẏ + 6y = 6u

The transfer function is obtained as


Y(s)/U(s) = 6/(s³ + 6s² + 11s + 6) = 6/((s + 1)(s + 2)(s + 3))

By partial fraction expansion

Y(s)/U(s) = 3/(s + 1) − 6/(s + 2) + 3/(s + 3)

Y(s) = 3/(s + 1) U(s) − 6/(s + 2) U(s) + 3/(s + 3) U(s)

Defining
Y (s) = X1 (s) + X2 (s) + X3 (s)

where

X1(s) = 3/(s + 1) U(s)    ẋ1 = −x1 + 3u

X2(s) = −6/(s + 2) U(s)   ẋ2 = −2x2 − 6u

X3(s) = 3/(s + 3) U(s)    ẋ3 = −3x3 + 3u

The state space representation becomes

      
[ẋ1; ẋ2; ẋ3] = [−1 0 0; 0 −2 0; 0 0 −3][x1; x2; x3] + [3; −6; 3]u

y = [1 1 1][x1; x2; x3]

|sI − A| = 0

s³ + 6s² + 11s + 6 = 0
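The claim that both realizations describe the same system can be confirmed numerically; the sketch below (assuming NumPy; the helper tf is illustrative) compares the characteristic polynomials and transfer functions of Solutions I and II.

```python
import numpy as np

# Two realizations of y''' + 6y'' + 11y' + 6y = 6u from Solutions I and II
A1 = np.array([[0., 1., 0.], [0., 0., 1.], [-6., -11., -6.]])
B1 = np.array([[0.], [0.], [6.]])
C1 = np.array([[1., 0., 0.]])

A2 = np.diag([-1., -2., -3.])
B2 = np.array([[3.], [-6.], [3.]])
C2 = np.array([[1., 1., 1.]])

# Same characteristic polynomial s^3 + 6s^2 + 11s + 6 ...
assert np.allclose(np.poly(A1), np.poly(A2))

# ... and the same transfer function
def tf(A, B, C, s):
    return (C @ np.linalg.solve(s * np.eye(3) - A, B))[0, 0]

s = 1.0 + 1.0j
assert abs(tf(A1, B1, C1, s) - tf(A2, B2, C2, s)) < 1e-12
```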


2.2 Similarity Transformation

It has been stated that the choice of states is not unique for a given system. Suppose that there
exists a set of state variables

X = [x1 x2 · · · xn]ᵀ   (1)

We may take another set of state variables

Z = [z1 z2 · · · zn]ᵀ   (2)

So that a linear or similarity transformation exists


Let X = P Z i.e
Z = P −1 X (3)

where P is a non-singular transformational matrix


Differentiating equation (3) yields

Ż = P −1 Ẋ (4)

Using the general state space equations

Ẋ = Ax + Bu

y = Cx

Then equation (4) becomes

Ż = P −1 Ax + P −1 Bu (5)

From equation (3), X = P Z


Equation (5) becomes

Ż = P −1 AP z + P −1 Bu (6)

and
y = Cx = CP z (7)

Equations (6) and (7) can be written as

Ż = Âz + B̂u (8)


y = Ĉz (9)

Where  = P −1 AP, B̂ = P −1 B and Ĉ = CP


Hence, under similarity transformation, the transformed system can be represented in the vector-matrix differential form as

Ż = Âz + B̂u

and the output as

y = Ĉz

NB

1. The characteristic equations and hence the Eigen values of A and Â are invariant under similarity transformation

2. The transfer function remains invariant under similarity transformation
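Both invariance properties can be demonstrated numerically; a sketch assuming NumPy, using an arbitrary non-singular P chosen here purely for illustration:

```python
import numpy as np

# Similarity transformation: eigenvalues and the transfer function of
# (A, B, C) are invariant under A -> P^-1 A P, B -> P^-1 B, C -> C P
A = np.array([[0., 1.], [-2., -3.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
P = np.array([[1., 1.], [1., -3.]])      # any non-singular P will do

Pinv = np.linalg.inv(P)
Ah, Bh, Ch = Pinv @ A @ P, Pinv @ B, C @ P

assert np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(Ah)))

s = 1.0 + 1.0j
g  = (C  @ np.linalg.solve(s * np.eye(2) - A,  B))[0, 0]
gh = (Ch @ np.linalg.solve(s * np.eye(2) - Ah, Bh))[0, 0]
assert abs(g - gh) < 1e-12
```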

2.3 Eigen Values and Eigen Vectors

Given the matrix equation

Ax = λx   (1)

The values of the scalar λ for which non-trivial solutions exist are called Eigen values, and the corresponding solutions x ≠ 0 are called Eigen vectors.
Equation (1) can be written in the form
(λI − A) x = 0 , where I is the identity matrix
|λI − A| = 0 is the characteristic equation of A
The roots of the characteristic equation are called Eigen values of the matrix A
Corresponding to each Eigen value λi is a non-zero solution x = ei . This is called the Eigen vector of A corresponding to λi
Example 1.7
" #
4 1
Determine the Eigen values and Eigen vectors of Ax = λx where A =
3 2
Solution
|λI − A| = 0

(λ − 4) (λ − 2) − 3 = 0

λ1 = 5, λ2 = 1


For λ1 = 5 , Ax = λx becomes

[4 1; 3 2][x1; x2] = 5[x1; x2]

4x1 + x2 = 5x1

3x1 + 2x2 = 5x2

x1 = x2

The Eigen vector corresponding to λ1 = 5 is, in simplest form,

e1 = [x1; x2] = [1; 1]
For λ2 = 1 , Ax = λx becomes

[4 1; 3 2][x1; x2] = 1[x1; x2]

4x1 + x2 = x1

3x1 + 2x2 = x2

x2 = −3x1

The Eigen vector corresponding to λ2 = 1 is, in simplest form,

e2 = [x1; x2] = [1; −3]
We obtain the transformation matrix P from the Eigen vectors as follows

P = [e1 e2] = [1 1; 1 −3]

The matrix Â = P⁻¹AP is then obtained


" #−1 " #" # " #


−1 1 1 4 1 1 1 5 0
 = P AP = =
1 −3 3 2 1 −3 0 1

Under similarity transformation the characteristic equation does not change i.e.

|λI − A| = |λI − Â| = λ² − 6λ + 5
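The Eigen values and the diagonalization above can be confirmed numerically; a sketch assuming NumPy:

```python
import numpy as np

# Eigen values/vectors of A = [4 1; 3 2], and diagonalization via P
A = np.array([[4., 1.], [3., 2.]])
assert np.allclose(np.sort(np.linalg.eigvals(A)), [1., 5.])

P = np.array([[1., 1.], [1., -3.]])     # columns are e1 = [1;1], e2 = [1;-3]
assert np.allclose(np.linalg.inv(P) @ A @ P, np.diag([5., 1.]))
```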

2.4 Repeated Eigen values

In some cases, the matrix A will have repeated Eigen values. The Eigen vectors are evaluated as
follows
Example 1.8
 
Determine the Eigen values and Eigen vectors of Ax = λx where A = [3 −3 2; −1 5 −2; −1 3 0]
Solution
|λI − A| = 0


|λ − 3   3   −2; 1   λ − 5   2; 1   −3   λ| = 0

λ³ − 8λ² + 20λ − 16 = 0

λ1 = 4 λ2 = λ3 = 2

For λ1 = 4

    
[3 −3 2; −1 5 −2; −1 3 0][x1; x2; x3] = 4[x1; x2; x3]

This yields

x2 = −x1

and
x3 = −x1

and the simplest form of the corresponding Eigen vector as


 
e1 = [1; −1; −1]

For λ2 = λ3 = 2

    
[3 −3 2; −1 5 −2; −1 3 0][x1; x2; x3] = 2[x1; x2; x3]

The simultaneous equations obtained all reduce to

x1 − 3x2 + 2x3 = 0

We let x2 = α and x3 = β , where α and β are constants


The above equation becomes

x1 − 3α + 2β = 0

x1 = 3α − 2β

and the resulting Eigen vector as

e = α[3; 1; 0] + β[−2; 0; 1]

 
For β = 0 and α = 1 , e2 = [3; 1; 0]

For β = 1 and α = 0 , e3 = [−2; 0; 1]
The transformation matrix P is obtained as

P = [1 3 −2; −1 1 0; −1 0 1]

The matrix Â = P⁻¹AP is then obtained as


 −1   
1 3 −2 3 −3 2 1 3 −2
 = P −1 AP = −1 1 0 −1 5 −2 −1 1 0
    

−1 0 1 −1 3 0 −1 0 1
 
4 0 0
= 0 2 0
 

0 0 2

Checking the characteristic equation

|λI − A| = |λI − Â| = λ³ − 8λ² + 20λ − 16 = 0
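Even with the repeated Eigen value, this A diagonalizes because λ = 2 admits two independent Eigen vectors; a numerical sketch assuming NumPy:

```python
import numpy as np

# Repeated Eigen values: A still diagonalizes because lambda = 2 has two
# linearly independent eigenvectors
A = np.array([[3., -3., 2.], [-1., 5., -2.], [-1., 3., 0.]])
P = np.array([[1., 3., -2.], [-1., 1., 0.], [-1., 0., 1.]])

assert np.allclose(np.sort(np.linalg.eigvals(A)), [2., 2., 4.])
assert np.allclose(np.linalg.inv(P) @ A @ P, np.diag([4., 2., 2.]))
```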

2.5 Laplace Transform Technique

Recall I
The Laplace transform of a function f (t) is defined as the integral
L{f(t)} = ∫₀^∞ e^(−st) f(t) dt   (1)

Example 1.9
Obtain the Laplace transform of f(t) = e^(at)
Solution
L{f(t)} = ∫₀^∞ e^(−st) e^(at) dt = 1/(s − a)
Recall II

L{f′(t)} = sL{f(t)} − f(0)   (2)

L{f″(t)} = s²L{f(t)} − sf(0) − f′(0)   (3)

and so on for Laplace transforms of higher derivatives (you should be able to prove these with ease)
Example 1.10
Use the Laplace transform of second derivative to derive


L{cos(at)} = s/(s² + a²)
Solution
Let
f (t) = cos (at)

f′(t) = −a sin(at)

and

f″(t) = −a² cos(at)

f(0) = 1

f′(0) = 0

Using equation (3)

L{f″(t)} = s²L{f(t)} − sf(0) − f′(0)

L{−a² cos(at)} = s²L{cos(at)} − s − 0

(s² + a²) L{cos(at)} = s

L{cos(at)} = s/(s² + a²)

2.6 Transfer Function from state space equations


The transfer function is given by G(s) = Y(s)/U(s)
The general state space representation of any given systems is

ẋ = Ax + Bu (1)

y = Cx + Du (2)

Taking the Laplace transforms of (1) and (2)


sX (s) − X (0) = AX (s) + BU (s) (3)

Y (s) = CX (s) + DU (s) (4)

Assuming that the initial conditions of the system are zero i.e. X (0) = 0 and rearranging (3) then

X(s) = (sI − A)⁻¹ BU(s)   (5)

Substituting (5) into (4) yields


 
Y(s) = [C(sI − A)⁻¹B + D] U(s)   (6)

and the transfer function is obtained as

G(s) = Y(s)/U(s) = C(sI − A)⁻¹B + D   (7)

Example 1.11
Obtain the transfer function of the system whose state space representation is given as
" #" # " #
0 1 x1 0
ẋ = + u
−2 −3 x2 1
" #
h i x
1
y= 1 0
x2

Solution

G(s) = Y(s)/U(s) = C(sI − A)⁻¹B + D

where

A = [0 1; −2 −3] , B = [0; 1] , C = [1 0] and D = 0
" #
−1 1 s+3 1
(SI − A) = 3
s + 3s + 2 −2 s
" #" #
  1 h i s+3 1 0
C (SI − A)−1 B + D = 3 1 0
s + 3s + 2 −2 s 1

1
= 3
s + 3s + 2
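Example 1.11 can be checked numerically by evaluating C(sI − A)⁻¹B + D at a test frequency; a sketch assuming NumPy (the helper name G is illustrative):

```python
import numpy as np

# G(s) = C (sI - A)^-1 B + D for the system in Example 1.11
A = np.array([[0., 1.], [-2., -3.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
D = 0.0

def G(s):
    return (C @ np.linalg.solve(s * np.eye(2) - A, B))[0, 0] + D

s = 3.0
assert abs(G(s) - 1.0 / (s**2 + 3*s + 2)) < 1e-12   # 1/(s^2 + 3s + 2)
```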


2.7 Time Solution of state equations

Consider the homogenous case


ẋ (t) = ax (t) (1)

Approach A
Assume a solution of the form

x (t) = b0 + b1 t + b2 t2 + · · · + bk tk (2)

Substituting (46) into (45) yields

ẋ = b1 + 2b2t + · · · + kbk t^(k−1) = a[b0 + b1t + b2t² + · · · + bk tᵏ]   (3)

If the assumed solution is to be a true solution, equation (3) must hold true for any value of t
This implies that

b1 = ab0

b2 = (1/2) a² b0

bk = (1/k!) aᵏ b0
The value of b0 is determined by substituting t = 0 into equation (2) i.e.

x(0) = b0

The solution of x (t) can be written as


 
x(t) = (1 + at + (1/2)a²t² + · · · + (1/k!)aᵏtᵏ) x(0)

But

e^(at) = 1 + at + (1/2)a²t² + · · · + (1/k!)aᵏtᵏ
Therefore

x(t) = e^(at) x(0)   (4)

Approach B
Using Laplace transform of the homogeneous equation (1)


sX (s) − X (0) = aX (s)

X(s) = 1/(s − a) X(0)
Taking the inverse Laplace transform

x(t) = e^(at) x(0)

State Transition Matrix

The approach to the solution of homogeneous scalar equation can be extended to the solution of
homogeneous state equation

ẋ (t) = Ax (t) (5)

Taking Laplace transform

sX (s) − X (0) = AX (s)

X (s) = (sI − A)−1 X (0)

Taking the inverse Laplace transform

x(t) = L⁻¹{(sI − A)⁻¹} x(0)   (6)

x(t) = e^(At) x(0)   (7)

where

e^(At) = L⁻¹{(sI − A)⁻¹}

e^(At) is called the state transition matrix and contains all information about the free motions of the system described by (5)
Example 1.12
Obtain the state transition matrix of the following state equation
" # " #" #
ẋ1 0 1 x1
=
ẋ2 −2 −3 x2


Solution

e^(At) = L⁻¹{(sI − A)⁻¹}

(sI − A)⁻¹ = 1/(s² + 3s + 2) [s + 3 1; −2 s] = 1/((s + 1)(s + 2)) [s + 3 1; −2 s]

Expanding each entry in partial fractions,

(sI − A)⁻¹ = [2/(s+1) − 1/(s+2)   1/(s+1) − 1/(s+2); −2/(s+1) + 2/(s+2)   −1/(s+1) + 2/(s+2)]

e^(At) = L⁻¹{(sI − A)⁻¹} = [2e⁻ᵗ − e⁻²ᵗ   e⁻ᵗ − e⁻²ᵗ; −2e⁻ᵗ + 2e⁻²ᵗ   −e⁻ᵗ + 2e⁻²ᵗ]
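The analytic state transition matrix can be cross-checked numerically. The sketch below (assuming NumPy; since A has distinct eigenvalues, e^(At) is computed by eigendecomposition rather than a library matrix exponential) compares the two.

```python
import numpy as np

# Cross-check exp(At) for A = [0 1; -2 -3] against the analytic result
A = np.array([[0., 1.], [-2., -3.]])
t = 0.7

vals, V = np.linalg.eig(A)                      # A is diagonalizable
expAt = V @ np.diag(np.exp(vals * t)) @ np.linalg.inv(V)

e1, e2 = np.exp(-t), np.exp(-2 * t)
analytic = np.array([[2*e1 - e2,       e1 - e2],
                     [-2*e1 + 2*e2,   -e1 + 2*e2]])
assert np.allclose(expAt, analytic)
```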

Non Homogeneous state equations

Consider the non-homogeneous state equation

ẋ (t) = ax (t) + Bu (t) (8)

Multiplying equation (8) by e^(−at) on both sides

e^(−at) ẋ(t) = e^(−at) ax(t) + e^(−at) Bu(t)

e^(−at) [ẋ(t) − ax(t)] = e^(−at) Bu(t)

d/dt [e^(−at) x(t)] = e^(−at) Bu(t)   (9)
Integrating equation (9) between 0 and t results in

e^(−at) x(t) = ∫₀ᵗ e^(−aτ) Bu(τ) dτ + x(0)

x(t) = e^(at) x(0) + e^(at) ∫₀ᵗ e^(−aτ) Bu(τ) dτ   (10)


From equation (10)

• The first term on the right hand side is the response to the initial conditions

• The second term is the response to the input u (t)

Extending the same approach to the nonhomogeneous vector state equation yields

x(t) = e^(At) x(0) + ∫₀ᵗ e^(A(t−τ)) Bu(τ) dτ   (11)

The solution of x (t) is the sum of a term consisting of the transition of the initial state and a term
arising from the input vector
Example 1.13
Obtain the time response of the following system
" # " #" # " #
ẋ1 0 1 x1 0
= + u
ẋ2 −2 −3 x2 1

where u (t) is a unit step input


Solution

x(t) = e^(At) x(0) + ∫₀ᵗ e^(A(t−τ)) Bu(τ) dτ

From the previous example

e^(At) = [2e⁻ᵗ − e⁻²ᵗ   e⁻ᵗ − e⁻²ᵗ; −2e⁻ᵗ + 2e⁻²ᵗ   −e⁻ᵗ + 2e⁻²ᵗ]

e^(A(t−τ)) Bu(τ) = e^(A(t−τ)) [0; 1] = [e^(−(t−τ)) − e^(−2(t−τ)); −e^(−(t−τ)) + 2e^(−2(t−τ))]

∫₀ᵗ e^(A(t−τ)) Bu(τ) dτ = [1/2 − e⁻ᵗ + (1/2)e⁻²ᵗ; e⁻ᵗ − e⁻²ᵗ]


" #
At
1
2 − exp−t + 12 exp−2t
x (t) = exp x (0) +
exp−t − exp−2t
" #" # " #
2 exp−t − exp−2t exp−t −2 exp−2t x1 (0) 1
2 − exp−t + 12 exp−2t
= +
−2 exp−t +2 exp−2t − exp−t +2 exp−2t x2 (0) exp−t − exp−2t
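A quick numerical sanity check of the zero-state part of this result (a sketch assuming NumPy; x_zs is an illustrative helper name): the analytic response must satisfy ẋ = Ax + Bu with u(t) = 1.

```python
import numpy as np

# The zero-state part of x(t) must satisfy x' = Ax + Bu with u(t) = 1
A = np.array([[0., 1.], [-2., -3.]])
B = np.array([[0.], [1.]])

def x_zs(t):
    """Analytic zero-state step response from the worked example."""
    return np.array([[0.5 - np.exp(-t) + 0.5 * np.exp(-2*t)],
                     [np.exp(-t) - np.exp(-2*t)]])

t, h = 0.9, 1e-6
deriv = (x_zs(t + h) - x_zs(t - h)) / (2 * h)    # central difference
assert np.allclose(deriv, A @ x_zs(t) + B, atol=1e-6)
```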

2.8 Controllability and Observability

These tell us whether it is possible to control all the states of the system completely by a suitable choice of input, and whether it is possible to reconstruct the states of a system from its inputs and outputs

Controllability

For the linear time invariant system

ẋ (t) = Ax (t) + Bu (t)

y (t) = Cx (t) (1)

The system is said to be controllable if it is possible to find some input u (t) that will transfer
the initial state of the system x (0) to the origin of the state space, x (t0 ) = 0 with t0 finite. The
solution of the state equation yields

x(t) = Φ(t) x(0) + ∫₀ᵗ Φ(t − τ) Bu(τ) dτ   (2)

where Φ(t) = e^(At)

For the system to be controllable

x(t0) = Φ(t0) x(0) + ∫₀^(t0) Φ(t0 − τ) Bu(τ) dτ = 0   (3)

with finite t0
A linear time invariant continuous time system is completely controllable iff the rank of the controllability matrix M is equal to n

M = [B AB A²B . . . Aⁿ⁻¹B]   (4)

The rank of a matrix A is the maximum number of linearly independent columns of A; that is, it
is the order of the largest non singular matrix contained in A. This implies that the controllability
matrix M must be non singular for the system to be completely controllable.


If a system is not completely controllable, it implies that it has one or more natural modes that
cannot be affected by the input directly or indirectly.
Example 1.14
Determine whether the system represented by the given state space is controllable
" # " #" # " #
ẋ1 0.5 0 x1 0
= + u (t)
ẋ2 0 −2 x2 1

Solution
The controllability matrix is given by

M = [B AB]

" #" # " #


0.5 0 0 0
AB = =
0 −2 1 −2
" #
0 0
M=
1 −2

0 0
M = =0

1 −2

The matrix is singular hence the system is uncontrollable


This is more obvious if we write the two differential equations separately as

ẋ1 = 0.5x1

ẋ2 = −2x2 + u (t)

It is evident that whereas x2 can be changed by u(t), the state x1 is unaffected by our choice of the input, since it is not coupled either directly to the input or to the state x2. Hence the mode x1(t) = x1(0)e^(0.5t) is uncontrollable
On the other hand if we had

ẋ1 = 0.5x1 + x2

ẋ2 = −2x2 + u (t)

The controllability matrix is obtained as


" #
0 1
M=
1 −2

0 1
M = = −1

1 −2

The matrix is nonsingular hence the system is controllable.


x1 can be controlled indirectly through x2
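Both cases can be checked with a small rank test; a sketch assuming NumPy (the helper name ctrb_rank is illustrative):

```python
import numpy as np

def ctrb_rank(A, B):
    """Rank of the controllability matrix M = [B AB ... A^(n-1)B]."""
    n = A.shape[0]
    M = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    return np.linalg.matrix_rank(M)

B = np.array([[0.], [1.]])
A1 = np.array([[0.5, 0.], [0., -2.]])   # x1 decoupled: uncontrollable
A2 = np.array([[0.5, 1.], [0., -2.]])   # x1 reached through x2: controllable
assert ctrb_rank(A1, B) == 1
assert ctrb_rank(A2, B) == 2
```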

Observability

The linear time invariant system is said to be observable if the initial condition x(0) can be determined from the output function y(t) for 0 < t < t1, where t1 is finite

y (t) = Cx (t)

= CΦ(t) x(0) + C ∫₀ᵗ Φ(t − τ) Bu(τ) dτ   (5)

Thus given u(t) and y(t) for 0 < t < t1 with t1 being some finite value, the system is observable if equation (5) can be solved for x(0)
The system is observable if the observability matrix N is nonsingular i.e. the rank of N is equal to n

N = [C; CA; ...; CAⁿ⁻¹]

Example 1.15
Consider the system represented by
" # " #" # " #
ẋ1 0.5 0 x1 0
= + u (t)
ẋ2 0 −2 x2 1
" #
h i x
1
y (t) = 0 1
x2

Determine whether the system is observable


Solution
The observability matrix is given by


" #
C
N=
CA
" # " #
h i 0.5 0 0 1
CA = 0 1 =
0 −2 0 −2

0 1
N = =0

0 −2

The matrix is singular and therefore the system is unobservable


The state x1 does not affect the output nor does it affect the state x2 which is coupled to the
output
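The observability test can be sketched numerically as well (assuming NumPy; the helper name obsv_rank is illustrative):

```python
import numpy as np

def obsv_rank(A, C):
    """Rank of the observability matrix N = [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    N = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(N)

A = np.array([[0.5, 0.], [0., -2.]])
C = np.array([[0., 1.]])
assert obsv_rank(A, C) == 1              # x1 never reaches the output
assert obsv_rank(A, np.array([[1., 1.]])) == 2   # both states visible
```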

2.9 Pole Placement Design Technique

State space design enables the design of a system having the desired closed loop poles or desired
characteristic equation
It also enables inclusion of initial conditions if necessary
Pole placement design is based on the state model of the system. We assume that all the state
variables are measurable and are available for feedback
State model equations

ẋ (t) = Ax (t) + Bu (t) (1)

The plant input u (t) is made a function of the states of the form

u (t) = f (x (t)) (2)

Equation (2) is called the control rule or control law. In pole placement design, the control law is
specified as a linear function of the states of the form

u (t) = −kx (t) (3)


This control law allows the poles of the closed-loop system to be placed in any desirable location and is expressed as

u (t) = −k1 x1 (t) − k2 x2 (t) − . . . − kn xn (t) (4)

The design problem is the specification of the desired root locations of the systems characteristic
equations and the calculations of the gains ki to yield these desired root locations.
A necessary and sufficient condition that the closed-loop poles can be placed at any arbitrary
location in the s-plane is that the system must be completely state controllable.

Determination of the Matrix K

1. Using Direct substitution method


If the system is of low order, direct substitution of matrix K into the desired characteristic polynomial may be simpler.
e.g. if n = 3 and the desired poles are µ1 , µ2 and µ3 then

K = [k1 k2 k3]   (5)

Desired characteristic polynomial

(s − µ1 ) (s − µ2 ) (s − µ3 )

This is also obtained as

|[sI − A + BK]|

We equate

(s − µ1 ) (s − µ2 ) (s − µ3 ) = |[sI − A + BK]|

to obtain the values of ki


Example 1.16
Consider the system


ẋ (t) = Ax (t) + Bu (t)

where

A = [0 1 0; 0 0 1; −1 −5 −6]

B = [0; 0; 1]
The system uses the state feedback control law u (t) = −kx (t). It is desired to have closed
loop poles at s = −2 ± j4 , s = −10.
Determine the state feedback gain matrix K
Solution
We first check for controllability of the system

M = [B AB A²B]

|M| = |0 0 1; 0 1 −6; 1 −6 31| = −1

The matrix is non singular hence the system is completely state controllable
Next we solve forK
Let

K = [k1 k2 k3]

|sI − A + BK| = |s −1 0; 0 s −1; 1 + k1 5 + k2 s + 6 + k3|

= s³ + (6 + k3)s² + (5 + k2)s + (1 + k1) .........(i)

The desired characteristic equation is


(s + 2 − j4) (s + 2 + j4) (s + 10)

= s³ + 14s² + 60s + 200.........(ii)

Comparing equations (i) and (ii)

k3 = 8 , k2 = 55 , k1 = 199

K = [199 55 8]

u = −199x1 − 55x2 − 8x3
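The computed gain can be verified by checking the eigenvalues of A − BK; a sketch assuming NumPy:

```python
import numpy as np

# Verify the gain K = [199 55 8] places the closed-loop poles as desired
A = np.array([[0., 1., 0.], [0., 0., 1.], [-1., -5., -6.]])
B = np.array([[0.], [0.], [1.]])
K = np.array([[199., 55., 8.]])

poles = np.linalg.eigvals(A - B @ K)
target = np.array([-10., -2 + 4j, -2 - 4j])
assert np.allclose(np.sort_complex(poles), np.sort_complex(target))
```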

2. Using Ackermann’s Formula


Cayley Hamilton Theorem
Every square matrix satisfies its own characteristic equation i.e. if the characteristic equation
of the nth order square matrix is

|λI − A| = 0

λⁿ + α1λⁿ⁻¹ + α2λⁿ⁻² + · · · + αn−1λ + αn = 0   (6)

Then

Aⁿ + α1Aⁿ⁻¹ + α2Aⁿ⁻² + · · · + αn−1A + αnI = 0   (7)

To obtain the inverse of the matrix A we premultiply equation (7) by A⁻¹

Aⁿ⁻¹ + α1Aⁿ⁻² + α2Aⁿ⁻³ + · · · + αn−1I + αnA⁻¹ = 0

A⁻¹ = (−1/αn)[Aⁿ⁻¹ + α1Aⁿ⁻² + α2Aⁿ⁻³ + · · · + αn−1I]   (8)
αn

Example 1.17
Determine the inverse of the matrix


 
A = [1 2 7; 4 2 3; 1 2 1]
using Cayley Hamilton Theorem
Solution

|λI − A| = 0

|λ − 1   −2   −7; −4   λ − 2   −3; −1   −2   λ − 1| = 0

λ³ − 4λ² − 16λ − 36 = 0

A⁻¹ = (−1/(−36))[A² − 4A − 16I] = (1/36)[A² − 4A − 16I]

= (1/36)[−4 12 −8; −1 −6 25; 6 0 −6]
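The Cayley–Hamilton computation can be verified numerically (a sketch assuming NumPy): np.poly gives the characteristic coefficients of A, and the identity then yields the inverse.

```python
import numpy as np

# Cayley-Hamilton inverse: A^-1 = (1/36)(A^2 - 4A - 16I) for this A
A = np.array([[1., 2., 7.], [4., 2., 3.], [1., 2., 1.]])

assert np.allclose(np.poly(A), [1., -4., -16., -36.])   # characteristic poly
A_inv = (A @ A - 4 * A - 16 * np.eye(3)) / 36.0
assert np.allclose(A_inv, np.linalg.inv(A))
```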

Consider the state equation


ẋ = Ax + Bu (9)

Where the state feedback control law

u (t) = −kx (t)

We assume that the system is completely state controllable and that the desired closed loop
poles are at
s = µ1, s = µ2 . . . . . . s = µn

Equation (9) becomes

ẋ = [A − Bk] x (10)

Defining Â = A − BK
Then the desired characteristic equation is

|sI − Â| = 0


(s − µ1 ) (s − µ2 ) . . . . . . (s − µn ) = 0

sⁿ + α1sⁿ⁻¹ + α2sⁿ⁻² + · · · + αn−1s + αn = 0

Using the Cayley Hamilton theorem, which states that Â satisfies its own characteristic equation, we obtain

 
Φ(Â) = Ân + α1Âⁿ⁻¹ + α2Âⁿ⁻² + · · · + αn−1Â + αnI = 0   (11)

To derive Ackermann’s formula, we consider the case when n = 3


Considering that
I = I   (12)

Â = A − BK   (13)

Â² = [A − BK]²

= [A − BK][A − BK] = A² − ABK − BK[A − BK]

= A² − ABK − BKÂ   (14)

Â³ = [A − BK]²[A − BK]

= [A² − ABK − BKÂ][A − BK]

= A³ − A²BK − ABKÂ − BKÂ²   (15)

Multiplying both sides of equations (12), (13), (14) and (15) by α3 , α2 , α1 and α0 respectively, where α0 = 1

α3 I = α3 I

α2 Â = α2 [A − Bk]


α1Â² = α1[A² − ABK − BKÂ]

Â³ = A³ − A²BK − ABKÂ − BKÂ²

Adding the two sides of the equations

Â³ + α1Â² + α2Â + α3I = A³ + α1A² + α2A + α3I − α2BK − α1ABK − α1BKÂ − A²BK − ABKÂ − BKÂ²   (16)
Since

 
Â³ + α1Â² + α2Â + α3I = Φ(Â) = 0

And

A³ + α1A² + α2A + α3I = Φ(A) ≠ 0

then

Φ (A) − α2 BK − α1 ABK − α1 BK Â − A2 BK − ABK Â − BK Â2 = 0 (17)

Φ(A) = B[α2K + α1KÂ + KÂ²] + AB[α1K + KÂ] + A²BK

Φ(A) = [B AB A²B][α2K + α1KÂ + KÂ²; α1K + KÂ; K]   (18)

[B AB A²B]⁻¹ Φ(A) = [α2K + α1KÂ + KÂ²; α1K + KÂ; K]   (19)
Premultiplying both sides of (19) by [0 0 1] and rearranging we obtain

K = [0 0 1][B AB A²B]⁻¹ Φ(A)   (20)

For an arbitrary positive integer n, then

K = [0 0 . . . 1][B AB . . . Aⁿ⁻¹B]⁻¹ Φ(A)   (21)


Example 1.18
Consider the system

ẋ (t) = Ax (t) + Bu (t)

where

A = [0 1 0; 0 0 1; −1 −5 −6]

B = [0; 0; 1]
The system uses the state feedback control law u (t) = −kx (t). It is desired to have closed
loop poles at
s = −2 ± j4 , s = −10.
Determine the state feedback gain matrix K using Ackermann’s formula
Solution
The desired characteristic equation is given by

(s + 2 − j4) (s + 2 + j4) (s + 10) = s³ + 14s² + 60s + 200

Φ(A) = A³ + 14A² + 60A + 200I

where

 
A = [0 1 0; 0 0 1; −1 −5 −6]

Φ(A) = [199 55 8; −8 159 7; −7 −43 117]

[B AB A²B] = [0 0 1; 0 1 −6; 1 −6 31]

K = [0 0 1][B AB A²B]⁻¹ Φ(A)

= [0 0 1][0 0 1; 0 1 −6; 1 −6 31]⁻¹[199 55 8; −8 159 7; −7 −43 117]

K = [199 55 8]
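Ackermann's formula is easy to evaluate numerically; a sketch assuming NumPy:

```python
import numpy as np

# Ackermann's formula: K = [0 ... 1] M^-1 phi(A)
A = np.array([[0., 1., 0.], [0., 0., 1.], [-1., -5., -6.]])
B = np.array([[0.], [0.], [1.]])

M = np.hstack([B, A @ B, A @ A @ B])                  # controllability matrix
phiA = (np.linalg.matrix_power(A, 3) + 14 * A @ A     # desired polynomial
        + 60 * A + 200 * np.eye(3))                   # s^3+14s^2+60s+200 in A
K = np.array([[0., 0., 1.]]) @ np.linalg.inv(M) @ phiA
assert np.allclose(K, [[199., 55., 8.]])
```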

3. Using Transformation Matrix T


Suppose that the system defined by

ẋ = Ax + Bu

and the control law is given by u (t) = −kx (t)


The matrix K can be obtained as follows

(a) Check the controllability condition for the system. If the system is completely controllable, then
(b) Determine the characteristic polynomial of the matrix A

|sI − A| = sⁿ + a1sⁿ⁻¹ + a2sⁿ⁻² + · · · + an−1s + an

Determine the values of a1 , a2 . . . an


(c) Determine the transformation matrix T that can transform the system state equation into controllable canonical form. If the system is already in controllable canonical form then T = I
Else
T = MW

Where M is the controllability matrix and


 
W = [an−1 an−2 . . . a1 1; an−2 an−3 . . . 1 0; . . . ; a1 1 . . . 0 0; 1 0 . . . 0 0]

(d) Write the desired characteristic polynomial

(s − µ1)(s − µ2) . . . (s − µn) = sⁿ + α1sⁿ⁻¹ + α2sⁿ⁻² + · · · + αn−1s + αn


Determine the values of α1 , α2 . . . αn


(e) Obtain the matrix K as
K = [αn − an   αn−1 − an−1   · · ·   α1 − a1] T⁻¹   (22)

Example 1.19
Consider the system

ẋ (t) = Ax (t) + Bu (t)

where

A = [0 1 0; 0 0 1; −1 −5 −6]

B = [0; 0; 1]
The system uses the state feedback control law u (t) = −kx (t). It is desired to have closed
loop poles at
s = −2 ± j4 , s = −10.
Determine the state feedback gain matrix K using transformation matrix T
Solution
Check the system for controllability

M = [B AB A²B]

|M| = |0 0 1; 0 1 −6; 1 −6 31| = −1

The matrix is nonsingular hence the system is completely state controllable


Obtain the characteristic polynomial of the matrix A

|sI − A| = |s −1 0; 0 s −1; 1 5 s + 6| = s³ + 6s² + 5s + 1

Comparing with s³ + a1s² + a2s + a3, then

a1 = 6 , a2 = 5 , a3 = 1

The desired characteristic polynomial is given by

(s + 2 − j4) (s + 2 + j4) (s + 10) = s³ + 14s² + 60s + 200

Comparing this with s³ + α1s² + α2s + α3, then

α1 = 14 , α2 = 60 , α3 = 200

Since the system is already in controllable canonical form then T = I


The matrix K is obtained as

K = [α3 − a3   α2 − a2   α1 − a1] T⁻¹

K = [199 55 8]

2.10 State Observer Design

In the pole placement design approach, we assumed that all state variables are available for feedback.
In practice, however, not all the state variables are available for feedback. Then we need to estimate
unavailable state variables. A state observer estimates the state variables based on the measurement
of output and control variables
If a state observer estimates all state variables of the system, regardless of whether some state
variables are available for direct measurement, it is called a full order state observer. A necessary
and sufficient condition for observer design is that the system must be completely state observable.
Consider the system
ẋ = Ax + Bu

y = Cx (1)


The state variables can be estimated from the measured output and control variables

where
G - state observer gain matrix
ỹ - estimated output
x̃ - estimated state variables

For the observer

ẋ̃ = Ax̃ + G(y − ỹ) + Bu

ỹ = Cx̃   (2)

But y = Cx and ỹ = Cx̃
Equation (2) becomes

ẋ̃ = Ax̃ + GC(x − x̃) + Bu   (3)

Subtracting (3) from (1) we obtain

ẋ − ẋ̃ = A[x − x̃] − GC[x − x̃]

y − ỹ = C[x − x̃]   (4)

Taking x − x̃ = x̂ , ẋ − ẋ̃ = ẋ̂ and y − ỹ = ŷ , equation (4) becomes

ẋ̂ = [A − GC]x̂

ŷ = Cx̂   (5)


We can choose appropriate Eigen values of [A − GC] to enable placement of poles of the closed
loop system at desired locations
The control design problem is to determine the (n × 1) matrix G
where

 
G = [g1; g2; ...; gn]

Determining the matrix G

1. Using Direct Substitution method


Similar to the case of pole placement, if the system is of low order, then direct substitution of the matrix G into the desired characteristic polynomial may be simpler. e.g. if x is a 3-vector then G can be written as

G = [g1; g2; g3]

Substituting this G into the desired characteristic polynomial

|sI − (A − GC)| = (s − µ1 ) (s − µ2 ) (s − µ3 )

By equating the coefficients of the powers on both sides of this equation, we can obtain the values
of g1 , g2 and g3
Example 1.20
Consider the system
ẋ = Ax + Bu

y = Cx

Where
" #
0 20.6
A=
1 0
" #
0
B=
1

h i
C= 0 1


Design a full order state observer assuming that the desired Eigen values of the observer matrix
are µ1 = −10 µ2 = −10
Solution
Test the system for observability
Observability matrix

N = [C; CA] = [0 1; 1 0]

|N| = −1

The system is completely state observable and determination of observer gain matrix is possible
Let

G = [g1; g2]

|sI − (A − GC)| = |s   −20.6 + g1; −1   s + g2| = s² + g2s − 20.6 + g1 = 0

The desired characteristic equation is given by

(s − µ1) (s − µ2) = (s + 10) (s + 10) = 0

s² + 20s + 100 = 0

Comparing the two equations

g2 = 20 and g1 = 120.6

G = [120.6; 20]
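The observer gain can be checked numerically: the error dynamics are governed by A − GC, so its characteristic polynomial should be the desired (s + 10)². A sketch assuming NumPy:

```python
import numpy as np

# Observer error dynamics are governed by A - GC; its characteristic
# polynomial should be the desired s^2 + 20s + 100 = (s + 10)^2
A = np.array([[0., 20.6], [1., 0.]])
C = np.array([[0., 1.]])
G = np.array([[120.6], [20.]])

assert np.allclose(np.poly(A - G @ C), [1., 20., 100.])
```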

Using Ackermann’s Formula


The observer gain matrix is given by

 −1  
C
0
 CA  0
   
G = Φ (A) 
 ..   .. 
  
 .  .
CAn−1 1

Where


Φ(A) = Aⁿ + α1Aⁿ⁻¹ + α2Aⁿ⁻² + · · · + αn−1A + αnI

Example 1.21
Determine the observer gain matrix for example 1.20 using Ackermann’s formula
Solution

" #−1 " #


C 0
G = Φ (A)
CA 1
" #
0 20.6
A=
1 0

h i
C= 0 1

The desired characteristic equation is given by

(s − µ1) (s − µ2) = (s + 10) (s + 10) = s² + 20s + 100

Φ(A) = A² + 20A + 100I

= [0 20.6; 1 0]² + 20[0 20.6; 1 0] + 100[1 0; 0 1]

= [120.6 412; 20 120.6]

G = [120.6 412; 20 120.6][0 1; 1 0][0; 1] = [120.6; 20]

Using Transformation Matrix Q


Following the same procedure as in deriving the state feedback matrix K then

1. Check the observability condition for the system. If the system is completely observable, then

2. Determine the characteristic polynomial of the matrix A

|sI − A| = sⁿ + a1sⁿ⁻¹ + a2sⁿ⁻² + · · · + an−1s + an


Determine the values of a1 , a2 . . . an

3. Determine the transformation matrix Q that can transform the system state equation into observable canonical form. If the system is already in observable canonical form then Q = I
Else
Q = (W N)⁻¹

Where N is the observability matrix and

 
W = [an−1 an−2 . . . a1 1; an−2 an−3 . . . 1 0; . . . ; a1 1 . . . 0 0; 1 0 . . . 0 0]

4. Write the desired characteristic polynomial

(s − µ1 ) (s − µ2 ) . . . . . . (s − µn ) = sn + α1 sn−1 + α2 sn−2 + · · · + αn−1 s + αn

Determine the values of α1 , α2 . . . αn

5. Obtain the observer gain matrix G as


 
\[ G = Q \begin{bmatrix} \alpha_n - a_n \\ \alpha_{n-1} - a_{n-1} \\ \vdots \\ \alpha_1 - a_1 \end{bmatrix} \qquad (6) \]

Example 1.22
Determine the observer gain matrix for example 1.20 using the transformation matrix method
Solution
Observability has already been tested
" #
0 20.6
A=
1 0

h i
C= 0 1

\[ |sI - A| = \begin{vmatrix} s & -20.6 \\ -1 & s \end{vmatrix} = s^2 - 20.6 \]


Comparing this with s2 + a1 s + a2


a1 = 0, a2 = −20.6
The system is already in state observable form, therefore
" #
1 0
Q=I=
0 1

The desired characteristic equation is

(s − µ1 ) (s − µ2 ) = (s + 10) (s + 10) = 0

s2 + 20s + 100 = 0

From this we obtain the values of α1 = 20 , α2 = 100


" #
α2 − a2
G=Q
α1 − a1
" #" # " #
1 0 120.6 120.6
= =
0 1 20 20


3 Non-Linear Control
Linear systems are systems which obey the principles of superposition and proportionality.
If an input x1 produces an output y1, then by proportionality an input αx1 produces an output αy1.
Similarly, if an input x2 produces an output y2, then by superposition an input αx1 + βx2 produces an output αy1 + βy2.
Linear systems can be analysed with standard mathematical tools such as the Laplace and Fourier transforms, both mathematically and graphically.
Non-Linear systems exhibit peculiar behavior

1. They do not obey the law of superposition. As a result standard test signals lose their meaning
as the same signal gives different outputs at different operating points

2. The stability of linear systems depends only on the root location and is independent of the
initial state. In non-linear systems the stability depends on root location as well as initial
condition and type of input.

3. Non-linear systems exhibit self sustained oscillations of fixed frequency and amplitude called
Limit Cycles . Linear systems do not have the feature.

4. Linear systems are described by linear differential equations and it is usually possible to obtain
closed form solutions for linear systems. In general this is not possible for non-linear systems

5. The analysis and design methods developed for linear systems, e.g. root locus and pole placement, apply to all linear systems.
There is no analysis and design method that is universally applicable to all non-linear systems.

Unique properties of non-linear systems


1. Multiple equilibrium point
An equilibrium point is a point where the system can stay forever without moving

2. Limit cycles
These are oscillations of fixed amplitudes and fixed period without external excitations

3. Bifurcation
The change in the qualitative behavior of non-linear systems e.g change in number of equi-
librium points, number of limit cycles, stability of equilibrium points etc as a result of quan-
titative change in system parameters

4. Chaos
Small changes in the initial condition result in large and often unpredictable changes in the
system output


5. Finite escape time


The situation where the state of an unstable non-linear system can go to infinity in finite time

Examples of non-linearities

Classified into two

1. Inherent (natural) non-linearities


those that naturally come with the systems hardware and motion e.g dead zone, saturation,
backlash, hysteresis etc

2. Intentional (artificial non-linearities)


those introduced into the system by the designer e.g relays, quantizers

3.1 Equilibrium Points

Consider an autonomous system without external inputs


ẋ = f (x , t)

where x is the state vector with components x1 , x2 , . . . , xn and f (x , t) is an n-dimensional vector whose elements are functions of x1 , x2 , . . . , xn and t.
We assume that the system has a unique solution starting at a given initial state x (0) = x0.
Suppose we have an autonomous system in which all states have settled down to constant values (not necessarily zero). Such a system is said to be in equilibrium.
The state xc is an equilibrium state of the system iff
ẋc = f (xc , t) = 0
i.e. since the states have settled to constant values,
ẋc = 0
Example
Find the equilibrium state of the autonomous LTI system described by the state equations
" # " #" #
ẋ1 (t) −1 2 x1 (t)
=
ẋ2 (t) 1 −1 x2 (t)
Solution
At equilibrium ẋc = 0
0 = −x1 + 2x2

0 = x1 − x2
which gives a solution


Figure 30: Common Non-linearities

x1 = x2 = 0
hence the equilibrium state for this system is at the origin. i.e
(x1c , x2c ) = (0 , 0)
In general a LTI autonomous system
ẋ = Ax
has a single equilibrium point at the origin if the matrix A is non-singular.
If A is singular, the system has infinitely many equilibrium points.
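This can be confirmed numerically for the example above: A is non-singular, so Ax = 0 has only the trivial solution (an illustrative NumPy check):

```python
import numpy as np

A = np.array([[-1.0,  2.0],
              [ 1.0, -1.0]])

# det(A) != 0, so A x = 0 has only the trivial solution:
# the origin is the unique equilibrium point
assert abs(np.linalg.det(A)) > 1e-12
x_eq = np.linalg.solve(A, np.zeros(2))
print(x_eq)
```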


Example
Find the equilibrium state of the non-linear system described by the state equations

 
\[ \dot{x}_1 = x_2 - x_1\left(x_1^2 + x_2^2\right) \]

\[ \dot{x}_2 = -x_1 - x_2\left(x_1^2 + x_2^2\right) \]

Solution
Let the equilibrium point be (x1e , x2e )
At equilibrium point
ẋ1e = ẋ2e = 0
Hence we can write
\[ 0 = x_{2e} - x_{1e}\left(x_{1e}^2 + x_{2e}^2\right) \qquad (a) \]

\[ 0 = -x_{1e} - x_{2e}\left(x_{1e}^2 + x_{2e}^2\right) \qquad (b) \]

Multiplying (a) by x1e and (b) by x2e gives

\[ 0 = x_{1e}x_{2e} - x_{1e}^2\left(x_{1e}^2 + x_{2e}^2\right) \qquad (c) \]

\[ 0 = -x_{1e}x_{2e} - x_{2e}^2\left(x_{1e}^2 + x_{2e}^2\right) \qquad (d) \]

Adding (c) and (d) gives

\[ \left(x_{1e}^2 + x_{2e}^2\right)^2 = 0 \]

whose only solution is

\[ x_{1e} = x_{2e} = 0 \]
Example
Find the equilibrium state of the non-linear system described by the state equations

\[ \dot{x}_1 = x_2 \]

\[ \dot{x}_2 = -x_1 - x_1^2 - x_2 \]

Solution
At equilibrium

\[ 0 = x_{2e} \]

\[ 0 = -x_{1e} - x_{1e}^2 - x_{2e} \]

\[ x_{2e} = 0, \qquad x_{1e} = 0 \ \text{or} \ -1 \]


Hence the equilibrium state for the system are

(x1e , x2e ) = (0 , 0) and (−1 , 0)


In general a non-linear system may have multiple equilibrium points
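For the last example the two equilibria can also be located numerically, e.g. with a root finder started from different initial guesses (a sketch using scipy.optimize.fsolve; the starting points are arbitrary):

```python
from scipy.optimize import fsolve

def f(x):
    # state equations of the example: x1_dot = x2, x2_dot = -x1 - x1^2 - x2
    x1, x2 = x
    return [x2, -x1 - x1**2 - x2]

# different starting guesses converge to different equilibrium points,
# which for this system are (0, 0) and (-1, 0)
eq_a = fsolve(f, [0.5, 0.5])
eq_b = fsolve(f, [-1.5, 0.5])
print(eq_a, eq_b)
```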

3.2 Linearization of non linear state space model and local stability

Based on Taylor series expansion about the operating (equilibrium) point, consider a scalar function f (h). For small x, Taylor's approximation theorem states that

\[ f(h + x) = f(h) + \frac{\partial f(h)}{\partial h}x + \frac{1}{2!}\frac{\partial^2 f(h)}{\partial h^2}x^2 + \cdots \]

\[ \simeq f(h) + \frac{\partial f(h)}{\partial h}x + \text{higher order terms} \]

If h = 0 then

\[ f(x) = f(0) + \left.\frac{\partial f(h)}{\partial h}\right|_{h=0} x + \text{higher order terms} \]
Now consider the system

ẋ = f (x)

Using Taylor’s Theorem for small changes of state x around the origin we can approximate the
system as

\[ \dot{x} = f(x) \simeq f(0) + \left.\frac{\partial f(x)}{\partial x}\right|_{x=0} x + \text{higher order terms} \]

In practice a linear approximation of the system is obtained by neglecting the higher order terms.
Since it is assumed that the equilibrium state of the system is at the origin, f (0) = 0, hence

\[ \dot{x} = \left.\frac{\partial f(x)}{\partial x}\right|_{x=0} x \]
This equation is used to approximate nonlinear system with a linear one
Since Taylor’s Theorem is only valid for small variations around a point , the model obtained is
only useful in analyzing the stability in the neighborhood of the equilibrium point at the origin i.e
local stability
For nonlinear system
ẋ = f (x , u)


y = g (x , u)
with
n - state variables
r - inputs
m - outputs

   
ẋ1 f1 (x1 , x2 , . . . xn ; u1 u2 . . . ur )
ẋ2   f2 (x1 , x2 , . . . xn ; u1 u2 . . . ur )
   
 
 .. = .. 
   
 .   . 
ẋn fn (x1 , x2 , . . . xn ; u1 u2 . . . ur )
   
y1 g1 (x1 , x2 , . . . xn ; u1 u2 . . . ur )
 y2   g2 (x1 , x2 , . . . xn ; u1 u2 . . . ur )
   

 . = .. 
 .   
 .   . 
ym gm (x1 , x2 , . . . xn ; u1 u2 . . . ur )
The system is linearized to

\[ \delta\dot{x} = A\,\delta x + B\,\delta u \]

\[ \delta y = C\,\delta x + D\,\delta u \]

Where A , B , C , and D are obtained as follows

\[ A = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\ \vdots & & & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix} \]

evaluated at the equilibrium point x = x0 , u = u0

\[ B = \begin{bmatrix} \frac{\partial f_1}{\partial u_1} & \frac{\partial f_1}{\partial u_2} & \cdots & \frac{\partial f_1}{\partial u_r} \\ \frac{\partial f_2}{\partial u_1} & \frac{\partial f_2}{\partial u_2} & \cdots & \frac{\partial f_2}{\partial u_r} \\ \vdots & & & \vdots \\ \frac{\partial f_n}{\partial u_1} & \frac{\partial f_n}{\partial u_2} & \cdots & \frac{\partial f_n}{\partial u_r} \end{bmatrix} \]

\[ C = \begin{bmatrix} \frac{\partial g_1}{\partial x_1} & \frac{\partial g_1}{\partial x_2} & \cdots & \frac{\partial g_1}{\partial x_n} \\ \vdots & & & \vdots \\ \frac{\partial g_m}{\partial x_1} & \frac{\partial g_m}{\partial x_2} & \cdots & \frac{\partial g_m}{\partial x_n} \end{bmatrix} \]


 
\[ D = \begin{bmatrix} \frac{\partial g_1}{\partial u_1} & \frac{\partial g_1}{\partial u_2} & \cdots & \frac{\partial g_1}{\partial u_r} \\ \vdots & & & \vdots \\ \frac{\partial g_m}{\partial u_1} & \frac{\partial g_m}{\partial u_2} & \cdots & \frac{\partial g_m}{\partial u_r} \end{bmatrix} \]

Example
Consider a system described by the differential equation

\[ \ddot{y} = 1 - \frac{u^2}{y^2} \]
Let the state variable of the state equation be x1 = y , x2 = ẏ. Obtain

1. Non linear state space model

2. Linearized modes about equilibrium point given by u = 1 , i.e (u0 = 1) , (x10 = 1) , (x20 = 0)

Solution
x1 = y x2 = ẏ

\[ \dot{x}_1 = \dot{y} = x_2, \qquad \dot{x}_2 = \ddot{y} = 1 - \frac{u^2}{x_1^2} \]

Therefore the non linear state space model is

\[ \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} x_2 \\ 1 - \frac{u^2}{x_1^2} \end{bmatrix} \]

which we want to linearize


u0 = 1 we obtain x10 and x20
NB: At equilibrium points derivatives are equal to zero i.e
ẏ = ÿ = 0 or ẋ1 = ẋ2 = 0

ẏ = x2 ⇒ 0 = x20

2
ÿ = 0 = 1 − u2
x1

x10 = 1

x20 = 0

Suppose that y = x1 → linear


" #
h i x1
y= 1 0
x2


To obtain A and B:

\[ \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} x_2 \\ 1 - \frac{u^2}{x_1^2} \end{bmatrix} = \begin{bmatrix} f_1(x_1, x_2, u) \\ f_2(x_1, x_2, u) \end{bmatrix} \]

\[ A = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} \end{bmatrix}_{x_{10},\, x_{20}} = \begin{bmatrix} 0 & 1 \\ 2 & 0 \end{bmatrix} \]

\[ B = \begin{bmatrix} \frac{\partial f_1}{\partial u} \\ \frac{\partial f_2}{\partial u} \end{bmatrix}_{u_0 = 1} = \begin{bmatrix} 0 \\ -2 \end{bmatrix} \]

The linearized model is therefore

\[ \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 2 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ -2 \end{bmatrix} u \]

\[ y = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \]
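The analytic Jacobians can be cross-checked with finite differences about the equilibrium (an illustrative check; eps is an arbitrary step size):

```python
import numpy as np

def f(x, u):
    # non-linear state equations: x1_dot = x2, x2_dot = 1 - u^2 / x1^2
    return np.array([x[1], 1.0 - u**2 / x[0]**2])

x0, u0 = np.array([1.0, 0.0]), 1.0
eps = 1e-6

# central-difference Jacobians about the equilibrium (x0, u0)
A = np.column_stack([(f(x0 + eps*e, u0) - f(x0 - eps*e, u0)) / (2*eps)
                     for e in np.eye(2)])
B = ((f(x0, u0 + eps) - f(x0, u0 - eps)) / (2*eps)).reshape(2, 1)
print(A)   # close to [[0, 1], [2, 0]]
print(B)   # close to [[0], [-2]]
```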
Stability of Linearized system

The Stability of Linearized system is analyzed as follows

1. If the eigenvalues of the linearized system lie strictly in the left half plane, the equilibrium point of the non linear system is asymptotically stable

2. If at least one of the eigenvalues of the linearized system lies in the right half plane, the equilibrium point of the non linear system is unstable

3. If the eigenvalues of the linearized system lie in the left half plane, but at least one of them lies on the imaginary axis, no conclusion can be drawn about the stability of the equilibrium point of the non linear system

Example
Consider the nonlinear system

\[ \dot{x}_1 = x_2^2 + x_1 \cos x_2 \]

\[ \dot{x}_2 = x_2 + x_1^2 + x_1 \sin x_2 \]


obtain the linearized approximation of the system and check for the stability of the equilibrium
point
Solution
\[ \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} x_2^2 + x_1 \cos x_2 \\ x_2 + x_1^2 + x_1 \sin x_2 \end{bmatrix} = \begin{bmatrix} f_1(x_1, x_2) \\ f_2(x_1, x_2) \end{bmatrix} \]

\[ A = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} \end{bmatrix}_{x_{10},\, x_{20}} = \begin{bmatrix} \cos x_2 & 2x_2 - x_1 \sin x_2 \\ 2x_1 + \sin x_2 & 1 + x_1 \cos x_2 \end{bmatrix}_{x_{10},\, x_{20}} \]

At equilibrium

\[ \dot{x}_1 = 0 = x_{20}^2 + x_{10} \cos x_{20} \]

\[ \dot{x}_2 = 0 = x_{20} + x_{10}^2 + x_{10} \sin x_{20} \]

which gives x10 = 0 , x20 = 0, so

\[ A = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} \]
Eigenvalues


\[ |\lambda I - A| = 0 \]

\[ \begin{vmatrix} \lambda - 1 & 0 \\ -1 & \lambda - 1 \end{vmatrix} = 0 \]

\[ (\lambda - 1)^2 = 0 \qquad \lambda_1 = \lambda_2 = 1 \]

Eigenvalues lie on the right half plane


Therefore the equilibrium point of the non linear system is unstable
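The same conclusion follows directly from a numerical eigenvalue computation on the linearized matrix (a quick NumPy check):

```python
import numpy as np

# linearized system matrix at the equilibrium (0, 0) from the example
A = np.array([[1.0, 0.0],
              [1.0, 1.0]])

eigs = np.linalg.eigvals(A)
unstable = bool(np.any(eigs.real > 0))
print(eigs.real, unstable)   # both eigenvalues at +1, so the equilibrium is unstable
```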

3.3 Describing functions

Describing functions method is a frequency response method which can be used to approximately
analyse and predict non linear behaviour


Figure 31: Non-linear System

Consider the system below


The describing function is developed by applying a sinusoid to the non linearity i.e

m (t) = M sin ωt

Assumptions

1. There is only one non linear element in the system

2. The non linear component is time invariant

3. Corresponding to a sinusoidal input m (t) = M sin ωt , only the fundamental component in the output n (t) has to be considered

4. The non linearity has odd symmetry

The steady state n (t) is periodic and in general non-sinusoidal. Thus n (t) can be represented by a Fourier series as

\[ n(t) = \frac{A_0}{2} + \sum_{n=1}^{\infty} A_n \cos n\omega t + \sum_{n=1}^{\infty} B_n \sin n\omega t \]

where

\[ A_n = \frac{2}{T} \int_{t_0}^{T} n(t) \cos n\omega t \; dt, \qquad B_n = \frac{2}{T} \int_{t_0}^{T} n(t) \sin n\omega t \; dt \]

Where T is the period of the input sinusoid


Non linearity has odd symmetry → A0 = 0
Considering only the fundamental component of the Fourier series

n (t) = A1 cos ωt + B1 sin ωt


\[ = N_1 \sin(\omega t + \phi) \]

where

\[ N_1 = \sqrt{A_1^2 + B_1^2}, \qquad \phi = \tan^{-1}\!\left(\frac{A_1}{B_1}\right) \]

\[ N_1 \sin(\omega t + \phi) = N_1 \angle \phi \qquad (a) \]

From (a) it can be seen that n (t) can be approximated as a sinusoid of the same frequency as m (t), but not of the same magnitude and phase.
The non linearity gain is given by

\[ N(M, \omega) = \frac{N_1 \angle \phi}{M} \]
The equivalent gain is called the describing function and can be represented as follows

Figure 32: Describing function

The describing function N (M , ω) in general is a function of both amplitude and frequency of the
input sinusoid

Derivation of Describing Functions

1. Cubic non linearity


Let n (t) = m³ (t). The input is assumed to be a sinusoid, i.e.

\[ m(t) = M \sin \omega t \]

\[ n(t) = M^3 \sin^3 \omega t = M^3 \sin \omega t \, \sin^2 \omega t = M^3 \sin \omega t \left(\frac{1 - \cos 2\omega t}{2}\right) \]


\[ = \frac{M^3}{2}\left[\sin \omega t - \sin \omega t \cos 2\omega t\right] \]

\[ = \frac{M^3}{2}\left[\sin \omega t - \frac{1}{2}\left(\sin 3\omega t - \sin \omega t\right)\right] \]

\[ = \frac{M^3}{4}\left[3 \sin \omega t - \sin 3\omega t\right] \]

Ignoring the third harmonic,

\[ n(t) = \frac{3M^3}{4} \sin \omega t = \frac{3M^3}{4} \angle 0 \]

\[ N(M, \omega) = \frac{N_1 \angle \phi}{M} = \frac{3M^3}{4M} = \frac{3M^2}{4} \]
2. Determine the describing function for the 2 position relay non linearity shown in Figure 33
below

Figure 33: 2-Position Relay

For a sinusoid input m (t) = M sin ωt , the output of the relay is as shown below


Figure 34: Output of 2-Position Relay

\[ A_1 = \frac{2}{T} \int_0^T n(t) \cos \omega t \; d\omega t \]

\[ = \frac{1}{\pi} \int_0^{\pi} v \cos \omega t \; d\omega t - \frac{1}{\pi} \int_{\pi}^{2\pi} v \cos \omega t \; d\omega t \]

\[ = \frac{v}{\pi} \Big[\sin \omega t\Big]_0^{\pi} - \frac{v}{\pi} \Big[\sin \omega t\Big]_{\pi}^{2\pi} = 0 \]

\[ B_1 = \frac{2}{T} \int_0^T n(t) \sin \omega t \; d\omega t \]

\[ = \frac{1}{\pi} \int_0^{\pi} v \sin \omega t \; d\omega t - \frac{1}{\pi} \int_{\pi}^{2\pi} v \sin \omega t \; d\omega t \]


\[ = \frac{v}{\pi} \Big[-\cos \omega t\Big]_0^{\pi} + \frac{v}{\pi} \Big[\cos \omega t\Big]_{\pi}^{2\pi} \]

\[ = -\frac{v}{\pi}\left[-1 - 1\right] + \frac{v}{\pi}\left[1 + 1\right] = \frac{4v}{\pi} \]

\[ n(t) = \frac{4v}{\pi} \sin \omega t = \frac{4v}{\pi} \angle 0 \]

\[ N(M, \omega) = \frac{N_1 \angle \phi}{M} = \frac{4v}{\pi M} \]
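The relay result N = 4v/(πM) is easy to confirm numerically by computing the fundamental Fourier coefficients of the relay output on a fine grid (the amplitude and relay level below are illustrative):

```python
import numpy as np

M, v = 2.0, 1.5                       # input amplitude and relay level (illustrative)
wt = np.linspace(0.0, 2*np.pi, 200000, endpoint=False)
m = M * np.sin(wt)
n = np.where(m >= 0.0, v, -v)         # 2-position relay output

dwt = 2*np.pi / wt.size               # fundamental coefficients by numerical integration
A1 = np.sum(n * np.cos(wt)) * dwt / np.pi
B1 = np.sum(n * np.sin(wt)) * dwt / np.pi
N = np.hypot(A1, B1) / M
print(N, 4*v / (np.pi*M))             # the two values agree
```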
Example
Determine the describing function for the saturation non linearity shown in Figure 35 below

Figure 35: Saturation

Solution
n (t) is given by

\[ n(t) = \begin{cases} -ks & m(t) < -s \\ k\,m(t) & -s \le m(t) \le s \\ ks & m(t) > s \end{cases} \]

For a sinusoidal input m (t) = M sin ωt , the output of the saturation non linearity is as shown in Figure 36 and is


Figure 36: Output of Saturation

given by



\[ n(t) = \begin{cases} kM \sin \omega t & 0 \le \omega t \le \theta_1 \\ ks & \theta_1 \le \omega t \le \theta_2 \\ kM \sin \omega t & \theta_2 \le \omega t \le \theta_3 \\ -ks & \theta_3 \le \omega t \le \theta_4 \\ kM \sin \omega t & \theta_4 \le \omega t \le 2\pi \end{cases} \]

A1 = 0 (odd symmetry)

\[ B_1 = \frac{2}{T} \int_0^T n(t) \sin \omega t \; d\omega t \]


\[ = \frac{1}{\pi} \int_0^{2\pi} n(t) \sin \omega t \; d\omega t = \frac{4}{\pi} \int_0^{\pi/2} n(t) \sin \omega t \; d\omega t \]

\[ = \frac{4}{\pi} \int_0^{\theta_1} kM \sin^2 \omega t \; d\omega t + \frac{4}{\pi} \int_{\theta_1}^{\pi/2} ks \sin \omega t \; d\omega t \]

\[ = \frac{4kM}{\pi} \int_0^{\theta_1} \frac{1 - \cos 2\omega t}{2} \; d\omega t + \frac{4ks}{\pi} \int_{\theta_1}^{\pi/2} \sin \omega t \; d\omega t \]

\[ = \frac{2kM}{\pi} \left[\omega t - \frac{1}{2}\sin 2\omega t\right]_0^{\theta_1} - \frac{4ks}{\pi} \Big[\cos \omega t\Big]_{\theta_1}^{\pi/2} \]

\[ = \frac{2kM}{\pi} \left(\theta_1 - \frac{1}{2}\sin 2\theta_1\right) + \frac{4ks}{\pi} \cos \theta_1 = \frac{4}{\pi} \left[\frac{kM\theta_1}{2} - \frac{kM \sin 2\theta_1}{4} + ks \cos \theta_1\right] \]

but M sin θ1 = s, so

\[ \theta_1 = \sin^{-1}\left(\frac{s}{M}\right), \qquad \sin \theta_1 = \frac{s}{M}, \qquad \cos \theta_1 = \sqrt{1 - \frac{s^2}{M^2}} \]

\[ = \frac{4}{\pi} \left[\frac{kM}{2} \sin^{-1}\left(\frac{s}{M}\right) - \frac{2kM}{4}\frac{s}{M}\sqrt{1 - \frac{s^2}{M^2}} + ks\sqrt{1 - \frac{s^2}{M^2}}\right] \]

\[ = \frac{4}{\pi} \left[\frac{kM}{2} \sin^{-1}\left(\frac{s}{M}\right) + \frac{ks}{2}\sqrt{1 - \frac{s^2}{M^2}}\right] = \frac{2kM}{\pi} \left[\sin^{-1}\left(\frac{s}{M}\right) + \frac{s}{M}\sqrt{1 - \frac{s^2}{M^2}}\right] \]

The describing function is given by

\[ N(M, \omega) = \frac{B_1 \angle 0}{M} \]


\[ = \frac{2k}{\pi} \left[\sin^{-1}\left(\frac{s}{M}\right) + \frac{s}{M}\sqrt{1 - \frac{s^2}{M^2}}\right] \]
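The closed-form result can again be checked against a direct numerical Fourier analysis of the saturated sinusoid (illustrative values for k, s and M):

```python
import numpy as np

k, s, M = 2.0, 1.0, 3.0                  # slope, saturation level, input amplitude (illustrative)
wt = np.linspace(0.0, 2*np.pi, 200000, endpoint=False)
n = k * np.clip(M * np.sin(wt), -s, s)   # saturation non-linearity output

dwt = 2*np.pi / wt.size
B1 = np.sum(n * np.sin(wt)) * dwt / np.pi
N_numeric = B1 / M

N_formula = (2*k/np.pi) * (np.arcsin(s/M) + (s/M)*np.sqrt(1.0 - (s/M)**2))
print(N_numeric, N_formula)              # should agree closely
```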

Example

Figure 37: Deadzone


\[ n(\omega t) = \begin{cases} 0 & 0 < \omega t < \theta_1 \\ k\left[M \sin \omega t - A\right] & \theta_1 < \omega t < \frac{\pi}{2} \end{cases} \]

\[ B_1 = \frac{4}{\pi} \int_{\theta_1}^{\pi/2} k\left[M \sin \omega t - A\right] \sin \omega t \; d\omega t \]

\[ = \frac{4k}{\pi} \int_{\theta_1}^{\pi/2} \left[M \sin^2 \omega t - A \sin \omega t\right] d\omega t \]


\[ = \frac{4k}{\pi} \int_{\theta_1}^{\pi/2} \left[\frac{M}{2}\left(1 - \cos 2\omega t\right) - A \sin \omega t\right] d\omega t \]

\[ = \frac{4k}{\pi} \left[\frac{M}{2}\left(\omega t - \frac{1}{2}\sin 2\omega t\right) + A \cos \omega t\right]_{\theta_1}^{\pi/2} \]

\[ = \frac{4k}{\pi} \left[\frac{M}{2}\left(\frac{\pi}{2} - \theta_1\right) + \frac{M}{4}\sin 2\theta_1 - A \cos \theta_1\right] \]

But sin θ1 = A/M, so θ1 = sin⁻¹(A/M), cos θ1 = √(M² − A²)/M, and (M/4) sin 2θ1 = (A/2) cos θ1. Hence

\[ B_1 = \frac{4k}{\pi} \left[\frac{M}{2}\left(\frac{\pi}{2} - \theta_1\right) - \frac{A}{2}\cos \theta_1\right] = \frac{2kM}{\pi} \left[\frac{\pi}{2} - \sin^{-1}\left(\frac{A}{M}\right) - \frac{A}{M}\sqrt{1 - \frac{A^2}{M^2}}\right] \]

\[ N(M, \omega) = \frac{B_1}{M} = \frac{2k}{\pi} \left[\frac{\pi}{2} - \sin^{-1}\left(\frac{A}{M}\right) - \frac{A}{M}\sqrt{1 - \frac{A^2}{M^2}}\right] \]

Example





\[ n(\omega t) = \begin{cases} -k\left[M - A\right] & 0 < \omega t < \theta_1 \\ k\left[M \sin \omega t - A\right] & \theta_1 < \omega t < \frac{\pi}{2} \\ k\left[M - A\right] & \frac{\pi}{2} < \omega t < \theta_3 \\ k\left[M \sin \omega t + A\right] & \theta_3 < \omega t < \frac{3\pi}{2} \\ -k\left[M - A\right] & \frac{3\pi}{2} < \omega t < 2\pi \end{cases} \]

\[ A_1 = \frac{2}{2\pi} \int_0^{2\pi} n(\omega t) \cos \omega t \; d\omega t = \frac{1}{\pi} \int_0^{2\pi} n(\omega t) \cos \omega t \; d\omega t \]


Figure 38: Backlash

\[ = \frac{1}{\pi} \int_0^{\theta_1} -k\left[M - A\right] \cos \omega t \; d\omega t + \frac{1}{\pi} \int_{\theta_1}^{\pi/2} k\left[M \sin \omega t - A\right] \cos \omega t \; d\omega t \]
\[ + \frac{1}{\pi} \int_{\pi/2}^{\theta_3} k\left[M - A\right] \cos \omega t \; d\omega t + \frac{1}{\pi} \int_{\theta_3}^{3\pi/2} k\left[M \sin \omega t + A\right] \cos \omega t \; d\omega t \]
\[ + \frac{1}{\pi} \int_{3\pi/2}^{2\pi} -k\left[M - A\right] \cos \omega t \; d\omega t \]


Evaluating the integrals term by term,

\[ A_1 = -\frac{k[M-A]}{\pi}\sin\theta_1 - \frac{kA}{\pi} + \frac{kA}{\pi}\sin\theta_1 + \frac{kM}{2\pi}\left(\frac{\pi}{2} - \theta_1\right) + \frac{k[M-A]}{\pi}\sin\theta_3 \]
\[ - \frac{k[M-A]}{\pi} - \frac{kA}{\pi} - \frac{kA}{\pi}\sin\theta_3 + \frac{kM}{2\pi}\left(\frac{3\pi}{2} - \theta_3\right) + \frac{k[M-A]}{\pi} \]

3.4 Phase plane Technique

Phase plane analysis is a graphical method used to study second order dynamic systems
The basic idea is to generate a two dimensional plane called the phase plane , motion trajecto-
ries corresponding to various initial conditions and then to examine the qualitative feature of the
trajectories
The phase plane is a plane whose coordinate axes are a time-dependent variable of the system and the time derivative of that variable, so that a locus (trajectory) plotted in the plane depicts the evolution of the variable in time.
An illustration of the phase plane trajectory derived from a conventional time response function is
shown below

Figure 39: Phase Plane Trajectory

A point on the trajectory determines the state of the system at a particular instant
As a graphical method , it allows you to visualize what goes on in a non linear system starting


from various initial conditions without having to solve the non linear equation analytically.
The fundamental disadvantage of the method is that it is restricted to second order (or first order)
systems because the graphical study of higher order systems is computationally and geometrically
complex

Phase Portraits

The phase plane method is concerned with the graphical study of second order autonomous systems
described by

ẋ1 = f1 (x1 , x2 ) (1)


ẋ2 = f2 (x1 , x2 )

Where x1 and x2 are the states of the system and f1 and f2 are non linear functions of the states
Geometrically, the state space of the system is a plane having x1 and x2 as co ordinates
Given a set of initial conditions x (0) = x0 , equation (1) defines a solution x (t). With time t varied from zero to infinity, the solution x (t) can be represented geometrically as a curve in the phase plane. Such a curve is called a phase plane trajectory.
A family of phase plane trajectories corresponding to various initial conditions is called a phase portrait of the system.
Example 1
Mass-spring system

Figure 40: Mass-spring system

\[ a = \frac{d^2 x}{dt^2}, \qquad \sum f = 0 \quad \Rightarrow \quad ma + kx = 0 \]


\[ m \frac{d^2 x}{dt^2} + kx = 0 \]

Taking m = k = 1 for simplicity,

\[ \ddot{x} + x = 0 \qquad (1) \]

Assuming the mass is initially at rest at displacement x0 , the characteristic equation is

\[ m^2 + 1 = 0 \quad \Rightarrow \quad m = \pm j \quad (\alpha = 0, \ \beta = 1) \]

\[ x(t) = e^{\alpha t}\left[A \cos \beta t + B \sin \beta t\right] = A \cos t + B \sin t \]

Applying the initial conditions x (0) = x0 and ẋ (0) = 0 gives A = x0 and B = 0, so

\[ x(t) = x_0 \cos t \]
\[ \dot{x}(t) = -x_0 \sin t \qquad (2) \]

Eliminating t from the above equations we obtain the equation of the trajectories

\[ x^2 + \dot{x}^2 = x_0^2 \]

This represents a circle in the phase plane corresponding to different initial conditions , circles of
different radii can be obtained
Plotting these circles on the phase plane , we obtain a phase portrait of the mass-spring system
Once the phase portrait of a system is obtained , the nature of the system response corresponding
to various initial conditions is directly displayed on the phase plane. In the above examples, the
state trajectories neither converge on the origin nor diverge at infinity. They simply circle around
the origin, indicating the marginal nature of the system’s stability
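The circular trajectories can be reproduced by integrating the state equations numerically; along any solution x² + ẋ² stays at x0² (a sketch using scipy.integrate.solve_ivp):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # x_ddot + x = 0 written as first-order states (m = k = 1)
    return [y[1], -y[0]]

x0 = 2.0
sol = solve_ivp(rhs, (0.0, 10.0), [x0, 0.0], rtol=1e-9, atol=1e-9)

# every point of the trajectory satisfies x^2 + x_dot^2 = x0^2
r2 = sol.y[0]**2 + sol.y[1]**2
print(r2.min(), r2.max())   # both stay close to x0**2 = 4
```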
Example 2
A torque T (t)is applied to the system which causes a deflection θ (t). Obtain the system equation
and use this equation to draw the phase portraits
Solution


Figure 41: Phase plane plots for the mass-spring system

Figure 42: Example 2

\[ J\frac{d^2\theta(t)}{dt^2} + D\frac{d\theta(t)}{dt} + K\theta(t) = T(t) \]

Assuming the system is unexcited, i.e. T (t) = 0,

\[ \frac{d^2\theta(t)}{dt^2} + \frac{D}{J}\frac{d\theta(t)}{dt} + \frac{K}{J}\theta(t) = 0 \]

Comparing to the standard second order differential equation

\[ \frac{d^2\theta(t)}{dt^2} + 2\xi\omega_n\frac{d\theta(t)}{dt} + \omega_n^2\theta(t) = 0 \]

\[ 2\xi\omega_n = \frac{D}{J}, \qquad \omega_n^2 = \frac{K}{J} \]

For an undamped system the equation becomes

\[ \frac{d^2\theta(t)}{dt^2} + \omega_n^2\theta(t) = 0 \]

with characteristic equation m² + ωn² = 0, so m = ±jωn.


The solution is that of simple harmonic motion of a conservative system, i.e.

\[ \theta(t) = R\sin(\omega_n t + \phi) \]

\[ \frac{d\theta(t)}{dt} = \omega_n R\cos(\omega_n t + \phi) \]

\[ \left[\theta(t)\right]^2 = R^2\sin^2(\omega_n t + \phi) \]

\[ \left[\frac{1}{\omega_n}\frac{d\theta(t)}{dt}\right]^2 = R^2\cos^2(\omega_n t + \phi) \]

\[ \left[\frac{1}{\omega_n}\frac{d\theta(t)}{dt}\right]^2 + \left[\theta(t)\right]^2 = R^2 \]

The equation above describes a phase portrait which is a set of ellipses with semi-axes R and Rωn.
Normalizing the velocity, i.e. plotting (1/ωn)·dθ(t)/dt rather than dθ(t)/dt, results in a portrait which is a family of circles of radius R.

Figure 43: Phase plane plots

R1 , R2 and R3 specify different initial conditions


When a finite damping ratio exists in the system, the equation becomes

\[ \frac{d^2\theta(t)}{dt^2} + 2\xi\omega_n\frac{d\theta(t)}{dt} + \omega_n^2\theta(t) = 0 \]
Characteristic equation

\[ m^2 + 2\xi\omega_n m + \omega_n^2 = 0 \]


\[ m = \frac{-2\xi\omega_n \pm \sqrt{4\xi^2\omega_n^2 - 4\omega_n^2}}{2} = -\xi\omega_n \pm j\omega_n\sqrt{1 - \xi^2} \qquad (\xi < 1) \]

The phase portraits can be constructed for different values of ξ


e.g for ξ = 0.7

Figure 44: Phase plane plots

Example 3

\[ \dot{x} = -4x + x^3 \]

There are three singular points defined by −4x + x³ = 0, i.e.

x = 0 , x = −2 , and x = 2
The phase portrait of the system consists of a single trajectory as shown below

Figure 45: Example 3 Phase portrait

The arrows in the figure indicate the direction of motion (determined by sign of ẋ at that point)
The equilibrium point x = 0 is stable while the other two are unstable
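The stability of each singular point follows from the sign of f'(x) = −4 + 3x² there, which a few lines of code make explicit (an illustrative check):

```python
def f(x):
    return -4.0*x + x**3

def classify(xe):
    # local stability from the sign of f'(x) = -4 + 3 x^2 at the singular point
    slope = -4.0 + 3.0*xe**2
    return "stable" if slope < 0.0 else "unstable"

for xe in (-2.0, 0.0, 2.0):
    print(xe, f(xe), classify(xe))   # f(xe) = 0 at every singular point
```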


3.5 Lyapunov Stability Criterion

Lyapunov stability methods are applicable to linear and non linear as well as time-invariant and
time-varying systems.
Consider a two dimensional state space
If the system has an initial state x0 , the path traced by the state x (t) for t ≥ 0 is known as the
system trajectory

Figure 46: System trajectory

For a two dimensional state space

\[ |x| = \sqrt{x_1^2 + x_2^2} \]

For an n-dimensional state space

\[ |x| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} \]

Suppose that state vector x lies within a hyper-spherical region of radius R i.e |x| < R
In a two dimensional system , the hyper-spherical region would be a circle of radius R , for a three
dimensional system , hyper-spherical region would be a sphere etc
Defining two regions δ and ε with R such that δ < ε < R

DEFINITIONS

Lyapunov Stability
A system is said to be stable (Lyapunov stable) if for every radius ε there exists a radius δ < ε so that every trajectory starting within the region of radius δ will always remain within radius ε with continuously increasing time
Asymptotic Stability
If in addition to satisfying the Lyapunov stability condition , the trajectory starting with a radius
δ converges to the origin with continuing time, the system is said to be asymptotically stable
Asymptotic Stability in the Large (global stability)


Figure 47: Lyapunov stability regions

If a system is Lyapunov stable and all the trajectories starting anywhere in the state space converge
to the origin with continuing time the system is said to be asymptotically stable in the large
illustration

Figure 48: Lyapunov stability

1. Lyapunov stable trajectory

2. asymptotically stable

3. unstable


Lyapunov Functions

Let us define a scalar function v (x) i.e a scalar function of a vector x

Definition

Positive definite function


A scalar function v (x) is positive definite in a region if

v (x) > 0 for x ≠ 0
v (x) = 0 for x = 0

example: v (x) = x1² + x2²

Negative definite function
A scalar function v (x) is negative definite if −v (x) is positive definite
example: v (x) = −x2² − (3x1 + 2x2)²

Positive semi definite function
A scalar function v (x) is positive semi definite in a region if

v (x) ≥ 0 for x ≠ 0
v (x) = 0 for x = 0

example: v (x) = (x1 + x2)²


Negative semi definite function
A scalar function v (x) is negative semi definite if −v (x) is positive semi definite

There is no general way to tell if a function is positive or negative definite. However, if v (x)is in
quadratic form , we can use Sylvester’s criterion to determine if v (x) is positive definite.

The Quadratic form

The quadratic form is given by

\[ v(x) = x^T p x = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix} \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1n} \\ p_{12} & p_{22} & \cdots & p_{2n} \\ \vdots & & & \vdots \\ p_{1n} & p_{2n} & \cdots & p_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \]


where p is the real symmetric matrix


Sylvester's criterion states that the necessary and sufficient condition for the quadratic form v (x) = xᵀpx to be positive definite is that all successive principal minors of p be positive, i.e.

\[ p_{11} > 0, \qquad \begin{vmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{vmatrix} > 0, \qquad \begin{vmatrix} p_{11} & p_{12} & p_{13} \\ p_{12} & p_{22} & p_{23} \\ p_{13} & p_{23} & p_{33} \end{vmatrix} > 0, \quad \ldots \]
Example
Use Sylvester’s criterion to show that the function below is positive definite

v (x) = 10x21 + 4x22 + x23 + 2x1 x2 − 2x2 x3 − 4x1 x3

Solution
For a third order Sylvester, the quadratic form is given by

  
\[ v(x) = x^T p x = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix} \begin{bmatrix} p_{11} & p_{12} & p_{13} \\ p_{12} & p_{22} & p_{23} \\ p_{13} & p_{23} & p_{33} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \]

\[ = p_{11}x_1^2 + p_{22}x_2^2 + p_{33}x_3^2 + 2p_{12}x_1x_2 + 2p_{13}x_1x_3 + 2p_{23}x_2x_3 \]

it follows that

 
10 1 −2
p= 1 4 −1
 

−2 −1 1

\[ p_{11} = 10 > 0 \]

\[ \begin{vmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{vmatrix} = \begin{vmatrix} 10 & 1 \\ 1 & 4 \end{vmatrix} = 39 > 0 \]

\[ \begin{vmatrix} p_{11} & p_{12} & p_{13} \\ p_{12} & p_{22} & p_{23} \\ p_{13} & p_{23} & p_{33} \end{vmatrix} = \begin{vmatrix} 10 & 1 & -2 \\ 1 & 4 & -1 \\ -2 & -1 & 1 \end{vmatrix} \]


= 10 (4 − 1) − 1 (1 − 2) − 2 (−1 + 8) = 17 > 0

The matrix satisfies Sylvester's criterion for positive definiteness, so the function is positive definite
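The leading principal minors are easily checked numerically (a quick NumPy verification of the values 10, 39 and 17 above):

```python
import numpy as np

p = np.array([[10.0,  1.0, -2.0],
              [ 1.0,  4.0, -1.0],
              [-2.0, -1.0,  1.0]])

# Sylvester's criterion: all leading principal minors must be positive
minors = [np.linalg.det(p[:k, :k]) for k in (1, 2, 3)]
print(minors)   # approximately [10.0, 39.0, 17.0]
```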

Lyapunov Functions

Suppose v (x) has the following properties

• v (0) = 0

• v (x) > 0 for x ≠ 0

• v (x) is continuous and has continuous derivative w.r.t all components of x

• v̇ (x) ≤ 0 i.e the time derivative of v (x) is less or equal to zero along the trajectories of the
system

such a function is known as a Lyapunov Function


If there exists such a function v (x) with continuous partial derivatives and v̇ (x) ≤ 0 along the trajectories of the system, then the equilibrium state at the origin is asymptotically stable.
Conversely, if the origin of a particular system is asymptotically stable, a Lyapunov function with the required properties will always exist.
NB:

• finding a Lyapunov function for a non linear system may be quite difficult

• failure to find a Lyapunov function for a particular system does not mean that Lyapunov
function for that system does not exist

• the Lyapunov function for a system is not unique

Lyapunov Stability Analysis

Consider the linear time invariant (LTI) system

\[ \dot{x} = Ax \qquad (1) \]

Let v (x) be in quadratic form

\[ v(x) = x^T p x \qquad (2) \]

By the chain rule we can write

\[ \dot{v}(x) = \dot{x}^T p x + x^T p \dot{x} \qquad (3) \]


Substituting (1) in (3),

\[ \dot{v}(x) = (Ax)^T p x + x^T p (Ax) = x^T\left(A^T p + pA\right)x = -x^T \varphi x \]

where

\[ \varphi = -\left(A^T p + pA\right) \]

To satisfy v̇ (x) < 0, xᵀϕx must be positive definite.
Instead of specifying a positive definite matrix p and examining whether or not ϕ is positive definite, it is more convenient to specify a positive definite matrix ϕ first, then work through the equation ϕ = −(Aᵀp + pA) to determine p, and then check whether p is positive definite.
If p is positive definite, then the system is asymptotically stable.
Usually ϕ is chosen as the identity matrix.
NB: p is taken to be a real symmetric matrix, and both p and ϕ must be positive definite.
To determine p we equate Aᵀp + pA to −ϕ element by element.
If the system is stable, then xᵀpx is a Lyapunov function for the system.
Example
Use Lyapunov method to determine if the system below is stable
\[ \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} -1 & 2 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \]
Solution
\[ A^T = \begin{bmatrix} -1 & 1 \\ 2 & -1 \end{bmatrix} \]

Let

\[ p = \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix} \quad \text{(a real symmetric matrix)}, \qquad \varphi = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \]

then

\[ A^T p + pA = \begin{bmatrix} -1 & 1 \\ 2 & -1 \end{bmatrix} \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix} + \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix} \begin{bmatrix} -1 & 2 \\ 1 & -1 \end{bmatrix} \]


\[ = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} \]

or

\[ \begin{bmatrix} -2p_{11} + 2p_{12} & 2p_{11} - 2p_{12} + p_{22} \\ 2p_{11} - 2p_{12} + p_{22} & 4p_{12} - 2p_{22} \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} \]
which gives the simultaneous equation

−2p11 + 2p12 = −1

2p11 − 2p12 + p22 = 0

4p12 − 2p22 = −1

solving for p11 p12 and p22 gives


\[ p_{11} = -0.25, \qquad p_{12} = -0.75, \qquad p_{22} = -1 \]

\[ p = \begin{bmatrix} -0.25 & -0.75 \\ -0.75 & -1 \end{bmatrix} \]
The matrix p above does not meet Sylvester's criterion for positive definiteness
Hence the system is not stable and a Lyapunov function for this system does not exist
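The same p can be obtained directly with SciPy's continuous Lyapunov solver; its eigenvalues confirm that p is not positive definite (a sketch — note SciPy solves a p + p aᵀ = q, so Aᵀ is passed to obtain Aᵀp + pA = −I):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0,  2.0],
              [ 1.0, -1.0]])

# solve A^T p + p A = -I  (pass A^T because SciPy solves a p + p a^T = q)
p = solve_continuous_lyapunov(A.T, -np.eye(2))
print(p)                                   # [[-0.25, -0.75], [-0.75, -1.0]]

pos_def = bool(np.all(np.linalg.eigvals(p) > 0))
print(pos_def)                             # False: not positive definite, system unstable
```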
Example
\[ \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} -5 & -4 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \]
\[ p = \begin{bmatrix} \frac{1}{3} & \frac{7}{12} \\ \frac{7}{12} & \frac{11}{6} \end{bmatrix} \]

⇒ stable, with Lyapunov function

\[ v(x) = x^T p x = \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} \frac{1}{3} & \frac{7}{12} \\ \frac{7}{12} & \frac{11}{6} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \frac{1}{3}x_1^2 + \frac{7}{6}x_1 x_2 + \frac{11}{6}x_2^2 \]


\[ A = \begin{bmatrix} -5 & -4 \\ 2 & 1 \end{bmatrix}, \qquad A^T = \begin{bmatrix} -5 & 2 \\ -4 & 1 \end{bmatrix}, \qquad p = \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix} \]

\[ -\varphi = A^T p + pA \]

\[ \begin{bmatrix} -5 & 2 \\ -4 & 1 \end{bmatrix} \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix} + \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix} \begin{bmatrix} -5 & -4 \\ 2 & 1 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} \]

\[ \begin{bmatrix} -10p_{11} + 4p_{12} & -4p_{11} - 4p_{12} + 2p_{22} \\ -4p_{11} - 4p_{12} + 2p_{22} & -8p_{12} + 2p_{22} \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} \]

−10p11 + 4p12 = −1

−4p11 − 4p12 + 2p22 = 0

−8p12 + 2p22 = −1

\[ p_{11} = \frac{1}{3}, \qquad p_{12} = \frac{7}{12}, \qquad p_{22} = \frac{11}{6} \]

\[ p = \begin{bmatrix} \frac{1}{3} & \frac{7}{12} \\ \frac{7}{12} & \frac{11}{6} \end{bmatrix} \]

\[ p_{11} = \frac{1}{3} > 0 \]

\[ \begin{vmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{vmatrix} = \begin{vmatrix} \frac{1}{3} & \frac{7}{12} \\ \frac{7}{12} & \frac{11}{6} \end{vmatrix} = \frac{39}{144} > 0 \]

⇒ satisfies Sylvester’s criterion


⇒ xT px is positive definite function
system is asymptotically stable

\[ \dot{v}(x) = -x^T \varphi x = -\begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \]


\[ = -\left(x_1^2 + x_2^2\right) \]
Example
Use Lyapunov method to find conditions for the stability of a linear system described by the state
matrix
\[ \dot{x} = \begin{bmatrix} -\alpha & \beta \\ -\beta & -\alpha \end{bmatrix} x \]

Solution

\[ A = \begin{bmatrix} -\alpha & \beta \\ -\beta & -\alpha \end{bmatrix}, \qquad A^T = \begin{bmatrix} -\alpha & -\beta \\ \beta & -\alpha \end{bmatrix}, \qquad p = \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix}, \qquad -\varphi = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} \]

 
\[ -\varphi = A^T p + pA \]

\[ \begin{bmatrix} -\alpha & -\beta \\ \beta & -\alpha \end{bmatrix} \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix} + \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix} \begin{bmatrix} -\alpha & \beta \\ -\beta & -\alpha \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} \]

\[ \begin{bmatrix} -2\alpha p_{11} - 2\beta p_{12} & \beta p_{11} - 2\alpha p_{12} - \beta p_{22} \\ \beta p_{11} - 2\alpha p_{12} - \beta p_{22} & 2\beta p_{12} - 2\alpha p_{22} \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} \]

−2αp11 − 2βp12 = −1

βp11 − 2αp12 − βp22 = 0

2βp12 − 2αp22 = −1

\[ p_{11} = \frac{1}{2\alpha}, \qquad p_{12} = 0, \qquad p_{22} = \frac{1}{2\alpha} \]

\[ p = \begin{bmatrix} \frac{1}{2\alpha} & 0 \\ 0 & \frac{1}{2\alpha} \end{bmatrix} \]

\[ p_{11} = \frac{1}{2\alpha} > 0 \quad \text{for } \alpha > 0 \]


 
\[ \begin{vmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{vmatrix} = \begin{vmatrix} \frac{1}{2\alpha} & 0 \\ 0 & \frac{1}{2\alpha} \end{vmatrix} = \left(\frac{1}{2\alpha}\right)^2 > 0 \]

the system is stable as long as α > 0


Example
Consider the second order system described by
" # " #" #
ẋ1 0 1 x1
=
ẋ2 −1 −1 x2
Determine the stability of the system using Lyapunov method
Solution

v (x) = xT px

AT p + pA = −ϕ

\[ A = \begin{bmatrix} 0 & 1 \\ -1 & -1 \end{bmatrix}, \qquad A^T = \begin{bmatrix} 0 & -1 \\ 1 & -1 \end{bmatrix}, \qquad p = \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix}, \qquad -\varphi = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} \]

\[ \begin{bmatrix} 0 & -1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix} + \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix} \begin{bmatrix} 0 & 1 \\ -1 & -1 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} \]

which gives

\[ -2p_{12} = -1 \]
\[ p_{11} - p_{12} - p_{22} = 0 \]
\[ 2p_{12} - 2p_{22} = -1 \]

\[ p_{11} = \frac{3}{2}, \qquad p_{12} = \frac{1}{2}, \qquad p_{22} = 1 \]

\[ p = \begin{bmatrix} \frac{3}{2} & \frac{1}{2} \\ \frac{1}{2} & 1 \end{bmatrix} \]

\[ p_{11} = \frac{3}{2} > 0 \]

\[ \begin{vmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{vmatrix} = \frac{3}{2} - \frac{1}{4} = \frac{5}{4} > 0 \]


p is positive definite. Hence the equilibrium state at the origin is asymptotically stable in the large.
Lyapunov function

\[ v(x) = x^T p x = \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} \frac{3}{2} & \frac{1}{2} \\ \frac{1}{2} & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \frac{3}{2}x_1^2 + x_1 x_2 + x_2^2 \]


4 Optimal Control
The objective of optimal control theory is to determine the control signal that will optimize (max-
imize or minimize) some performance criterion while at the same time satisfying the physical con-
straints of the system
Examples: finding a control strategy to transfer a system from an initial state to a final state using minimum energy; transferring a system from an initial state to a final state in minimum time, given that the input to the system is limited to a certain value; or sending a rocket to space while minimizing fuel consumption.
The formulation of the control problem requires

• A mathematical description (or model) of the process to be controlled

• specifications of the performance criterion

• a statement of physical constraints

4.1 Performance Index

1. Minimum time problem



We intend to transfer the system from an initial state x (t0 )to a specified final state x tf in
minimum time.
The performance index is given by
\[ J = t_f - t_0 = \int_{t_0}^{t_f} dt \]

2. Minimum control effort (energy) problem

The rate of expenditure of energy is proportional to u²(t), so to minimize energy we minimize

J = ∫_{t0}^{tf} u²(t) dt

For several control inputs, the equation above takes the form

J = ∫_{t0}^{tf} uᵀ(t) u(t) dt

To allow for greater generality by weighting different control signals separately, we can write

J = ∫_{t0}^{tf} uᵀ(t) R u(t) dt


where R is a positive definite matrix.

3. Tracking problem

The objective is to maintain the state x(t) as close as possible to the desired state xd(t).
The performance index is

J = ∫_{t0}^{tf} [x(t) − xd(t)]ᵀ ϕ [x(t) − xd(t)] dt

where ϕ is a positive definite matrix.

4. State regulator problem

This is a special case of the tracking problem above where xd(t) = 0.
The performance index is

J = ∫_{t0}^{tf} xᵀ(t) ϕ x(t) dt

5. Terminal control problem

In a terminal control problem, we would like to minimize the deviation of the final state
x(tf) from the desired state xd(tf).
The performance measure is

J = [x(tf) − xd(tf)]ᵀ H [x(tf) − xd(tf)]

If xd(tf) = 0,

J = xᵀ(tf) H x(tf)

6. General optimal control problem

The general performance index is given by

J = [x(tf) − xd(tf)]ᵀ H [x(tf) − xd(tf)] + ∫_{t0}^{tf} ( [x(t) − xd(t)]ᵀ ϕ [x(t) − xd(t)] + uᵀ(t) R u(t) ) dt

If xd(t) = 0, then

J = xᵀ(tf) H x(tf) + ∫_{t0}^{tf} ( xᵀ(t) ϕ x(t) + uᵀ(t) R u(t) ) dt

4.2 Solution to the Optimal Control problem

There are three methods of solving Optimal Control problems

1. Calculus of variations


2. The minimum principle

3. Dynamic programming

Calculus of Variation

Calculus of variation and Optimal Control

Calculus of variation deals with finding the optimum value of a functional.

Basic Concepts

Function and Functional

Function A variable x is a function of a variable quantity t, written x(t) = f(t), if to every
value of t over a certain range there corresponds a value x; i.e. to each number t there
corresponds a number x. e.g.

x(t) = 2t² + 1

for t = 1, x = 3; for t = 2, x = 9; etc.

Functional A variable quantity J is a functional dependent on a function f(x), written
J = J[f(x)], if to each function f(x) there corresponds a value J; i.e. to each function f(x)
there corresponds a number J. e.g. for

x(t) = 2t² + 1

J[x(t)] = ∫_0^1 x(t) dt = ∫_0^1 (2t² + 1) dt = 5/3
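The distinction matters computationally: a functional consumes a whole function and returns a single number. A small SymPy sketch (my own illustration, assuming SymPy is available) evaluating the functional above:

```python
import sympy as sp

t = sp.symbols('t')
x = 2*t**2 + 1                      # the function x(t)

# The functional J[x] maps the whole function x(t) to a single number
J = sp.integrate(x, (t, 0, 1))
assert J == sp.Rational(5, 3)
```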

110
PID Controllers Lecture Notes by A. M. Muhia

Increment

Increment of a function The increment of a function f, denoted by Δf, is given by

Δf = f(t + Δt) − f(t)

Increment of a functional The increment of a functional J, denoted by ΔJ, is given by

ΔJ = J(x(t) + δx(t)) − J(x(t))

δx(t) is called the variation of the function x(t).

The basic variation problem

Let x(t) be a scalar function with continuous first derivatives.

The basic variational problem is to find the optimal x(t), usually denoted as x*(t), for which
the functional

J(x(t)) = ∫_{t0}^{tf} v(x(t), ẋ(t), t) dt

has an optimum value (minimum or maximum).

It is assumed that the integrand v has continuous first and second derivatives w.r.t all its arguments.
The necessary condition for optimality is that

∂v(x*(t), ẋ*(t), t)/∂x − d/dt [ ∂v(x*(t), ẋ*(t), t)/∂ẋ ] = 0

i.e.

∂v/∂x − d/dt ( ∂v/∂ẋ ) = 0

for all t ∈ [t0, tf].
This is known as the Euler-Lagrange equation.
Compliance with the Euler-Lagrange equation is only a necessary condition for an optimum. A
solution may sometimes yield neither a maximum nor a minimum, just an inflection point where
the derivative vanishes.
However, if the Euler-Lagrange equation is not satisfied for any function, this indicates that an
optimum does not exist for the functional.
Example
Find the x(t) which minimizes the cost function

J = ∫_0^2 [ 2x²(t) + 2x(t)ẋ(t) + ẋ²(t) ] dt

and satisfies the boundary conditions x(0) = 0, x(2) = 1.


Solution
From the above functional

v(x(t), ẋ(t), t) = 2x²(t) + 2x(t)ẋ(t) + ẋ²(t)

∂v/∂x = 4x(t) + 2ẋ(t)

∂v/∂ẋ = 2x(t) + 2ẋ(t)

d/dt ( ∂v/∂ẋ ) = 2ẋ(t) + 2ẍ(t)

∂v/∂x − d/dt ( ∂v/∂ẋ ) = 0

⇒ 4x(t) + 2ẋ(t) − 2ẋ(t) − 2ẍ(t) = 0

4x(t) − 2ẍ(t) = 0

ẍ(t) − 2x(t) = 0

Solving the differential equation: the characteristic equation is

m² − 2 = 0

m = ±√2,   m1 = √2,   m2 = −√2

x*(t) = c1 e^{−√2 t} + c2 e^{√2 t}

where c1 and c2 are constants.

The boundary conditions are substituted to obtain the values of c1 and c2:

x(0) = 0 ⇒ c1 + c2 = 0

x(2) = 1 ⇒ c1 e^{−2√2} + c2 e^{2√2} = 1

c1 = 1/(e^{−2√2} − e^{2√2}),   c2 = −1/(e^{−2√2} − e^{2√2})
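The extremal found above can be checked symbolically; the sketch below (my own, using SymPy) confirms that x*(t) satisfies ẍ − 2x = 0 together with the boundary conditions:

```python
import sympy as sp

t = sp.symbols('t')
c2 = 1/(sp.exp(2*sp.sqrt(2)) - sp.exp(-2*sp.sqrt(2)))
c1 = -c2
x = c1*sp.exp(-sp.sqrt(2)*t) + c2*sp.exp(sp.sqrt(2)*t)

# Euler-Lagrange equation reduced to x'' - 2x = 0
assert sp.simplify(sp.diff(x, t, 2) - 2*x) == 0

# Boundary conditions x(0) = 0 and x(2) = 1
assert sp.simplify(x.subs(t, 0)) == 0
assert sp.simplify(x.subs(t, 2) - 1) == 0
```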


Example
Find the optimum of

J = ∫_0^2 [ ẋ²(t) + 2t x(t) ] dt

subject to the boundary conditions x(0) = 1, x(2) = 5.
Solution

v(x(t), ẋ(t), t) = ẋ²(t) + 2t x(t)

∂v/∂x = 2t

∂v/∂ẋ = 2ẋ(t)

d/dt ( ∂v/∂ẋ ) = 2ẍ(t)

∂v/∂x − d/dt ( ∂v/∂ẋ ) = 2t − 2ẍ(t) = 0

⇒ ẍ(t) = t

ẋ(t) = t²/2 + c1

x(t) = t³/6 + c1 t + c2

Substituting the boundary conditions:

x(0) = 1 ⇒ c2 = 1

x(2) = 5 ⇒ 5 = 8/6 + 2c1 + 1 ⇒ c1 = 4/3

x*(t) = t³/6 + (4/3)t + 1
6 3


Functional involving several Independent functions

Consider the functional

J(x1, x2, . . . xn) = ∫_{t0}^{tf} v(x1(t), . . . xn(t), ẋ1(t), . . . ẋn(t), t) dt

The above equation can be written in a more compact form as

J(x(t)) = ∫_{t0}^{tf} v(x(t), ẋ(t), t) dt

where

x(t) = [ x1(t), x2(t), . . . , xn(t) ]ᵀ

ẋ(t) = [ ẋ1(t), ẋ2(t), . . . , ẋn(t) ]ᵀ = [ (d/dt)x1(t), (d/dt)x2(t), . . . , (d/dt)xn(t) ]ᵀ

The corresponding Euler-Lagrange equation is

∂v(x*(t), ẋ*(t), t)/∂x − d/dt [ ∂v(x*(t), ẋ*(t), t)/∂ẋ ] = 0

In expanded form

∂v(x*(t), ẋ*(t), t)/∂x1 − d/dt [ ∂v(x*(t), ẋ*(t), t)/∂ẋ1 ] = 0

∂v(x*(t), ẋ*(t), t)/∂x2 − d/dt [ ∂v(x*(t), ẋ*(t), t)/∂ẋ2 ] = 0

...

∂v(x*(t), ẋ*(t), t)/∂xn − d/dt [ ∂v(x*(t), ẋ*(t), t)/∂ẋn ] = 0

or simply

∂v/∂x1 − d/dt ( ∂v/∂ẋ1 ) = 0

∂v/∂x2 − d/dt ( ∂v/∂ẋ2 ) = 0

...

∂v/∂xn − d/dt ( ∂v/∂ẋn ) = 0

Example
Find the extrema for the functional

J(x) = ∫_0^{π/4} [ x1²(t) + ẋ2²(t) + ẋ1(t)ẋ2(t) ] dt

where the boundary conditions are

x1(0) = 0,   x2(0) = 0,   x1(π/4) = 1,   x2(π/4) = −1
Solution

v(x(t), ẋ(t), t) = x1²(t) + ẋ2²(t) + ẋ1(t)ẋ2(t)

∂v/∂x1 = 2x1(t);   ∂v/∂ẋ1 = ẋ2(t);   d/dt ( ∂v/∂ẋ1 ) = ẍ2(t)

∂v/∂x2 = 0;   ∂v/∂ẋ2 = 2ẋ2(t) + ẋ1(t);   d/dt ( ∂v/∂ẋ2 ) = 2ẍ2(t) + ẍ1(t)

∂v/∂x1 − d/dt ( ∂v/∂ẋ1 ) = 0   ⇒   2x1(t) − ẍ2(t) = 0

∂v/∂x2 − d/dt ( ∂v/∂ẋ2 ) = 0   ⇒   2ẍ2(t) + ẍ1(t) = 0

Eliminating ẍ2(t) between the two equations gives

ẍ1(t) + 4x1(t) = 0

x1(t) = c1 cos 2t + c2 sin 2t

ẍ2(t) = 2x1(t) = 2c1 cos 2t + 2c2 sin 2t

ẋ2(t) = c1 sin 2t − c2 cos 2t + c3

x2(t) = −(1/2)c1 cos 2t − (1/2)c2 sin 2t + c3 t + c4

Substituting the boundary conditions:

x1(0) = 0 ⇒ c1 = 0;   x2(0) = 0 ⇒ c4 = 0

x1(π/4) = 1 ⇒ c2 = 1;   x2(π/4) = −1 ⇒ c3 = −2/π

x1*(t) = sin 2t

x2*(t) = −(1/2) sin 2t − (2/π) t
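A quick symbolic check of the coupled extremals (my own sketch using SymPy; not from the notes):

```python
import sympy as sp

t = sp.symbols('t')
x1 = sp.sin(2*t)
x2 = -sp.sin(2*t)/2 - 2*t/sp.pi

# Coupled Euler-Lagrange equations: 2 x1 - x2'' = 0 and x1'' + 2 x2'' = 0
assert sp.simplify(2*x1 - sp.diff(x2, t, 2)) == 0
assert sp.simplify(sp.diff(x1, t, 2) + 2*sp.diff(x2, t, 2)) == 0

# Boundary conditions at t = 0 and t = pi/4
assert x1.subs(t, 0) == 0 and x2.subs(t, 0) == 0
assert x1.subs(t, sp.pi/4) == 1 and x2.subs(t, sp.pi/4) == -1
```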
2 π

The Lagrange Multiplier

The elimination method (direct method) used previously gets tedious for higher order problems.
The Lagrange multiplier method, on the other hand, adjoins the constraint to the original function,
and the adjoined function is extremized in the usual way.
Consider the extremum of the function f(x1, x2) subject to the condition g(x1, x2) = 0.
We form an augmented Lagrangian function

L(x1, x2, λ) = f(x1, x2) + λ g(x1, x2)

A necessary condition for an extremum is that

df = dL = 0

( ∂f/∂x1 + λ ∂g/∂x1 ) dx1 + ( ∂f/∂x2 + λ ∂g/∂x2 ) dx2 = 0
Example

In a 2-dimensional space, find the point on the line x1 + x2 = 5 that is nearest to the origin.
Solution

Elimination method The square of the distance from the origin is given by

f(x1, x2) = x1² + x2²

This function is to be minimized subject to the condition

x1 + x2 = 5

x1 = 5 − x2

f(x2) = (5 − x2)² + x2² = 2x2² − 10x2 + 25

∂f(x2)/∂x2 = 4x2 − 10 = 0

x2* = 2.5,   x1* = 2.5

∂²f(x2)/∂x2² = 4 > 0

so this is a minimum point. It follows that the minimum distance from the origin is 5/√2.

The Lagrange Multiplier method

L(x1, x2, λ) = x1² + x2² + λ(x1 + x2 − 5)

For the values of x1 and x2 that satisfy the constraint, the term λ(x1 + x2 − 5) equals zero,
and so we have simply added a zero to the function to be minimized.
Taking partial derivatives of the function w.r.t all the variables and equating to zero:

∂L/∂x1 = 2x1 + λ = 0

∂L/∂x2 = 2x2 + λ = 0

∂L/∂λ = x1 + x2 − 5 = 0


Solving simultaneously we obtain

x1* = 2.5,   x2* = 2.5,   λ* = −5
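The stationarity conditions above form a linear system that SymPy can solve directly (an illustrative sketch of my own; the symbol names are assumptions):

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
L = x1**2 + x2**2 + lam*(x1 + x2 - 5)   # augmented Lagrangian

# Stationarity w.r.t. all three variables, treated as independent
sols = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)], [x1, x2, lam], dict=True)
assert sols[0][x1] == sp.Rational(5, 2)
assert sols[0][x2] == sp.Rational(5, 2)
assert sols[0][lam] == -5
```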

Example
A manufacturer would like to package his products in cylindrical containers. The cost of each empty
container is proportional to the area of the material that is used to manufacture the can, i.e. the
cost of each container is proportional to its surface area. The manufacturer would like to maximize
the volume of the cylindrical container for a given surface area A0. Assuming the radius of the
container is r and the height is h, determine the relationship between r and h that maximizes the
volume of the container subject to the constraint that the surface area of the container is a
constant A0.
Solution

Elimination method We maximize the volume

v(r, h) = πr²h

subject to the surface area constraint

A(r, h) = 2πr² + 2πrh = A0

We eliminate one of the variables and maximize the resulting function:

h = (A0 − 2πr²)/(2πr) = A0/(2πr) − r

v = πr²h = (1/2)A0 r − πr³

hence

dv/dr = (1/2)A0 − 3πr² = 0

r = √(A0/(6π))

h = A0/(2πr) − r = √(4A0/(6π)) = √(2A0/(3π))

The second derivative is

d²v/dr² = −6πr < 0

Hence this is a maximum point.

The Lagrange Multiplier method The constraint is written as

2πr² + 2πrh − A0 = 0

L(r, h, λ) = πr²h + λ(2πr² + 2πrh − A0)

The function L is known as the Lagrangian, and is a function of 3 variables: r, h, and λ.

Taking the partial derivative of L w.r.t each of these variables and equating to zero:

∂L/∂r = 2πrh + λ(4πr + 2πh) = 0

∂L/∂h = πr² + 2πrλ = 0

∂L/∂λ = 2πr² + 2πrh − A0 = 0

Solving the above equations simultaneously gives

r = √(A0/(6π)),   h = √(2A0/(3π))

For a given surface area, the volume is maximized if the height is twice the radius.
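Rather than asking a solver to crack the nonlinear system, the sketch below (my own, using SymPy) simply verifies that the claimed optimum r* = √(A0/(6π)), h* = 2r*, with the multiplier λ* = −r*/2 implied by ∂L/∂h = 0, makes all three partial derivatives of the Lagrangian vanish:

```python
import sympy as sp

r, h, A0 = sp.symbols('r h A0', positive=True)
lam = sp.symbols('lam')   # the multiplier turns out to be negative here
L = sp.pi*r**2*h + lam*(2*sp.pi*r**2 + 2*sp.pi*r*h - A0)

# Claimed optimum: r* = sqrt(A0/(6 pi)), h* = 2 r*, lam* = -r*/2
r_star = sp.sqrt(A0/(6*sp.pi))
subs = {r: r_star, h: 2*r_star, lam: -r_star/2}

# All three stationarity conditions vanish at the claimed optimum
for v in (r, h, lam):
    assert sp.simplify(sp.diff(L, v).subs(subs)) == 0
```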

The Lagrange Multiplier

Consider the problem of finding the extrema of a real valued function f(x) = f(x1, x2, . . . xn)
subject to the conditions

g1(x) = g1(x1, x2, . . . xn) = 0

g2(x) = g2(x1, x2, . . . xn) = 0

...

gm(x) = gm(x1, x2, . . . xn) = 0

where f and the gi have continuous derivatives and m < n.

Let λ1, λ2, . . . λm be the Lagrange multipliers corresponding to the m conditions.
The Lagrange function is

L(x1, . . . xn, λ1, . . . λm) = f(x1, . . . xn) + λ1 g1(x1, . . . xn) + · · · + λm gm(x1, . . . xn)

or, in vector form,

L(x, λ) = f(x) + λᵀ g(x)

The optimal values x* and λ* are solutions to the following (n + m) equations:

∂L/∂x1 = ∂f/∂x1 + λ1 ∂g1/∂x1 + λ2 ∂g2/∂x1 + · · · + λm ∂gm/∂x1 = 0

∂L/∂x2 = ∂f/∂x2 + λ1 ∂g1/∂x2 + λ2 ∂g2/∂x2 + · · · + λm ∂gm/∂x2 = 0

...

∂L/∂xn = ∂f/∂xn + λ1 ∂g1/∂xn + λ2 ∂g2/∂xn + · · · + λm ∂gm/∂xn = 0

∂L/∂λ1 = g1(x1, x2, . . . xn) = 0

∂L/∂λ2 = g2(x1, x2, . . . xn) = 0

...

∂L/∂λm = gm(x1, x2, . . . xn) = 0

or more compactly

∂L/∂x = ∂f/∂x + λᵀ ∂g/∂x = 0

∂L/∂λ = g(x) = 0

NB:
The introduction of the Lagrange multipliers allows us to treat all variables in the augmented
function L(x1, . . . xn, λ1, . . . λm) as though they were independent variables.
The Lagrange multiplier makes it easy to solve problems of constrained extremization, but its
value is not itself important.

Extreme of functional with conditions

Consider the extremization of the performance index in the form of a functional

J(x1, . . . xn, t) = ∫_{t0}^{tf} v(x1, . . . xn, ẋ1, . . . ẋn, t) dt

subject to the conditions (or plant system equations)

g1(x1, . . . xn, ẋ1, . . . ẋn, t) = 0

g2(x1, . . . xn, ẋ1, . . . ẋn, t) = 0

...

gm(x1, . . . xn, ẋ1, . . . ẋn, t) = 0

We form the augmented functional

J = ∫_{t0}^{tf} L(x1, . . . xn, ẋ1, . . . ẋn, λ1(t), . . . λm(t), t) dt

where the λs are the Lagrange multipliers and the Lagrangian is defined as

L(x1, . . . xn, ẋ1, . . . ẋn, λ1(t), . . . λm(t), t) = v(x1, . . . xn, ẋ1, . . . ẋn, t)
+ λ1(t) g1(x1, . . . xn, ẋ1, . . . ẋn, t) + · · · + λm(t) gm(x1, . . . xn, ẋ1, . . . ẋn, t)

The necessary conditions for extremization of the functional subject to the conditions are

∂L/∂x − d/dt ( ∂L/∂ẋ ) = 0

∂L/∂λ − d/dt ( ∂L/∂λ̇ ) = 0

In Control Systems

Applying the results to control systems, consider the problem of extremizing the functional

J(x, u, t) = ∫_{t0}^{tf} v(x, u, t) dt

subject to the plant equation

ẋ(t) = f(x(t), u(t), t)

where

x(t) = [ x1(t), x2(t), . . . , xn(t) ]ᵀ   and   u(t) = [ u1(t), u2(t), . . . , um(t) ]ᵀ

The optimization problem is to extremize the functional

J(x1, . . . xn, u1, . . . um, t) = ∫_{t0}^{tf} v(x1, . . . xn, u1, . . . um, ẋ1, . . . ẋn, u̇1, . . . u̇m, t) dt

subject to the constraints

f1(x1, . . . xn, u1, . . . um, t) − ẋ1(t) = 0

f2(x1, . . . xn, u1, . . . um, t) − ẋ2(t) = 0

...

fn(x1, . . . xn, u1, . . . um, t) − ẋn(t) = 0

The Lagrangian is given by

L(x1, . . . xn, u1, . . . um, ẋ1, . . . ẋn, u̇1, . . . u̇m, λ1, . . . λn, t)
= v(x1, . . . xn, u1, . . . um, ẋ1, . . . ẋn, u̇1, . . . u̇m, t)
+ λ1(t) [ f1(x1, . . . xn, u1, . . . um, t) − ẋ1(t) ] + · · ·
+ λn(t) [ fn(x1, . . . xn, u1, . . . um, t) − ẋn(t) ]

The necessary conditions for extremization become

∂L/∂x1 − d/dt ( ∂L/∂ẋ1 ) = 0
...
∂L/∂xn − d/dt ( ∂L/∂ẋn ) = 0

∂L/∂u1 − d/dt ( ∂L/∂u̇1 ) = 0
...
∂L/∂um − d/dt ( ∂L/∂u̇m ) = 0

∂L/∂λ1 − d/dt ( ∂L/∂λ̇1 ) = 0
...
∂L/∂λn − d/dt ( ∂L/∂λ̇n ) = 0

Example
Suppose that the system described by the state equations

ẋ1(t) = x2(t)

ẋ2(t) = u(t)

is to be controlled to minimize the performance index

J(x1, x2, u) = (1/2) ∫_0^2 u²(t) dt

with boundary conditions

x1(0) = 1,   x2(0) = 2

and

x1(2) = 1,   x2(2) = 0

Find the optimal values of x1(t), x2(t) and u(t).


Solution
The constraints are

ẋ1(t) − x2(t) = 0

ẋ2(t) − u(t) = 0

The Lagrangian is given by

L = (1/2)u²(t) + λ1(t)[ẋ1(t) − x2(t)] + λ2(t)[ẋ2(t) − u(t)]

∂L/∂x1 = 0;   ∂L/∂ẋ1 = λ1(t);   d/dt ( ∂L/∂ẋ1 ) = λ̇1(t)   ⇒   −λ̇1(t) = 0

∂L/∂x2 = −λ1(t);   ∂L/∂ẋ2 = λ2(t);   d/dt ( ∂L/∂ẋ2 ) = λ̇2(t)   ⇒   −λ1(t) − λ̇2(t) = 0

∂L/∂u = u(t) − λ2(t);   ∂L/∂u̇ = 0;   d/dt ( ∂L/∂u̇ ) = 0   ⇒   u(t) − λ2(t) = 0

∂L/∂λ1 = ẋ1(t) − x2(t);   ∂L/∂λ̇1 = 0;   d/dt ( ∂L/∂λ̇1 ) = 0   ⇒   ẋ1(t) − x2(t) = 0

∂L/∂λ2 = ẋ2(t) − u(t);   ∂L/∂λ̇2 = 0;   d/dt ( ∂L/∂λ̇2 ) = 0   ⇒   ẋ2(t) − u(t) = 0

Solving the simultaneous equations:

λ1(t) = c1

λ2(t) = −c1 t + c2

u(t) = −c1 t + c2

x2(t) = −(1/2)c1 t² + c2 t + c3

x1(t) = −(1/6)c1 t³ + (1/2)c2 t² + c3 t + c4

Substituting the boundary conditions:

x1(0) = 1 ⇒ c4 = 1

x2(0) = 2 ⇒ c3 = 2

x2(2) = 0 ⇒ −2c1 + 2c2 = −2

x1(2) = 1 ⇒ −(4/3)c1 + 2c2 + 4 + 1 = 1 ⇒ −(4/3)c1 + 2c2 = −4

Solving: c1 = −3, c2 = −4

The optimal values of x1(t), x2(t) and u(t) are

x1*(t) = t³/2 − 2t² + 2t + 1

x2*(t) = (3/2)t² − 4t + 2

u*(t) = 3t − 4
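The optimal trajectories can be verified against the state equations and boundary conditions, and the optimal cost evaluated (a SymPy sketch of my own, not part of the notes):

```python
import sympy as sp

t = sp.symbols('t')
x1 = t**3/2 - 2*t**2 + 2*t + 1
x2 = sp.Rational(3, 2)*t**2 - 4*t + 2
u = 3*t - 4

# State equations x1' = x2 and x2' = u
assert sp.simplify(sp.diff(x1, t) - x2) == 0
assert sp.simplify(sp.diff(x2, t) - u) == 0

# Boundary conditions
assert x1.subs(t, 0) == 1 and x2.subs(t, 0) == 2
assert x1.subs(t, 2) == 1 and x2.subs(t, 2) == 0

# Optimal cost J = (1/2) * integral of u^2 over [0, 2]
J = sp.integrate(u**2, (t, 0, 2))/2
assert J == 4
```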

The Hamiltonian

The Hamiltonian H is defined as

H(x(t), u(t), λ(t), t) = v(x(t), u(t), t) + λᵀ(t) f(x(t), u(t), t)

The above function is also known as the Pontryagin H function.

Consider the problem of extremizing the functional

J(x, u, t) = ∫_{t0}^{tf} v(x, u, t) dt

subject to the plant equation

ẋ(t) = f(x(t), u(t), t)

The optimal control u*(t) and optimal state x*(t) can be obtained by solving simultaneously the
equations

ẋ(t) = ∂H/∂λ   (state equation)

λ̇(t) = −∂H/∂x   (co-state equation)

0 = ∂H/∂u   (control equation)
Example
Given a second order system

ẋ1(t) = x2(t)

ẋ2(t) = u(t)

and performance index

J = (1/2) ∫_0^2 u²(t) dt

find the optimal control u*(t) and optimal states x1*(t) and x2*(t) given the boundary conditions

x1(0) = 1,   x2(0) = 2   and   x1(2) = 1,   x2(2) = 0
Solution
The Hamiltonian is given by

H = (1/2)u²(t) + λ1(t)x2(t) + λ2(t)u(t)

ẋ1(t) = ∂H/∂λ1 = x2(t)

ẋ2(t) = ∂H/∂λ2 = u(t)

λ̇1(t) = −∂H/∂x1 = 0

λ̇2(t) = −∂H/∂x2 = −λ1(t)

0 = ∂H/∂u = u(t) + λ2(t)

Solving the equations:

λ1(t) = c1

λ2(t) = −c1 t + c2

u(t) = c1 t − c2

x2(t) = (1/2)c1 t² − c2 t + c3

x1(t) = (1/6)c1 t³ − (1/2)c2 t² + c3 t + c4

Substituting the boundary conditions we obtain

c1 = 3,   c2 = 4,   c3 = 2,   c4 = 1

It follows that

λ1*(t) = 3

λ2*(t) = −3t + 4

u*(t) = 3t − 4

x2*(t) = (3/2)t² − 4t + 2

x1*(t) = t³/2 − 2t² + 2t + 1
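Since the trajectories coincide with the previous example, the sketch below (my own, using SymPy) checks only the Hamiltonian conditions: the co-state equations λ̇1 = 0 and λ̇2 = −λ1, and the control equation ∂H/∂u = u + λ2 = 0.

```python
import sympy as sp

t = sp.symbols('t')
lam1 = sp.Integer(3)
lam2 = -3*t + 4
u = 3*t - 4

# Co-state equations: lam1' = 0 and lam2' = -lam1
assert sp.diff(lam1, t) == 0
assert sp.diff(lam2, t) == -lam1

# Control equation: dH/du = u + lam2 = 0
assert sp.simplify(u + lam2) == 0
```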
2

4.3 Linear Quadratic Optimal Control

Problem Formulation

We consider Linear Time Invariant systems.

Consider a system described by the linear state equation

ẋ(t) = Ax(t) + Bu(t)

The performance index is given by

J = (1/2) xᵀ(tf) F x(tf) + (1/2) ∫_{t0}^{tf} [ xᵀ(t) ϕ x(t) + uᵀ(t) R u(t) ] dt

where F and ϕ are real symmetric positive semi-definite matrices and R is a real symmetric
positive definite matrix.
In this control problem it is desired to maintain the state vector close to the origin without
an excessive expenditure of control effort.
This is known as the Linear Quadratic Regulator (LQR) problem.
The Hamiltonian is given by

H(x(t), u(t), λ(t), t) = (1/2)xᵀ(t)ϕx(t) + (1/2)uᵀ(t)Ru(t) + λᵀ(t)Ax(t) + λᵀ(t)Bu(t)

Applying the identities

∂/∂y (xᵀAy) = ∂/∂y (yᵀAᵀx) = Aᵀx

and

∂/∂x (xᵀAx) = Ax + Aᵀx

and noting that the matrices ϕ and R are symmetric, the necessary conditions for optimality are

ẋ*(t) = ∂H/∂λ = Ax*(t) + Bu*(t)

λ̇*(t) = −∂H/∂x = −ϕx*(t) − Aᵀλ*(t)

0 = ∂H/∂u = Ru*(t) + Bᵀλ*(t)

Solving for u*(t) we obtain

u*(t) = −R⁻¹Bᵀλ*(t)

ẋ*(t) = Ax*(t) + Bu*(t) = Ax*(t) − BR⁻¹Bᵀλ*(t)

Solution for the LQR problem

λ*(t) and x*(t) are related. Assuming that they are related by the transformation

λ*(t) = p(t) x*(t)

u*(t) = −R⁻¹Bᵀλ*(t) = −R⁻¹Bᵀp(t)x*(t)

The state equation becomes

ẋ*(t) = Ax*(t) − BR⁻¹Bᵀλ*(t) = Ax*(t) − BR⁻¹Bᵀp(t)x*(t) = [ A − BR⁻¹Bᵀp(t) ] x*(t)

The co-state equation becomes

λ̇*(t) = −ϕx*(t) − Aᵀp(t)x*(t)

Differentiating the transformation λ*(t) = p(t)x*(t) w.r.t time,

λ̇*(t) = ṗ(t)x*(t) + p(t)ẋ*(t)

Substituting for ẋ*(t) and λ̇*(t) into the above equation:

−ϕx*(t) − Aᵀp(t)x*(t) = ṗ(t)x*(t) + p(t) [ Ax*(t) − BR⁻¹Bᵀp(t)x*(t) ]

⇒ [ ṗ(t) + p(t)A + Aᵀp(t) + ϕ − p(t)BR⁻¹Bᵀp(t) ] x*(t) = 0

⇒ ṗ(t) + p(t)A + Aᵀp(t) + ϕ − p(t)BR⁻¹Bᵀp(t) = 0

This is known as the matrix Differential Riccati equation.

The transformation is known as the Riccati transformation and p(t) is the Riccati coefficient
matrix (Riccati matrix).

Infinite time Horizon

As t → ∞ the matrix p(t) settles to a constant value, since ṗ(t) → 0, and the differential Riccati
equation reduces to

PA + AᵀP + ϕ − PBR⁻¹BᵀP = 0

This is known as the Algebraic Riccati equation.

The optimal control u*(t) is given by

u*(t) = −R⁻¹BᵀPx*(t)

and the state equation becomes

ẋ*(t) = Ax*(t) − BR⁻¹BᵀPx*(t) = [ A − BR⁻¹BᵀP ] x*(t)

Example
Find the control law u*(t) which minimizes the performance index

J = (1/2) ∫_0^∞ [ x1²(t) + x2²(t) + u²(t) ] dt

for the system

ẋ1(t) = x2(t)

ẋ2(t) = u(t)

Solution
The performance index is given by

J = (1/2) ∫_0^∞ ( [ x1(t) x2(t) ] [ 1 0 ; 0 1 ] [ x1(t) ; x2(t) ] + u(t)·1·u(t) ) dt

while the state space representation of the system is

[ ẋ1(t) ; ẋ2(t) ] = [ 0 1 ; 0 0 ] [ x1(t) ; x2(t) ] + [ 0 ; 1 ] u(t)

⇒ A = [ 0 1 ; 0 0 ]   B = [ 0 ; 1 ]   F = 0   ϕ = [ 1 0 ; 0 1 ]   R = 1

Let

p = [ p11 p12 ; p12 p22 ]

Substituting these values into the Algebraic Riccati equation:

[ p11 p12 ; p12 p22 ][ 0 1 ; 0 0 ] + [ 0 0 ; 1 0 ][ p11 p12 ; p12 p22 ] + [ 1 0 ; 0 1 ]
− [ p11 p12 ; p12 p22 ][ 0 ; 1 ] (1)⁻¹ [ 0 1 ][ p11 p12 ; p12 p22 ] = [ 0 0 ; 0 0 ]

which simplifies to

[ 1 − p12²   p11 − p12 p22 ; p11 − p12 p22   2p12 + 1 − p22² ] = [ 0 0 ; 0 0 ]

Solving the simultaneous equations

1 − p12² = 0

p11 − p12 p22 = 0

2p12 + 1 − p22² = 0

and requiring p to be a real, positive definite matrix gives

p = [ √3 1 ; 1 √3 ]

u*(t) = −R⁻¹BᵀPx*(t) = −(1)⁻¹ [ 0 1 ] [ √3 1 ; 1 √3 ] [ x1*(t) ; x2*(t) ]

= −[ 1 √3 ] [ x1*(t) ; x2*(t) ] = −x1*(t) − √3 x2*(t)
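The hand computation can be confirmed with SciPy's continuous-time algebraic Riccati solver (a sketch of my own, not part of the notes; `solve_continuous_are` takes the matrices in the order A, B, ϕ, R):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)            # phi in the notes
R = np.array([[1.0]])

# Stabilizing solution of PA + A^T P + Q - P B R^-1 B^T P = 0
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)    # feedback gain, u* = -K x*

assert np.allclose(P, [[np.sqrt(3), 1.0], [1.0, np.sqrt(3)]])
assert np.allclose(K, [[1.0, np.sqrt(3)]])
```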

