
PID vs. Linear Control -- Really?



I have seen the arguments. State space models are superior. PID is all you really need, so why
mess with state space? And so on and on. If you have no system model, and are not likely to get
one, there is little choice and you need a heuristic controller, such as PID or a fuzzy rule-based
controller. For the case where you have a rather good understanding of the system, including a
state model, but PID controllers are sufficient for an implementation, it seems a shame that the
two schemes are incompatible... or are they?
Check out the following. I have never seen an analysis of this sort in print elsewhere. It may be
old business. If this is the case I apologize in advance.

Augmenting the System Model


Assume that you have a state space model of your system, with state vector x, a single-variable
control v applied through system input u, continuous state matrix a, input coupling matrix b, and
observation matrix c to observe the relevant system output y.
// System model
x' = a x + b u
y = c x

We will consider the input u to consist of two parts: a setpoint driving term s, and a feedback term
v. The input terms in variable u are treated separately at first, with one input coupling matrix bf
for the feedback v and another one bs for the setpoint s. For purposes of simulation, we might also
include a third input variable and coupling matrix to represent a class of disturbances. This
temporary separation makes it a little easier to think about the setpoint and feedback signals
separately; the notation can be unified later.
Looking ahead, we know that the PID control rule will need to compute the difference between the
observed output variable and the setpoint, so an additional setpoint coupling vector d (not yet
defined) is reserved to make the setpoint variable visible in the observation equation.
// Reorganized system model
x' = a x + bf v + bs s
y  = c x + d s

A PID controller (in parallel form) applies three control rules to perform its computations.
Each of these rules observes variables that depend on the setpoint and the system state.
proportional control
Proportional feedback responds in proportion to the difference between the desired output
(setpoint) and the observed system output. The current values of the observed variable y and the
setpoint variable s are needed. A proportional feedback rule is:
// proportional feedback
v = -kp (y - s) = -kp ( c x - s )

where kp is the proportional gain setting of the PID controller.
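To make the rule concrete, here is a small numeric sketch of my own (not from the original page); the state x, observation row c, setpoint s, and gain kp are arbitrary values chosen only to show the computation.

% Hypothetical values, for illustration only
c  = [ 0.0  0.0  1.0  0.0 ];   % observation row: output is state 3
x  = [ 0.2; -0.1; 0.5; 0.0 ];  % current state vector
s  = 1.0;                      % setpoint level
kp = 2.0;                      % proportional gain

y  = c * x;                    % observed output, here 0.5
vp = -kp * (y - s);            % proportional feedback term, here +1.0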


derivative control
The derivative part of a PID controller is somewhat fictional. A real PID controller observes
changes in its signals and from these estimates what the derivative must be.
If you have an exact model of your PID controller internals, it should be possible to use it directly;
but most likely you do not. The formulation here uses the derivative of the observed output
variable rather than the derivative of the tracking error. Since the tracking error is the difference
between the output variable and the setpoint, the two kinds of derivative are the same for a
constant setpoint level. You might not have a good way to model setpoint changes, and the
setpoint signal might be non-differentiable. (PID controllers are famous for giving the system a
severe jolt through the derivative feedback term when the setpoint level is changed quickly.)
Omitting the direct coupling to the setpoint variable eliminates this problem.
Even after this adjustment, the derivative estimate remains sensitive to high frequency noise.
Lowpass filtering is typically applied to limit bandwidth. These details might be unspecified for
your PID control equipment. Even if you don't know the exact processing that your PID controller
equipment uses, you know that it should reasonably (though not perfectly) track the derivatives
that appear in your system model.
After all of these disclaimers, the model must now set up its derivative estimate, using information
available in the state-space model. Ignoring the s in the observation equation for reasons just
discussed, the derivative of the output variable is:
y  = c x
y' = c x' = c a x + c bf u + c bs s

Then the derivative feedback rule will have the form


//derivative feedback
v = -kd y' = -kd (c a x + c bf u + c bs s)

where kd is the derivative gain setting of the PID controller. This expression is sometimes not
quite right: after substituting the feedback v into the system input variable u, the feedback
variable v appears on both sides of the expression.
//derivative feedback
v = -kd (c a x + c bf v + c bs s)

An algebraic reduction can combine the two v terms. Define the algebraic factor Kg and use it to
simplify the derivative feedback expression above.
Kg = 1 / [1 + kd c bf]
v  = -kd Kg (c a x + c bs s)

For many, and possibly most, systems the feedback v does not couple directly into the output
variable y; for this common case the c bf product evaluates to zero, the Kg term reduces to a
value of 1, and it can be ignored.
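To see the Kg factor do something, here is a small sketch of my own, with arbitrary numbers, for a hypothetical system that does have direct coupling from the feedback input into the observed output, so that the c bf product is nonzero.

% Hypothetical system with direct feedthrough from the feedback input
a  = [ 0.0  1.0; -0.2 -0.1 ];
bf = [ 0.5; 1.0 ];                  % feedback coupling; c*bf = 0.5, nonzero
bs = [ 0.0; 1.0 ];                  % setpoint coupling
c  = [ 1.0  0.0 ];                  % observation row
kd = 2.0;                           % derivative gain

Kg = 1 / (1 + kd*(c*bf));           % algebraic reduction factor, here 0.5
x  = [ 0.3; -0.1 ];  s = 0.0;       % arbitrary state and setpoint
vd = -kd * Kg * (c*a*x + c*bs*s);   % derivative feedback after the reduction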
integral control
The PID controller integrates the difference between output y and the setpoint level s over time.
Augment the system equations with an additional artificial state variable z to represent the integral
state. Include this as an extra row in the state equations.

z' = y - s = c x - s

The integral feedback rule is then


//integral feedback
v = -ki z

and ki is the integral gain setting.
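In a discrete implementation the integrator state z is simply accumulated step by step. The short sketch below is my own (not from the original page); it uses the rectangular rule that the simulation section later mentions, and the signal values are made up.

% Rectangular-rule accumulation of the integrator state z (illustrative values)
delT = 0.5;                      % time step
ki   = 0.25;                     % integral gain (arbitrary)
s    = 1.0;                      % setpoint level
yseq = [ 0.8  0.9  1.1  1.0 ];   % a few observed output samples
z    = 0.0;                      % integrator state

for k = 1:length(yseq)
    z  = z + (yseq(k) - s)*delT; % z' = y - s, integrated rectangularly
    vi = -ki * z;                % integral feedback term at this step
end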


The system model, augmented with the additional PID integrator, is now as follows.
X = | x |        U = | v |
    | z |            | s |

A = | a  0 |     B = | bf  bs |
    | c  0 |         | 0   -1 |

// Augmented system model in original variables
x' = a x + 0 z + bf v + bs s
z' = c x + 0 z + 0 v  - 1 s

// Augmented system model
X' = A X + B U
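The bookkeeping above is mechanical, so it is easy to automate. The sketch below is my own, with an arbitrary two-state example; it only assembles the augmented A and B from a given a, bf, bs, and c.

% Assemble A = | a 0 ; c 0 | and B = | bf bs ; 0 -1 | (hypothetical system)
a  = [ 0.0  1.0; -0.2 -0.1 ];
bf = [ 0.0; 1.0 ];               % feedback input coupling
bs = [ 0.0; 0.5 ];               % setpoint input coupling
c  = [ 1.0  0.0 ];               % observation row

n = size(a,1);
A = [ a  zeros(n,1);             % original dynamics, integrator row appended
      c  0          ];
B = [ bf  bs;                    % input columns ordered as U = [v; s]
      0   -1 ];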

Observed Variables and PID Feedback


We now have everything we need, but we must collect the observed variables before applying the
PID control rule.
yp = c x - s                  // proportional error
yi = z                        // integral of proportional error
yd = Kg c a x + Kg c bs s     // output derivative
y  = c x                      // the original output variable

The expanded observation equations can be reorganized as a matrix expression with separate state- and setpoint-related terms.
    | yp |   | c       0 |           | -1      |
Y = | yi | = | 0       1 | | x |  +  |  0      | s
    | yd |   | Kg c a  0 | | z |     | Kg c bs |
    | y  |   | c       0 |           |  0      |

Y = C X + D s

Now the PID feedback can be computed. PID feedback is a weighted sum of the P, I and D control
rules, with adjustable gain parameters kp, ki and kd.


-( kp yp + ki yi + kd yd )

Define the gain vector Kpid as follows, and then the PID computations can be represented in
matrix form.
Kpid   = [ kp  ki  kd  0 ]

Kpid Y = Kpid ( C X + D s )
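The same bookkeeping covers the observation side. Again this is my own sketch with arbitrary numbers, assembling C, D, and Kpid directly from the formulas above.

% Assemble C, D and Kpid for Y = [yp; yi; yd; y] and inputs U = [v; s]
a  = [ 0.0  1.0; -0.2 -0.1 ];
bf = [ 0.0; 1.0 ];   bs = [ 0.0; 0.5 ];
c  = [ 1.0  0.0 ];
kp = 2.0;  ki = 0.25;  kd = 1.5;    % arbitrary gain settings

n  = size(a,1);
Kg = 1 / (1 + kd*(c*bf));           % reduces to 1 when c*bf = 0, as here
C  = [ c           0;               % yp row (the -s part lives in D)
       zeros(1,n)  1;               % yi row picks out z
       Kg*(c*a)    0;               % yd row
       c           0 ];             % y row
D  = [ 0  -1;
       0   0;
       0   Kg*(c*bs);
       0   0 ];
Kpid = [ kp  ki  kd  0 ];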

Augmented equation summary


The complete augmented model is as follows:
states
X = | x |
    | z |

inputs
U = | v |
    | s |

state equations
X' = A X + B U

A = | a  0 |        B = | bf  bs |
    | c  0 |            | 0   -1 |

observation equations
Kg = 1 / [1 + kd c bf]

Y = C X + D U

C = | c       0 |    D = | 0  -1      |
    | 0       1 |        | 0   0      |
    | Kg c a  0 |        | 0  Kg c bs |
    | c       0 |        | 0   0      |

PID feedback rule
u    = -Kpid Y
Kpid = [ kp  ki  kd  0 ]

We have just obtained a state space model of a system under PID control. This is a
matter of notation, not control theory. Because it is clear that this represents PID
control, and that it is a state space representation, there is no theoretical need to
choose between PID and linear control theory. That choice might need to be made
because of other practical restrictions: the model is too difficult to identify, a
state-space controller is too difficult to deploy in practice, and so on.
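One payoff of writing the PID this way is that ordinary linear analysis applies directly. For instance, substituting the feedback rule into the augmented state equation gives a closed-loop matrix whose eigenvalues can be checked for stability. The sketch below is mine, using the same arbitrary toy system as the earlier sketches, and is only meant to show the mechanics.

% Closed loop: since the v column of D is zero, Y = C X + Ds s, and
% X' = (A - Bv*Kpid*C) X + (Bs - Bv*Kpid*Ds) s
a  = [ 0.0  1.0; -0.2 -0.1 ];
bf = [ 0.0; 1.0 ];   bs = [ 0.0; 0.5 ];
c  = [ 1.0  0.0 ];
kp = 2.0;  ki = 0.25;  kd = 1.5;

n  = size(a,1);
Kg = 1 / (1 + kd*(c*bf));
A  = [ a  zeros(n,1);  c  0 ];
Bv = [ bf; 0 ];                           % feedback column of B
Bs = [ bs; -1 ];                          % setpoint column of B
C  = [ c 0;  zeros(1,n) 1;  Kg*(c*a) 0;  c 0 ];
Ds = [ -1; 0; Kg*(c*bs); 0 ];             % setpoint column of D
Kpid = [ kp  ki  kd  0 ];

Acl = A - Bv*Kpid*C;                      % closed-loop dynamics under PID feedback
disp( eig(Acl) )                          % eigenvalues indicate stability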

An Example
A hypothetical system is constructed deliberately to be extremely difficult for a PID controller, so
that the simulation always has something to show regardless of the gain settings. The problem is
to cancel out observed displacements in one of the variables by controlling an input that drives
another variable. The two variables change out of phase; consequently, the PID controller might
need "negative feedback gains" of the sort that would drive ordinary systems straight to instability.
The desired level of the disturbed variable is 0, so the setpoint variable s is 0 and the bs terms
would otherwise go unused; for this simulation, the spare bs vector is repurposed to insert a
simulated disturbance.
Here is the original system model, with the third state variable observed for feedback, while inputs
drive the fourth state variable.
sysF = [ ...
    0.0     0.0     1.0    0.0;  ...
    0.0     0.0     0.0    1.0;  ...
   -0.052   0.047  -0.01   0.0;  ...
    0.047  -0.052   0.0   -0.04  ];

sysB = [ 0.0;  0.0;  0.0;  0.01 ];

obsC = [ 0.0  0.0  1.0  0.0 ];

There is a PID controller, separately modeled, with gains


kp = 2.0; ki = -0.50; kd = -15.0;

This is simulated with time step delT = 0.5 through 250 steps, using trapezoidal-rule
integration for the system state model and rectangular-rule integration for the PID integral term.
The following plot shows state 3, which we would like to regulate to 0. The green trace is without
feedback control, and the blue trace is with PID control.
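The page does not show how that separate PID controller was coded, so the following is only a guess at its structure: a parallel-form update with rectangular-rule integration and a backward-difference estimate of the output derivative (one common choice), matching the decision above to differentiate the output rather than the tracking error. The function name and state fields are my own.

% pid_step.m -- one possible discrete parallel-form PID update (a sketch, not the author's code)
function [u, state] = pid_step(y, s, state, kp, ki, kd, delT)
    err         = y - s;                        % tracking error
    state.z     = state.z + err*delT;           % rectangular-rule integral term
    dydt        = (y - state.yprev) / delT;     % backward-difference derivative of the output
    state.yprev = y;
    u = -( kp*err + ki*state.z + kd*dydt );     % combined PID feedback
end

Before the first call, state.z would be set to 0 and state.yprev to the initial output value.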


So now the problem is reformulated to include the PID controller within the state space model.
Because the feedback drives one state variable while the output observes a different one, there is
no direct coupling into the derivative term. The Kg term reduces to 1.0 and can be omitted. Here
are the augmented equations:
sVar  = 1.0;
setpt = 0.0;

sysF = [ ...
    0.0     0.0     1.0    0.0    0.0; ...
    0.0     0.0     0.0    1.0    0.0; ...
   -0.052   0.047  -0.01   0.0    0.0; ...
    0.047  -0.052   0.0   -0.04   0.0; ...
    0.0     0.0     1.0    0.0    0.0 ];

sysBset = [ 0.0;  0.0;  0.0;  0.01;  1.0 ];
sysBfb  = [ 0.0;  0.0;  0.0;  0.01;  0.0 ];

obsC = [ ...
    0.0     0.0     1.0    0.0    0.0; ...
    0.0     0.0     0.0    0.0    1.0; ...
   -0.052   0.047  -0.01   0.0    0.0; ...
    0.0     0.0     1.0    0.0    0.0 ];

obsD = [ -1.0;  0.0;  0.0;  0.0 ];
pidK = [  2.0  -0.50  -15.0   0.0 ];

Here is the simulation for the augmented system, recording the state trajectory for later inspection.
sVar = 0;
for i=2:steps
% Current state and observed variables
xstate = hist(:,i-1);
yobs = obsC * xstate + obsD * sVar;
% Feedback law applied to current output
fb(i) = -pidK * yobs;
% Predictor step (Rectangular rule)
deriv = sysF * xstate + sysBset * sVar + sysBfb * fb(i);
xproj = xstate + deriv*delT;
yobs = obsC * xproj + obsD * sVar;
% Corrector step (Trapezoid rule)
dproj = sysF * xproj + sysBset * sVar + sysBfb * fb(i);
xstate = xstate + 0.5*(deriv+dproj)*delT;
hist(:,i) = xstate;
end
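The loop above refers to delT, steps, hist, and fb, which are set up before the fragment shown. That setup is not on the original page; a plausible version, taking delT = 0.5 and 250 steps from the text and guessing an initial state displacement to stand in for the disturbance, might be:

% Assumed setup for the simulation loop (not shown on the original page)
delT  = 0.5;                       % time step, from the text
steps = 250;                       % number of steps, from the text
hist  = zeros(5, steps);           % trajectory storage: 4 plant states plus integrator z
fb    = zeros(1, steps);           % recorded feedback values
hist(:,1) = [ 1.0; 0; 0; 0; 0 ];   % guessed initial displacement; the page does not show how
                                   % the original run injected its disturbance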

Here is the result of this simulation.


At first glance, it is a match for the previous simulation. It has captured the essential behaviors of
the PID-controlled system. However, the results of the two simulations do not match exactly, and
we should not expect them to, because of the differences in the internal representations of the
derivative feedback.

Okay, that's the idea. What I don't know is... how well does this work in practice?
Site:     Larry's Barely Operating Site -- http://home.earthlink.net/~ltrammell
Created:  Nov 24, 2002    Revised: Dec 15, 2010    Status: Experimental
Contact:  NOSPAM ltrammell At earthlink DOT net NOSPAM
Related:  (none)    Restrictions: This information is public
