Fundamental aspects of modern control theory are covered, including solutions to systems modeled in state-variable format, controllability, observability, pole placement, and linear transformation. Computer-based tools for control-system design are used. Prereq.: ECE4510 and Math 313 (or equivalent).
Instructor: Dr. Gregory Plett Office: EN–290 Phone: 262–3468 email: glp@eas.uccs.edu
Course web-page: http://mocha-java.uccs.edu/ECE4520/
Text: C-T. Chen, Linear System Theory and Design, third edition, Oxford University Press, 1999.
Reference: J.S. Bay, Fundamentals of Linear State Space Systems, WCB/McGraw-Hill, Boston: 1999.
Optional Software: The Matlab Student Version, with the Control Systems Toolbox (the full Windows version is running in the ECE Multimedia lab). The book that comes with the student software applies to all versions of Matlab.
Grading: 90-100 A
80-89 B
70-79 C
60-69 D
0-59 F
Homework Policy #1: Homework will be collected at the beginning of class on the assigned date. Homework turned in after the class period will be penalized 10%. Homework turned in after the due date will be penalized an additional 25% per day unless previous arrangements have been made with the instructor. Examinations will be based on the homework problems and the material covered in class. It is to your advantage to understand the fundamental concepts that are demonstrated in the homework problems. It will be difficult to earn higher than a "C" without performing well on the homework assignments.
Homework Policy #2: Your homework is expected to be a bona-fide individual effort. Copying homework from
another student is CHEATING and will not be tolerated. You may (and are encouraged to) discuss homework problems
with other students, but only to the extent that you would discuss them with the instructor. Don’t ask another student a
question that you would not expect the instructor to answer. Most of us know when we are compromising our integrity.
If you are in doubt, ask first.
Homework Policy #3: Part of your engineering education involves learning how to communicate technical information to others. Basic standards of neatness and clarity are essential to this process of communication. Your process of
solving a problem must be presented in a logical sequence. Consider your assignments to represent your performance
as an engineer. Do not submit scrap paper, and do not submit paper containing scratched out notes. Graphs are to be
titled and axes are to be labeled (with correct units). The above standards of clarity and neatness also apply to your work
on exams.
Attendance: Attendance is your responsibility. Class lectures will cover a significant amount of material. Some
will not be in the text or may be explained differently. It is to your advantage to take notes, ask questions, and to fully
participate in the classroom experience.
Missed Exams: Missed exams will count as ZERO without a physician's documentation of an illness, or other appropriate documentation of an emergency beyond your control and requiring your absence.
Homework Format Rules: Points will be deducted for failure to comply with the following rules:
The Course Reader: These notes have been entered using LyX, and typeset with LaTeX2ε on a Pentium-II class computer running the Linux operating system. All diagrams have been created using either xfig or Matlab.
Some sections of these notes have been adapted from lectures given by Drs. Theresa Meng, Jonathan How and Stephen
Boyd at Stanford University.
ECE4520/5520: Multivariable Control Systems I. 1–1
• Useful when analyzing systems comprised of a number of sub-units.
[Block-diagram algebra figures:]
– Single block: Y(s) = H(s)U(s).
– Series (cascade): Y(s) = [H1(s)H2(s)]U(s).
– Parallel: Y(s) = [H1(s) + H2(s)]U(s).
– Feedback:
   Y(s) = H1(s)/(1 + H2(s)H1(s)) · R(s).
– An equivalent redrawing of the feedback loop as a "unity feedback" system.
[Figures: first-order responses versus time (sec × τ): impulse response h(t) = e^{−σt}, and response to an initial condition; both decay to e^{−1} ≈ 0.37 of their initial value at t = τ. The dc gain is marked on the step response.]
• If a system has complex-conjugate poles, each pair may be written as:
   H(s) = ωn² / (s² + 2ζωn s + ωn²).
We can extract two more parameters from this equation:
   σ = ζωn   and   ωd = ωn √(1 − ζ²).
• σ plays the same role as above: it specifies the decay rate of the response.
• ωd is the oscillation frequency of the output. Note: ωd ≠ ωn unless ζ = 0.
• ζ is the "damping ratio" and it also plays a role in decay rate and overshoot.
[Figure: s-plane pole locations at −σ ± jωd, a distance ωn from the origin, at angle θ = sin⁻¹(ζ) from the imaginary axis.]
• Impulse response h(t) = (ωn/√(1 − ζ²)) e^{−σt} sin(ωd t) 1(t).
• Step response y(t) = 1 − e^{−σt} (cos(ωd t) + (σ/ωd) sin(ωd t)).
[Figures: impulse and step responses of 2nd-order systems versus ωn t, for damping ratios ζ = 0.1, 0.2, …, 1.0.]
[Figure: step-response specifications: rise time tr, settling time ts, and peak overshoot Mp (%).]
• Settling time ts = time until the transients have decayed to within a small band (e.g., 1%) of the final value.
Basic Feedback Properties
Root Locus
• Drawing the root locus allows us to select K for good pole locations. Intuition into the root locus helps us design D(s) with lead/lag/PI/PID… controllers.
EXAMPLE: A13 = −2.4, A31 = −0.2. The row index of the bottom row is 3, the column index of the first column is 1.
EXAMPLE: v (a vector whose third entry is −1) is a 3-vector (or 3 × 1 matrix); its third component is v3 = −1.
Notational Conventions
• Some authors try to use notation that helps the reader distinguish
between matrices, vectors and scalars.
Zero Matrices
• The zero matrix (of size m × n) has all entries equal to zero.
• Sometimes written as 0m×n where subscript denotes size.
• Often just written as 0, the same symbol used to denote the number 0.
• You need to figure out the size of the zero matrix from the context.
• Zero matrices of different sizes are different matrices, even though we
use the same symbol (i.e., 0).
• In programming, this is called overloading; we say that the symbol 0 is
overloaded because it can mean different things depending on its
context (i.e., the equation it appears in).
• When a zero matrix is a (row or column) vector, we call it a zero (row
or column) vector.
Matrix Addition
• Two matrices of the same size can be added together to form another
matrix (of the same size) by adding the corresponding entries.
• Matrix addition is denoted by the symbol +. (Thus the symbol + is
overloaded to mean scalar addition when scalars appear on its left-
and right-hand side, and matrix addition when matrices appear on its
left- and right-hand sides.)
Lecture notes prepared by Dr. Gregory L. Plett. Copyright © 2001, 2000, Gregory L. Plett.
ECE4520/5520, Linear Algebra (Matrix) Review 2–6
EXAMPLE:
   [ 1 2 ]   [ 5 6 ]   [  6  8 ]
   [ 3 4 ] + [ 7 8 ] = [ 10 12 ].
• Note that (row or column) vectors of the same size can be added, but
you cannot add together a row vector and a column vector (except if
they are both scalars!).
• Matrix subtraction is similar:
EXAMPLE:
   [ 1 2 ]       [ 0 2 ]
   [ 3 4 ] − I = [ 3 3 ].
• Note that this gives an example where we have to figure out what size
the identity matrix is. Since you can only add (or subtract) matrices of
the same size, we conclude that I must refer to a 2 × 2 identity matrix.
• Matrix addition is commutative; i.e., if A and B are matrices of the
same size, then A + B = B + A.
• It is also associative; i.e., (A + B) + C = A + (B + C), so we write
both as A + B + C.
• We always have A + 0 = 0 + A = A; i.e., adding the zero matrix has
no effect.
Scalar Multiplication
• A matrix may be multiplied by a scalar, entry by entry; e.g.,
        [ 3 ]   [  −6 ]
   −2 · [ 6 ] = [ −12 ].
• Sometimes you see scalar multiplication with the scalar on the right, or even scalar division with the scalar shown in the denominator (which just means scalar multiplication by one over the scalar), as in
   [ 1 4 ]       [ 2  8 ]           [ 1 3 5 ]       [ 0.5 1.5 2.5 ]
   [ 2 5 ] · 2 = [ 4 10 ],   and    [ 2 4 6 ] / 2 = [  1   2   3  ],
   [ 3 6 ]       [ 6 12 ]
but these look ugly.
• Scalar multiplication obeys several laws you can determine for
yourself. e.g., if A is any matrix and α, β are any scalars, then
(α + β)A = α A + β A.
• It is useful to identify the symbols above. The + sign on the left is the
addition of scalars. The + sign on the right denotes matrix addition.
• Another simple property is (αβ)A = (α)(β A), where α and β are
scalars and A is a matrix. On the left side we have scalar-scalar
multiplication (αβ) and scalar-matrix multiplication; on the right side
we see two cases of scalar-matrix multiplication.
• Note that 0 · A = 0 (where the left-hand zero is the scalar zero, and
the right-hand zero is a matrix zero of the same size as A).
Matrix Multiplication
• Matrix multiplication is not commutative: in general, AB ≠ BA. In fact, BA may not even make sense (due to dimensions), and even if it does make sense, it may have different dimensions than AB, so that equality in AB = BA is meaningless.
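A small numeric illustration of the point above (the example matrices are mine, not from the notes): multiplying two square matrices in the two possible orders gives different results.

```python
# Matrix multiplication is not commutative: AB and BA differ in general,
# and may not even have the same dimensions for non-square factors.

def matmul(X, Y):
    """Multiply matrices given as lists of rows."""
    assert len(X[0]) == len(Y), "inner dimensions must agree"
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]

print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]] -- different from AB
```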
Matrix-Vector Product
Matrix Powers
Matrix Inverse
Useful Identities
EXAMPLE:
       [ 1 2 ]        [ 3 ]
   A = [ 0 2 ],   B = [ 1 ],   C = [ 1 0 ],   D = [ 0 ]
   [ A B ]   [ 1 2 3 ]
   [ C D ] = [ 0 2 1 ]
             [ 1 0 0 ].
• Block matrices may be added and multiplied as if the entries were
numbers, provided the corresponding entries have the right size and
you are careful about the order of multiplication.
" #" # " #
A B X AX + BY
= ,
C D Y C X + DY
provided the products AX, BY, C X and DY make sense.
Linear Functions
Linear Equations
• Define γi,j = (−1)^{i+j} det(Mi,j), where Mi,j is the matrix A with the ith row and jth column removed. Then,
   adj(A) = [γi,j]ᵀ.
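The cofactor construction above can be checked numerically against the classical identity A · adj(A) = det(A) · I. A minimal sketch (the 3 × 3 example matrix and helper names are mine):

```python
# Build adj(A) from cofactors gamma_{i,j} = (-1)^(i+j) det(M_{i,j}),
# then verify A * adj(A) = det(A) * I for a 3x3 example.

def minor(M, i, j):
    """Matrix M with row i and column j removed."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(n))

def adj(M):
    """Adjugate: transpose of the cofactor matrix."""
    n = len(M)
    cof = [[(-1) ** (i + j) * det(minor(M, i, j)) for j in range(n)]
           for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]  # transpose

A = [[1, 2, 0],
     [0, 1, 3],
     [4, 0, 1]]
d = det(A)
AadjA = [[sum(A[i][k] * adj(A)[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
print(d)      # 25
print(AadjA)  # 25 times the identity
```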
Null-Space
• The nullspace of A ∈ R^{m×n} is defined as
   N(A) = { x ∈ R^n : Ax = 0 }.
• N(A) is the set of vectors mapped to zero by y = Ax.
• N(A) is the set of vectors orthogonal to all rows of A.
• N(A) gives ambiguity in x given y = Ax.
• If y = Ax and z ∈ N(A) then y = A(x + z).
Range-Space
• The rangespace of A ∈ R^{m×n} is defined as
   R(A) = { Ax : x ∈ R^n } ⊆ R^m.
• R(A) is the set of vectors that can be generated by y = Ax.
• R(A) is the span of the columns of A.
(nontrivial) facts:
• rank(A) = rank(Aᵀ).
• rank(A) is the maximum number of independent columns (or rows) of A. Hence, rank(A) ≤ min(m, n).
• rank(A) + dim(N(A)) = n.
Interpreting y = Ax
• Ai j is the gain factor from the jth input (x j ) to the ith output (yi ).
• Thus, write A in terms of its columns, A = [a1 a2 ··· an], where aj ∈ R^m.
• Then, y = Ax can be written as
   y = x1a1 + x2a2 + ··· + xnan.
[Figure: Ax = (1)a1 + (−0.5)a2 for x = (1, −0.5) and a2 = (1.5, 1.5).]
[Figure: x shown relative to the level sets ã1ᵀx = 10, ã2ᵀx = −2, ⟨ã, x⟩ = 1.]
(will show this later on). V is a collection of all the eigenvectors put into a matrix.
• V⁻¹ decomposes x into the "eigenvector coordinates". Λ is a diagonal matrix multiplying each component of the resulting vector by the eigenvalue associated with that component, and V puts everything back together.
v = eigenvector.
• Repeat to find all eigenvectors. Assume that v1, v2, …, vn are linearly independent.
   Avi = λivi,   i = 1, 2, …, n
• AT = TΛ ➠ T⁻¹AT = Λ.
• Not all matrices are diagonalizable:
   A = [ 0 1 ]      det(λI − A) = λ².
       [ 0 0 ],
• One eigenvalue λ = 0. Solve for the eigenvectors:
   [ 0 1 ] [ va ]                                       [ va ]
   [ 0 0 ] [ vb ] = 0   ➠   all vectors of the form    [  0 ] ≠ 0.
The Jordan Form
EXAMPLE: Consider
       [ 3 −1  1  1  0  0 ]
       [ 1  1 −1 −1  0  0 ]
       [ 0  0  2  0  1  1 ]
   A = [ 0  0  0  2 −1 −1 ]
       [ 0  0  0  0  1  1 ]
       [ 0  0  0  0  1  1 ].
• From Matlab, we find that A has eigenvalue 2 with multiplicity 5 and eigenvalue 0 with multiplicity 1:
   det(λI − A) = (λ − 2)⁵ λ.
• rank(2I − A) = 4, so there are 6 − 4 = 2 Jordan blocks with eigenvalue 2.
• We can check this in Matlab: jordan(A) gives
       [ 0 0 0 0 0 0 ]
       [ 0 2 1 0 0 0 ]
   J = [ 0 0 2 1 0 0 ]
       [ 0 0 0 2 0 0 ]
       [ 0 0 0 0 2 1 ]
       [ 0 0 0 0 0 2 ].
• Note that without further information (computation) the following form might also be the Jordan form for A (but it isn't):
       [ 0 0 0 0 0 0 ]
       [ 0 2 0 0 0 0 ]
   J = [ 0 0 2 1 0 0 ]
       [ 0 0 0 2 1 0 ]
       [ 0 0 0 0 2 1 ]
       [ 0 0 0 0 0 2 ].
Cayley-Hamilton Theorem
• The square matrix A satisfies its own characteristic equation. That is,
if
χ(λ) = det(λI − A) = 0
then
χ(A) = 0.
because λi is an eigenvalue of A.
• So each element on the diagonal is zero, and the proof is complete.
• If A is not diagonalizable, the same proof may be repeated using the Jordan form and Jordan blocks:
   A = TJT⁻¹.
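The Cayley–Hamilton theorem is easy to verify numerically for a 2 × 2 matrix (example matrix mine), where χ(λ) = λ² − tr(A)λ + det(A), so χ(A) = A² − tr(A)A + det(A)I must be the zero matrix.

```python
# Verify the Cayley-Hamilton theorem for a 2x2 example:
# A^2 - tr(A) A + det(A) I = 0.

A = [[1, 2],
     [3, 4]]
tr = A[0][0] + A[1][1]                      # trace = 5
dt = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # determinant = -2

A2 = [[sum(A[i][k] * A[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]                    # A^2
I = [[1, 0], [0, 1]]

chiA = [[A2[i][j] - tr * A[i][j] + dt * I[i][j] for j in range(2)]
        for i in range(2)]
print(chiA)  # [[0, 0], [0, 0]]
```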
so
   ([A ⊗ I] + [I ⊗ Bᵀ]) vec(X) = vec(C).
[Figure: mass–spring–damper system: mass m, spring constant k, damping b, applied force f(t), position y(t).]
With state x(t) = [y(t); ẏ(t)],
   A = [   0     1   ]       B = [  0  ]
       [ −k/m  −b/m  ],          [ 1/m ].
• Laplace transform
s X (s) − x(0) = AX (s) + BU (s)
Y (s) = C X (s) + DU (s),
or
(s I − A)X (s) = BU (s) + x(0)
X (s) = (s I − A)−1 BU (s) + (s I − A)−1 x(0)
and
   Y(s) = [C(sI − A)⁻¹B + D] U(s) + C(sI − A)⁻¹ x(0),
where the first term is the transfer function of the system and the second term is the response to initial conditions.
• So,
   Y(s)/U(s) = C(sI − A)⁻¹B + D,
but
   (sI − A)⁻¹ = adj(sI − A) / det(sI − A).
• Slightly easier to compute (for SISO systems):
   Y(s)/U(s) = C(sI − A)⁻¹B + D = det([ sI − A  B ; −C  D ]) / det(sI − A).
We will develop this result at the end of this section of notes.
   G(s) = det([ s, −1, 0 ; k/m, s + b/m, 1/m ; −1, 0, 0 ]) / det([ s, −1 ; k/m, s + b/m ])
        = (1/m) / (s² + (b/m)s + (k/m))
        = 1 / (ms² + bs + k).
• Same result.
• Example shows that the characteristic equation for the system is
χ(s) = det(s I − A) = 0.
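The determinant formula can be spot-checked numerically against C(sI − A)⁻¹B for the mass–spring–damper realization; the parameter values and test frequency below are illustrative choices of mine.

```python
# Spot-check of the SISO determinant formula for the transfer function,
# using A = [[0,1],[-k/m,-b/m]], B = [0, 1/m], C = [1, 0], D = 0.

m, b, k = 1.0, 3.0, 2.0
A = [[0.0, 1.0], [-k / m, -b / m]]
B = [0.0, 1.0 / m]
C = [1.0, 0.0]
s = 1.0 + 1.0j  # arbitrary complex test frequency

# G1 = C (sI - A)^{-1} B via an explicit 2x2 inverse
M = [[s - A[0][0], -A[0][1]], [-A[1][0], s - A[1][1]]]
dM = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[M[1][1] / dM, -M[0][1] / dM], [-M[1][0] / dM, M[0][0] / dM]]
G1 = sum(C[i] * sum(Minv[i][j] * B[j] for j in range(2)) for i in range(2))

# G2 = det([sI-A, B; -C, D]) / det(sI-A) via a 3x3 determinant
T = [[M[0][0], M[0][1], B[0]],
     [M[1][0], M[1][1], B[1]],
     [-C[0], -C[1], 0.0]]
d3 = (T[0][0] * (T[1][1] * T[2][2] - T[1][2] * T[2][1])
      - T[0][1] * (T[1][0] * T[2][2] - T[1][2] * T[2][0])
      + T[0][2] * (T[1][0] * T[2][1] - T[1][1] * T[2][0]))
G2 = d3 / dM

G3 = 1.0 / (m * s**2 + b * s + k)  # the closed form 1/(ms^2 + bs + k)
print(abs(G1 - G2) < 1e-12 and abs(G1 - G3) < 1e-12)  # True
```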
“State Space” block from the “Continuous” library, or we can make our
own. The following method has advantages because it gives us
explicit access to the state and other internal signals. It is a direct
implementation of the transfer function above, and the initial state
may be set by setting the initial integrator values.
[Simulink diagram: u → (matrix gain B) → summing junction → integrator 1/s (input xdot, output x) → (matrix gain C) → summing junction → y, with a feedback path through matrix gain A and a feedthrough path through matrix gain D. Note: all (square) gain blocks K are MATRIX GAIN blocks from the Math Library.]
c 2001, 2000, Gregory L. Plett
Lecture notes prepared by Dr. Gregory L. Plett. Copyright
ECE4520/5520, STATE-SPACE DYNAMIC SYSTEMS—CONTINUOUS-TIME 3–7
   y(t) = [ 0 0 1 ] x(t) + [ 0 ] u(t).
– Note the special form of A ("top companion matrix").
• The companion-form state equation in v(t) represents the dynamics of v(t). All that remains is to couple in the zeros of the system.
   Y(s) = [b1s² + b2s + b3] V(s).
[Block diagrams: controller canonical form (integrator states x1c, x2c, x3c, feedback gains −a1, −a2, −a3, output tap gains b1, b2, b3) and observer canonical form (integrator states x1o, x2o, x3o).]
• Or,
            [ −a1  1  0 ]        [ b1 ]
   ẋ(t) =  [ −a2  0  1 ] x(t) + [ b2 ] u(t)
            [ −a3  0  0 ]        [ b3 ]
   y(t) = [ 1 0 0 ] x(t).
[Block diagram: a third canonical form with integrator states x1co, x2co, x3co, input u(t), output y(t), and feed-in gains β1, β2, β3.]
   x3 = (1/s)(x2 − a1x3)        X3(s) = U(s) / (s³ + a1s² + a2s + a3)
   x2 = (1/s)(x1 − a2x3)   ➠   X2(s) = (s + a1) U(s) / (s³ + a1s² + a2s + a3)
   x1 = (1/s)(u − a3x3)         X1(s) = (s² + a1s + a2) U(s) / (s³ + a1s² + a2s + a3).
• Thus,
   Y(s) = [β3 + β2(s + a1) + β1(s² + a1s + a2)] / (s³ + a1s² + a2s + a3) · U(s).
• In order to get the correct transfer function, we must compute the {βi} values to get the desired numerator:
   [ 1   0   0 ] [ β1 ]   [ b1 ]
   [ a1  1   0 ] [ β2 ] = [ b2 ]
   [ a2  a1  1 ] [ β3 ]   [ b3 ]
   [ β1 ]   [ 1   0   0 ]⁻¹ [ b1 ]   [ b1                       ]
   [ β2 ] = [ a1  1   0 ]   [ b2 ] = [ b2 − a1b1                ]
   [ β3 ]   [ a2  a1  1 ]   [ b3 ]   [ b3 − a1b2 − a2b1 + a1²b1 ].
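The closed-form β values can be checked by expanding β3 + β2(s + a1) + β1(s² + a1s + a2) and confirming the coefficients come back as b1, b2, b3 (the numeric coefficients below are arbitrary choices of mine):

```python
# Verify that the betas from the triangular system reproduce the desired
# numerator b1 s^2 + b2 s + b3.

a1, a2, a3 = 2.0, 3.0, 4.0
b1, b2, b3 = 5.0, 6.0, 7.0

beta1 = b1
beta2 = b2 - a1 * b1
beta3 = b3 - a1 * b2 - a2 * b1 + a1**2 * b1

# collect coefficients of s^2, s^1, s^0 in beta3 + beta2(s+a1) + beta1(s^2+a1 s+a2)
c2 = beta1
c1 = beta2 + beta1 * a1
c0 = beta3 + beta2 * a1 + beta1 * a2
print(c2, c1, c0)  # 5.0 6.0 7.0
```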
• Or,
            [ 0  0  −a3 ]        [ 1 ]
   ẋ(t) =  [ 1  0  −a2 ] x(t) + [ 0 ] u(t)
            [ 0  1  −a1 ]        [ 0 ]
   y(t) = [ β1 β2 β3 ] x(t).
[Block diagram of this realization: input u(t), output gains β3, β2, β1, feedback gains −a1, −a2, −a3.]
Transformations
OBSERVATION: Consider
   H(s) = (b1s² + b2s + b3) / (s³ + a1s² + a2s + a3).
• Only six parameters in the transfer function. But, A has 3 × 3 = 9, B has 3 × 1 = 3, C has 1 × 3 = 3: a total of 15 parameters.
• Appears that we have 9 extra degrees of freedom in the state-space model. Contradiction? No:
   9 = size of a 3 × 3 invertible transformation matrix T.
• Let
   T = [  2  −1 ]
       [ −1   1 ].
Notice that det(T) = 1 so T is invertible. Let xc = T x̄ where x̄ is a new state.
• Then,
   x̄̇(t) = (T⁻¹AT) x̄(t) + (T⁻¹B) u(t)
   y(t) = (CT) x̄(t).
Plugging in A, B, C and T:
   x̄̇(t) = [ −2   0 ] x̄(t) + [ 1 ] u(t)
           [  0  −1 ]         [ 1 ]
   y(t) = [ 1 1 ] x̄(t).
[Figure: amplitude vs. time (0 to 6 sec) of the responses of the two realizations to the same initial condition.]
• The systems have the same transfer function, but different responses
to initial states since the states have different interpretations.
EXAMPLE:
   ẋ = Ax,   A = [  0   1 ]
                  [ −2  −3 ]
   (sI − A)⁻¹ = [ s    −1  ]⁻¹ = [ s+3   1 ] · 1/((s + 2)(s + 1))
                [ 2   s + 3 ]    [ −2    s ]
              = [  2/(s+1) − 1/(s+2)      1/(s+1) − 1/(s+2)  ]
                [ −2/(s+1) + 2/(s+2)    −1/(s+1) + 2/(s+2)  ].
• Have seen the key role of e At in the solution for x(t). Impacts the
system response, but need more insight.
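The partial-fraction entries above imply e^{At} in closed form (first entry 2e^{−t} − e^{−2t}, and so on). As a sanity check, the closed form can be compared against a truncated power series for the matrix exponential:

```python
# Compare closed-form e^{At} for A = [[0,1],[-2,-3]] (from the inverse
# Laplace transform) against the truncated series sum A^k t^k / k!.

import math

A = [[0.0, 1.0], [-2.0, -3.0]]
t = 0.7

E = [[1.0, 0.0], [0.0, 1.0]]   # running series sum, starts at I
P = [[1.0, 0.0], [0.0, 1.0]]   # running term A^k t^k / k!
for kk in range(1, 30):
    P = [[sum(P[i][m] * A[m][j] for m in range(2)) * t / kk for j in range(2)]
         for i in range(2)]
    E = [[E[i][j] + P[i][j] for j in range(2)] for i in range(2)]

e1, e2 = math.exp(-t), math.exp(-2 * t)
F = [[2 * e1 - e2, e1 - e2],
     [-2 * e1 + 2 * e2, -e1 + 2 * e2]]

ok = all(abs(E[i][j] - F[i][j]) < 1e-9 for i in range(2) for j in range(2))
print(ok)  # True
```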
Dynamic Interpretation
• Write T⁻¹AT = Λ as T⁻¹A = ΛT⁻¹ with
          [ w1ᵀ ]
   T⁻¹ = [ w2ᵀ ]
          [  ⋮  ]
          [ wnᵀ ],
i.e., the wiᵀ are the rows of T⁻¹.
[Figures: amplitude vs. time plots of the individual modal responses of an example system.]
– Assume A is diagonalizable by T .
– Define new coordinates by x(t) = T x̃(t) so
˙ = T −1 Ax(t) = T −1 AT x̃(t) = 3x̃(t).
x̃(t)
– In new coordinate system, system is diagonal (decoupled).
• Trajectories consist of n independent modes; that is,
   x̃i(t) = e^{λi t} x̃i(0).
[Block diagram: modal simulation structure with integrators 1/s and output gains c1, c2, ….]
• Thus, Jordan blocks yield repeated poles and terms of the form t p eλt
in e At .
• Consider
   G(s) = C(sI − A)⁻¹B + D = C[adj(sI − A)]B / det(sI − A) + D.
• Now,
det(s I − A) = s n + α1s n−1 + α2s n−2 + · · · + αn
and
C adj(s I − A)B = N (s) = [N1s n−1 + N2s n−2 + · · · + Nn ].
• Recall
   G(s) = det([ sI − A  B ; −C  D ]) / det(sI − A).
Ahah! (The −u 0 before gave us the correct sign in G(s)).
• In the MIMO case, with n state variables, p inputs and q outputs, a transmission zero is any value zi for which
   rank [ ziI − A   B ]  <  n + min{p, q}.
        [   −C      D ]
The z-Transform
[Figure: the z-plane; the s-plane strip with |Im(s)| ≤ π/T maps into the z-plane under z = e^{sT}.]
• The subscript “d” is used here to emphasize that, in general, the “A”,
“B”, “C” and “D” matrices are DIFFERENT for discrete-time and
continuous-time systems, even if the underlying plant is the same.
• I will usually drop the “d” and expect you to interpret the system from
its context.
   Gp(z) = 1 / (z³ + a1z² + a2z + a3) = V(z)/U(z)
   ➠ v[k + 3] + a1v[k + 2] + a2v[k + 1] + a3v[k] = u[k].
Then, with state x[k] = [ v[k + 2]; v[k + 1]; v[k] ],
              [ −a1  −a2  −a3 ] [ v[k + 2] ]   [ 1 ]
   x[k + 1] = [  1    0    0  ] [ v[k + 1] ] + [ 0 ] u[k].
              [  0    1    0  ] [  v[k]    ]   [ 0 ]
• We now add zeros.
   G(z) = (b1z² + b2z + b3) / (z³ + a1z² + a2z + a3) = Y(z)/U(z).
Break up the transfer function into two parts. V(z)/U(z) contains all of the poles of Y(z)/U(z). Then,
   Y(z) = [b1z² + b2z + b3] V(z).
Or,
   y[k] = b1v[k + 2] + b2v[k + 1] + b3v[k].
Then
              [ −a1  −a2  −a3 ]        [ 1 ]
   x[k + 1] = [  1    0    0  ] x[k] + [ 0 ] u[k]
              [  0    1    0  ]        [ 0 ]
   y[k] = [ b1 b2 b3 ] x[k] + [ 0 ] u[k].
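The equivalence of this state-space realization and the scalar difference equation it came from can be checked by simulating both (the numeric coefficients and input below are arbitrary choices of mine):

```python
# Simulate the discrete controller-form realization and the underlying
# difference equation; their output sequences must match exactly.

a1, a2, a3 = 0.5, 0.25, 0.125
b1, b2, b3 = 1.0, 2.0, 3.0
N = 20
u = [1.0] + [0.0] * (N - 1)   # unit pulse input

# direct simulation of v[k+3] + a1 v[k+2] + a2 v[k+1] + a3 v[k] = u[k]
v = [0.0, 0.0, 0.0]
for kk in range(N):
    v.append(-a1 * v[kk + 2] - a2 * v[kk + 1] - a3 * v[kk] + u[kk])
y_direct = [b1 * v[kk + 2] + b2 * v[kk + 1] + b3 * v[kk] for kk in range(N)]

# state-space simulation with x[k] = (v[k+2], v[k+1], v[k])
x = [0.0, 0.0, 0.0]
y_ss = []
for kk in range(N):
    y_ss.append(b1 * x[0] + b2 * x[1] + b3 * x[2])
    x = [-a1 * x[0] - a2 * x[1] - a3 * x[2] + u[kk], x[0], x[1]]

print(max(abs(p - q) for p, q in zip(y_direct, y_ss)))  # 0.0
```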
Canonical Forms
[Block diagram: discrete-time controller canonical form: unit delays z⁻¹ with states x1c, x2c, x3c, feedback gains −a1, −a2, −a3, and output gains b1, b2, b3.]
• z-transform:
   zX(z) − zx[0] = AX(z) + BU(z)
   Y(z) = CX(z) + DU(z)
or
   (zI − A)X(z) = BU(z) + zx[0]
   X(z) = (zI − A)⁻¹BU(z) + (zI − A)⁻¹zx[0]
and
   Y(z) = [C(zI − A)⁻¹B + D] U(z) + C(zI − A)⁻¹ z x[0],
where the first term is the transfer function of the system and the second term is the response to initial conditions.
• So,
Y (z)
= C(z I − A)−1 B + D
U (z)
• Same form as for continuous-time systems.
• Poles of system are roots of det[z I − A] = 0.
ECE4520/5520, STATE-SPACE DYNAMIC SYSTEMS—DISCRETE-TIME 4–7
Transformation
Homogeneous Part
• First, consider the scalar case
x[k + 1] = ax[k], x[0].
Forced Solution
• The full solution is:
   x[k] = Aᵏ x[0] + Σ_{j=0}^{k−1} A^{k−1−j} B u[j],
where the sum is a convolution.
• This can be proved by induction from the equation
x[k + 1] = Ax[k] + Bu[k], x[0]
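Rather than a formal induction, the closed-form solution can be checked numerically against the recursion for a small example (matrix, input and initial state below are mine):

```python
# Check that x[k] = A^k x[0] + sum_{j=0}^{k-1} A^{k-1-j} B u[j]
# agrees with iterating x[k+1] = A x[k] + B u[k].

A = [[0.9, 0.1], [0.0, 0.8]]
B = [1.0, 0.5]
x0 = [1.0, -1.0]
u = [1.0, -2.0, 0.5, 0.0, 3.0]

def mv(M, v):          # matrix-vector product
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def mpow_v(M, p, v):   # M^p v by repeated multiplication
    for _ in range(p):
        v = mv(M, v)
    return v

k = len(u)

# recursion
x = x0[:]
for kk in range(k):
    Ax = mv(A, x)
    x = [Ax[i] + B[i] * u[kk] for i in range(2)]

# closed form
xc = mpow_v(A, k, x0)
for j in range(k):
    term = mpow_v(A, k - 1 - j, B)
    xc = [xc[i] + term[i] * u[j] for i in range(2)]

print(max(abs(x[i] - xc[i]) for i in range(2)) < 1e-12)  # True
```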
• With malice aforethought, break up the integral into two pieces. The first piece will become Ad times x(kT). The second part will become Bd times u(kT).
   = ∫₀^{kT} e^{A((k+1)T−τ)} Bu(τ) dτ + ∫_{kT}^{(k+1)T} e^{A((k+1)T−τ)} Bu(τ) dτ
   = e^{AT} ∫₀^{kT} e^{A(kT−τ)} Bu(τ) dτ + ∫_{kT}^{(k+1)T} e^{A((k+1)T−τ)} Bu(τ) dτ
   = e^{AT} x(kT) + ∫_{kT}^{(k+1)T} e^{A((k+1)T−τ)} Bu(τ) dτ.
• Similarly,
y[k] = C x[k] + Du[k].
That is, C d = C; Dd = D.
Calculating Ad , Bd , Cd and Dd
Ad=expm(A*T)
• If Matlab is not handy, then you need to work a little harder. Recall from the previous set of notes that e^{At} = L⁻¹[(sI − A)⁻¹]. So,
   e^{AT} = L⁻¹[(sI − A)⁻¹] |_{t=T},
   Bd = A⁻¹(e^{AT} − I)B = A⁻¹(Ad − I)B.
In Matlab:
[Ad,Bd]=c2d(A,B,T)
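A scalar sanity check of Ad = e^{AT}, Bd = A⁻¹(Ad − I)B (the numeric values are mine): for ẋ = ax + bu with u held constant over one sample period T, the exact update is x((k+1)T) = e^{aT} x(kT) + (b/a)(e^{aT} − 1) u(kT), which a fine Euler integration of the ODE should reproduce.

```python
# Scalar zero-order-hold discretization check: compare Ad*x + Bd*u against
# a finely-stepped Euler integration of xdot = a x + b u over one period.

import math

a, bb, T = -2.0, 3.0, 0.1
Ad = math.exp(a * T)
Bd = (Ad - 1.0) / a * bb        # A^{-1}(Ad - I)B in the scalar case

x, u = 0.5, 1.0                 # initial state and held input
n = 100000
dt = T / n
for _ in range(n):
    x += (a * x + bb * u) * dt  # Euler step

print(abs(x - (Ad * 0.5 + Bd * u)) < 1e-4)  # True
```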
Overview
and we have initial conditions y(0), ẏ(0), ÿ(0), how do we find x(0)?
   y(0) = Cx(0) + Du(0)
   ẏ(0) = C ẋ(0) + D u̇(0) = C(Ax(0) + Bu(0)) + D u̇(0)
• In general,
   y^{(k)}(0) = CAᵏx(0) + CA^{k−1}Bu(0) + ··· + CBu^{(k−1)}(0) + Du^{(k)}(0),
or,
   [ y(0)  ]   [ C   ]        [ D    0   0 ] [ u(0)  ]
   [ ẏ(0) ] = [ CA  ] x(0) + [ CB   D   0 ] [ u̇(0) ],
   [ ÿ(0) ]   [ CA² ]        [ CAB  CB  D ] [ ü(0)  ]
where the first matrix on the right-hand side is the observability matrix O(C, A).
CONCLUSION: If the observability matrix O(C, A) is nonsingular, then we can determine/estimate the initial state of the system x(0) using only u(t) and y(t) (and therefore, we can estimate x(t) for all t ≥ 0).
EXAMPLE: Observability canonical form:
            [  0    1    0  ]        [ β1 ]
   ẋ(t) =  [  0    0    1  ] x(t) + [ β2 ] u(t),   y(t) = [ 1 0 0 ] x(t),
            [ −a3  −a2  −a1 ]        [ β3 ]
for which O(C, A) = [ C; CA; CA² ] = I3.
• This is why it is called observability form!
Observers
[Block diagram: a naive observer forms x̂ from u(t), y(t) and their derivatives (u̇, ü, ẏ, ÿ, blocks s and s²), using the plant model {A, B, C, D}.]
where
   [ g1 ]
   [ g2 ] = [ B  AB  ···  A^{n−1}B ]⁻¹ xd
   [ g3 ]
where xd is the desired x(0+) vector.
• If the controllability matrix C = [B AB ··· A^{n−1}B] is nonsingular, we say {A, B} is a controllable pair and the system is controllable.
EXAMPLE: Controllability canonical form:
            [ 0  0  −a3 ]        [ 1 ]
   ẋ(t) =  [ 1  0  −a2 ] x(t) + [ 0 ] u(t)
            [ 0  1  −a1 ]        [ 0 ]
   y(t) = [ β1 β2 β3 ] x(t).
• Then
   C = [ B  AB  ···  A^{n−1}B ] = In.
• This is why it is called controllability form!
• If a system is controllable, we can instantaneously move the state
from any known state to any other state, using impulse-like inputs.
• Later, we’ll see that smooth inputs can effect the state transfer (not
instantaneously, though!).
DUALITY: {A, B, C, D} controllable ⇐⇒ {Aᵀ, Cᵀ, Bᵀ, Dᵀ} is observable.
[Block diagram: modal (diagonal) realization: parallel first-order branches with integrators, eigenvalues λ1, …, λn, input gains γ1, …, γn, output gains δ1, …, δn, and state xn(t).]
• When controllable? When observable?
        [ C        ]   [    δ1          δ2       ···      δn       ]
   O =  [ CA       ] = [   λ1δ1        λ2δ2      ···     λnδn      ]
        [  ⋮       ]   [    ⋮                              ⋮       ]
        [ CA^{n−1} ]   [ λ1^{n−1}δ1  λ2^{n−1}δ2  ···  λn^{n−1}δn  ]
        [    1          1       ···      1       ] [ δ1         0 ]
      = [   λ1         λ2       ···     λn       ] [    δ2        ]
        [    ⋮                           ⋮       ] [       ⋱      ]
        [ λ1^{n−1}   λ2^{n−1}   ···  λn^{n−1}    ] [ 0        δn ],
where the first factor is a Vandermonde matrix.
• Singular?
   det{O} = (δ1 ··· δn) det{V} = (δ1 ··· δn) ∏_{i<j} (λj − λi).
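The Vandermonde determinant identity is easy to verify numerically for n = 3 (the eigenvalue values below are mine):

```python
# Check det(V) = prod_{i<j} (lambda_j - lambda_i) for a 3x3 Vandermonde matrix.

lam = [1.0, 2.0, 4.0]
V = [[lam[j] ** i for j in range(3)] for i in range(3)]  # rows: 1, lam, lam^2

detV = (V[0][0] * (V[1][1] * V[2][2] - V[1][2] * V[2][1])
        - V[0][1] * (V[1][0] * V[2][2] - V[1][2] * V[2][0])
        + V[0][2] * (V[1][0] * V[2][1] - V[1][1] * V[2][0]))

prod = 1.0
for i in range(3):
    for j in range(i + 1, 3):
        prod *= lam[j] - lam[i]

print(detV, prod)  # both 6.0: (2-1)(4-1)(4-2) = 6
```

The determinant vanishes exactly when two eigenvalues coincide, which is the modal-form controllability/observability condition above (along with nonzero gains δi).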
Discrete-Time Controllability
Discrete-Time Reachability
• However, if A is singular, (1) and (3) are equivalent but not (2) and (3).
EXAMPLE:
              [ 0 1 0 ]        [ 0 ]
   x[k + 1] = [ 0 0 1 ] x[k] + [ 0 ] u[k].
              [ 0 0 0 ]        [ 0 ]
• Its controllability matrix has rank 0 and the equation is not controllable
in (1) or (3).
• However, Ak = 0 for k ≥ 3 so x[3] = A 3 x[0] = 0 for any initial state
x[0] and any input u[k].
• Thus, the system is controllable to the origin but not controllable from
the origin or reachable.
• Definition (1) encompasses the other two definitions, so is used as
our definition of controllable.
Discrete-Time Observability
• So,
   x[0] = O⁻¹ ( [ y[0]; …; y[n−1] ] − T [ u[0]; …; u[n−1] ] ),
where O is the observability matrix and T is the (Toeplitz) matrix of input-feedthrough terms.
• If O is full-rank or nonsingular, x[0] may be reconstructed with any y[k], u[k]. We say that {C, A} form an "observable pair."
• Do more measurements of y[n], y[n + 1], … help in reconstructing x[0]? No! (Cayley–Hamilton theorem). So, if the original state is not "observable" with n measurements, then it will not be observable with more than n measurements either.
• Since we know u[k] and the dynamics of the system, if the system is observable we can determine the entire state sequence x[k], k ≥ 0, once we determine x[0]:
   x[n] = Aⁿ x[0] + Σ_{i=0}^{n−1} A^{n−1−i} B u[i]
        = Aⁿ O⁻¹ ( [ y[0]; …; y[n−1] ] − T [ u[0]; …; u[n−1] ] ) + [ A^{n−1}B  ···  B ] [ u[0]; …; u[n−1] ].
• A perfectly good observer (no differentiators…).
• Physical intuition can be better than finding O. Other tools can help…
SIGNIFICANCE: Consider
   x(t1) = e^{At1} x(0) + ∫₀^{t1} e^{A(t1−τ)} Bu(τ) dτ.
We claim that for any x(0) = x0 and any x(t1) = x1 the input
   u(t) = −Bᵀ e^{Aᵀ(t1−t)} Wc⁻¹(t1) (e^{At1}x0 − x1)
• If A is stable, Wc−1 > 0 which implies “we can’t get anywhere for free”.
• If A is unstable, then Wc−1 can have a nonzero nullspace Wc−1 z = 0 for
some z 6= 0 which means that we can get to z using u’s with energy
as small as you like! (u just gives a little kick to the state; the
instability carries it out to z efficiently).
ECE4520/5520, OBSERVABILITY AND CONTROLLABILITY 5–16
• If a system is observable,
   Wo(t) = ∫₀^t e^{Aᵀτ} Cᵀ C e^{Aτ} dτ
is nonsingular for t > 0.
SIGNIFICANCE: We can prove that
   x(0) = Wo⁻¹(t1) ∫₀^{t1} e^{Aᵀt} Cᵀ ȳ(t) dt
where
   ȳ(t) = y(t) − C ∫₀^t e^{A(t−τ)} Bu(τ) dτ − Du(t).
• If A is stable, then Wo−1 > 0 and we can’t estimate the initial state
perfectly even with an infinite number of measurements u(t) and y(t)
for t ≥ 0 (since memory of x(0) fades).
• If A is not stable then Wo−1 can have a nonzero nullspace Wo−1 x(0) = 0
which means that the covariance goes to zero as t → ∞.
• Wo may be a better indicator of observability than O.
is nonsingular. In particular,
   Wdc = Σ_{m=0}^{∞} Aᵐ B Bᵀ (Aᵀ)ᵐ
ASIDE : To show that the above is indeed a Lyapunov equation, (or may
be converted to a standard Lyapunov equation for solving it), let
Ac = (A + I )−1(A − I )
is nonsingular. In particular,
   Wdo = Σ_{m=0}^{∞} (Aᵀ)ᵐ Cᵀ C Aᵐ
• Given a system
ẋ(t) = Ax(t) + Bu(t)
y(t) = C x(t) + Du(t),
• Thus
   T⁻¹AT = Aco,   T⁻¹B = Bco,
   CT = Cco,      D = Dco.
• By induction, with t1 = B:
   At1 = t2 = AB,
   At2 = t3 = A²B, and so forth…
so
   T = [ B  AB  ···  A^{n−1}B ] = C.
where Cold and Cnew are the controllability matrices of the two different realizations:
   Cnew = [ Bnew  AnewBnew  ···  Anew^{n−1}Bnew ]
        = [ T⁻¹Bold  (T⁻¹AoldT)(T⁻¹Bold)  ···  (T⁻¹AoldT)^{n−1}(T⁻¹Bold) ]
        = T⁻¹ Cold,
or
   T = Cold Cnew⁻¹.
Canonical Decompositions
EXAMPLE: Consider
   1/(s + 1) = (s − 1)/((s + 1)(s − 1)) = (s − 1)/(s² − 1).
• In observer-canonical form,
   ẋ(t) = [ 0 1 ] x(t) + [  1 ] u(t)
           [ 1 0 ]        [ −1 ]
   y(t) = [ 1 0 ] x(t).
Or,
   y(t) = Σ_{i=1}^{n} e^{λi t} C vi (wiᵀ x(0)).
Summary
[Figure: Kalman decomposition of a realization into four subsystems: controllable but not observable, observable but not controllable, both, and neither observable nor controllable.]
• Cancel to get
   G(s) = C adj(sI − A) B / det(sI − A) + D = br(s)/ar(s)
where br(s) and ar(s) are coprime ("r" means "reduced").
• Because of the cancellation, k = deg(ar) < deg(det(sI − A)) = n.
• Consider controller canonical form realization of br (s)/ar (s), for
example.
• It has k states, but same transfer function as {A, B, C, D},
contradicting that {A, B, C, D} minimal.
[Block diagram: a realization split into a controllable part (Ac, Bc, Cc) and an uncontrollable part x2(t) with dynamics Ac̄, output coupling Cc̄, and cross-coupling A12.]
   Ā = [ Ac  A12 ]    B̄ = [ Bc ]    C̄ = [ Cc  Cc̄ ],   D̄ = [0].
       [ 0   Ac̄ ],        [ 0  ],
Same transfer function using Ac, Bc, Cc as Ā, B̄, C̄. Therefore uncontrollable and/or unobservable means not minimal.
IV ⇒ III: Non-minimal means uncontrollable or unobservable.
• CB/s + CAB/s² + CA²B/s³ + ··· = C̄B̄/s + C̄ĀB̄/s² + C̄Ā²B̄/s³ + ···
• Consider the product of the observability and controllability matrices:
         [ C        ]
   O C = [ CA       ] [ B  AB  A²B  ···  A^{n−1}B ]
         [  ⋮       ]
         [ CA^{n−1} ]
         [ CB          ···   CA^{n−1}B  ]   [ C̄B̄          ···   C̄Ā^{n−1}B̄  ]
       = [  ⋮                   ⋮       ] = [  ⋮                    ⋮       ]
         [ CA^{n−1}B   ···   CA^{2n−2}B ]   [ C̄Ā^{n−1}B̄   ···   C̄Ā^{2n−2}B̄ ],
and the block-triangular structure of the non-minimal realization forces blocks of zeros into the barred factors, so the product has rank at most r < n.
• Therefore det(O) det(C) = 0, so the system is either unobservable, or uncontrollable, or both.
• System dynamics
ẋ(t) = Ax(t) + Bu(t)
y(t) = C x(t) + Du(t).
[Block diagram: state-feedback control: reference r(t) and state feedback −Kx(t) form u(t), applied to the plant {A, B, C, D}.]
• So,
k1 − 3 = 11, or, k1 = 14
1 − 2k1 + k2 = 30, or, k2 = 57.
• K = [14 57].
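The gains can be double-checked numerically: with the example's A = [1 1; 1 2] and B = [1; 0], the closed-loop matrix A − BK for K = [14 57] must have characteristic polynomial s² + 11s + 30 (poles at −5 and −6).

```python
# Verify the pole-placement gains: char poly of a 2x2 is s^2 - tr*s + det.

A = [[1.0, 1.0], [1.0, 2.0]]
B = [1.0, 0.0]
K = [14.0, 57.0]

Acl = [[A[i][j] - B[i] * K[j] for j in range(2)] for i in range(2)]  # A - BK
tr = Acl[0][0] + Acl[1][1]
dt = Acl[0][0] * Acl[1][1] - Acl[0][1] * Acl[1][0]

print(-tr, dt)  # 11.0 30.0, i.e. s^2 + 11s + 30
```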
• So, with the n parameters in K, can we always relocate all n eig(ACL)? Consider
   u(t) = −Kx(t),
   ACL = A − BK = [ 1 − k1   1 − k2 ]
                  [   0        2    ]
   det(sI − ACL) = (s − 1 + k1)(s − 2).
[Block diagram: controller canonical form with output taps b1, b2, b3, integrator states x1c, x2c, x3c, plant feedback gains −a1, −a2, −a3, and state-feedback gains −k1, −k2, −k3 added at the input u(t).]
• So,
                                      [ 1  a1  ···  an−1 ]⁻¹
   K = [ (α1 − a1) ··· (αn − an) ]    [ 0   1  ···  an−2 ]     C⁻¹.
                                      [ ⋮        ⋱    ⋮  ]
                                      [ 0  ···   0    1  ]
• This is called the Bass–Gura formula for K.
Ackermann’s Formula
by Cayley–Hamilton.
• Also,
   χd(Ac) = Acⁿ + α1 Ac^{n−1} + ··· + αn I
          = (α1 − a1)Ac^{n−1} + ··· + (αn − an)I.
• Therefore,
   K = Kc T⁻¹
     = [ 0 ··· 1 ] χd(T⁻¹AT) T⁻¹
     = [ 0 ··· 1 ] T⁻¹ χd(A)
     = [ 0 ··· 1 ] Cc C⁻¹ χd(A)
     = [ 0 ··· 1 ] C⁻¹ χd(A).
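Ackermann's formula K = [0 ··· 1] C⁻¹ χd(A) can be checked on the earlier 2 × 2 example (A = [1 1; 1 2], B = [1; 0], desired χd(s) = s² + 11s + 30); it should reproduce K = [14 57].

```python
# Ackermann's formula on the 2x2 example: K = [0 1] Ctrb^{-1} chi_d(A).

A = [[1.0, 1.0], [1.0, 2.0]]
B = [1.0, 0.0]

# chi_d(A) = A^2 + 11 A + 30 I
A2 = [[sum(A[i][m] * A[m][j] for m in range(2)) for j in range(2)]
      for i in range(2)]
chidA = [[A2[i][j] + 11 * A[i][j] + 30 * (i == j) for j in range(2)]
         for i in range(2)]

# controllability matrix [B AB] and its inverse
AB = [sum(A[i][j] * B[j] for j in range(2)) for i in range(2)]
Ct = [[B[0], AB[0]], [B[1], AB[1]]]
d = Ct[0][0] * Ct[1][1] - Ct[0][1] * Ct[1][0]
Ctinv = [[Ct[1][1] / d, -Ct[0][1] / d], [-Ct[1][0] / d, Ct[0][0] / d]]

# K = (last row of Ctinv) * chi_d(A)
row = Ctinv[1]
K = [sum(row[m] * chidA[m][j] for m in range(2)) for j in range(2)]
print(K)  # [14.0, 57.0]
```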
[Simulink diagram: closed-loop state feedback: r and −Kx form u, which drives B → integrator 1/s (xdot → x) → C → y, with A feedback and D feedthrough. Note: all (square) gain blocks K are MATRIX GAIN blocks from the Math Library.]
Some Comments
Reference Input
EXAMPLE:
   A = [ 1 1 ]     B = [ 1 ]     K = [ 14 57 ].
       [ 1 2 ],        [ 0 ],
• Let C = [ 1 0 ]. Then Y(s)/R(s) = C(sI − A + BK)⁻¹B:
   Y(s)/R(s) = [ 1 0 ] [ s + 13    56  ]⁻¹ [ 1 ]
                       [   −1    s − 2 ]   [ 0 ]
             = [ 1 0 ] (1/(s² + 11s − 26 + 56)) [ s − 2    −56   ] [ 1 ]
                                                [   1     s + 13 ] [ 0 ]
             = (s − 2) / (s² + 11s + 30).
• Final value theorem for a step input: y(t) → −2/30 ≠ 1!
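The final-value claim follows from the DC gain, which can be computed directly from the state-space matrices: for a unit step, yss = G(0) = C(BK − A)⁻¹B.

```python
# DC-gain check of the closed-loop system: yss = C (BK - A)^{-1} B = -2/30.

A = [[1.0, 1.0], [1.0, 2.0]]
B = [1.0, 0.0]
C = [1.0, 0.0]
K = [14.0, 57.0]

M = [[B[i] * K[j] - A[i][j] for j in range(2)] for i in range(2)]  # BK - A
d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]
yss = sum(C[i] * sum(Minv[i][j] * B[j] for j in range(2)) for i in range(2))
print(yss)  # -0.0666... = -2/30
```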
[Figure: closed-loop step response over 0 to 10 sec; the output settles to −2/30 ≈ −0.067.]
ECE4520/5520, CONTROLLER/ ESTIMATOR DESIGN 6–10
OBSERVATION : A constant output yss requires constant state x ss and
constant input u ss . We can change the tracking problem to a
regulation problem around u(t) = u ss and x(t) = x ss .
   (u(t) − uss) = −K(x(t) − xss).
   xss = Nx rss,   where Nx is n × 1.
• At steady state,
   ẋ(t) = 0 = Axss + Buss
   y(t) = rss = Cxss + Duss.
• Therefore,
   Y(s)/R(s)|new = Y(s)/R(s)|old × N̄ = (−15s + 30)/(s² + 11s + 30),
which has zero steady-state error to a unit step.
[Block diagrams: reference feedforward: (a) r(t) scaled by Nu into u(t) and by Nx into the state-feedback summing junction, driving the plant {A, B, C, D}; (b) the equivalent form with a single reference gain N̄.]
[Figure: step response of the closed-loop system with reference feedforward; zero steady-state error.]
Pole Placement
• Put other poles so that the time response is much faster than this
dominant behavior.
• Place them so that they are “sufficiently damped.”
• Be very careful about moving poles too far. Takes a lot of control
effort.
• Can also choose closed-loop poles to mimic a system that has
performance that you like. Set closed-loop poles equal to this
prototype system.
• Scaled to give settling time of 1 sec. or bandwidth of ω = 1 rad/sec.
[Figures: step responses of the Bessel prototype systems of orders 1–6, scaled for ωo = 1 rad/sec and for ts = 1 sec.]
Bessel pole locations for ωo = 1 rad/sec:
   1: −1.000
   2: −0.866 ± 0.500j
   3: −0.942; −0.746 ± 0.711j
   4: −0.905 ± 0.271j; −0.657 ± 0.830j
   5: −0.926; −0.852 ± 0.443j; −0.591 ± 0.907j
   6: −0.909 ± 0.186j; −0.800 ± 0.562j; −0.539 ± 0.962j
Bessel pole locations for ts = 1 sec:
   1: −4.620
   2: −4.053 ± 2.340j
   3: −5.009; −3.967 ± 3.785j
   4: −5.528 ± 1.655j; −4.016 ± 5.072j
   5: −6.448; −5.927 ± 3.081j; −4.110 ± 6.314j
   6: −7.121 ± 1.454j; −6.261 ± 4.402j; −4.217 ± 7.530j
[Figure: Bode magnitude plots (dB vs. ω, rad/sec) of the Bessel prototype systems.]
EXAMPLE:
   G(s) = 1/(s(s + 1)(s + 4))   ➠
   A = [ −5 −4 0 ]     B = [ 1 ]     C = [ 0 0 1 ].
       [  1  0 0 ],        [ 0 ],
       [  0  1 0 ]         [ 0 ]
We want ts = 2, and 3rd-order Bessel poles. Dividing the ts = 1 pole locations by 2:
   s1 = −5.009/2 = −2.505
   s2,3 = (−3.967 ± 3.784j)/2 = −1.984 ± 1.892j
   χd(s) = s³ + 6.473s² + 17.456s + 18.827.
   K = [ 1.473  13.456  18.827 ].
[Figure: open- and closed-loop step responses.]
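These gains can be checked by computing the characteristic polynomial of A − BK; the Faddeev–LeVerrier recursion below is a standard way to get the coefficients without factoring (the helper code is mine).

```python
# Verify K places det(sI - A + BK) = s^3 + 6.473 s^2 + 17.456 s + 18.827,
# using the Faddeev-LeVerrier recursion for char-poly coefficients.

A = [[-5.0, -4.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
B = [1.0, 0.0, 0.0]
K = [1.473, 13.456, 18.827]

M = [[A[i][j] - B[i] * K[j] for j in range(3)] for i in range(3)]  # A - BK

def matmul3(X, Y):
    return [[sum(X[i][m] * Y[m][j] for m in range(3)) for j in range(3)]
            for i in range(3)]

def trace3(X):
    return X[0][0] + X[1][1] + X[2][2]

I3 = [[float(i == j) for j in range(3)] for i in range(3)]
c1 = -trace3(M)
M2 = matmul3(M, [[M[i][j] + c1 * I3[i][j] for j in range(3)] for i in range(3)])
c2 = -trace3(M2) / 2.0
M3 = matmul3(M, [[M2[i][j] + c2 * I3[i][j] for j in range(3)] for i in range(3)])
c3 = -trace3(M3) / 3.0

print(round(c1, 3), round(c2, 3), round(c3, 3))  # 6.473 17.456 18.827
```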
[Block diagram: integral control: the error r(t) − y(t) is integrated through KI/s and added to the state feedback −Kx(t) (with reference scaling Nx) to form u(t), driving the plant {A, B, C, D}.]
and THEN design K I and K such that the system had good
closed-loop pole locations.
• Note that we can include the integral state into our normal
state-space form by augmenting the system dynamics
" # " #" # " # " #
ẋ I (t) 0 −C x I (t) 0 I
= + u(t) + r(t)
ẋ(t) 0 A x(t) B 0
y(t) = C x(t) + Du(t).
• Note that the new “A” matrix has an open-loop eigenvalue at the
origin. This corresponds to increasing the system type, and integrates
out steady-state error.
• The control law is,
   u(t) = − [ KI  K ] [ xI(t) ] + K Nx r(t).
                      [ x(t)  ]
• So, we now have the task of choosing n + n I closed-loop poles.
[Figure: step response of the integral-control design, 0 ≤ t ≤ 5 sec.]
Reference Input
Integral Control
• Again, we augment our system with a (discrete-time) integrator.
• We can include the integral state into our normal state-space form by augmenting the system dynamics.
2. Find the nth-order poles from the table of constant settling time,
and divide pole locations by ts .
3. Convert s-plane locations to z-plane locations using z = e sT .
4. Use Acker/place to locate poles. Simulate and check control effort.
Estimator Design
Open-Loop Estimator
[Block diagram: open-loop estimator: the plant {A, B}, C (with disturbance w(t)) produces y(t) from u(t); a model copy {A, B}, C runs in parallel on u(t), producing x̂(t) and ŷ(t).]
• We want x̃(t) = 0.
• For our estimator,
   x̃̇(t) = ẋ(t) − x̂̇(t)
         = Ax(t) + Bu(t) − Ax̂(t) − Bu(t)
         = Ax̃(t).
So,
   x̃(t) = e^{At} x̃(0).
– Speed up convergence.
– Reduce sensitivity to model uncertainties.
– Counteract disturbances.
– Have convergence even when A is unstable.
Closed-Loop Estimator
[Block diagram: as above, but the output error y(t) − ŷ(t) is fed back through a gain L into the estimator's state equation.]
or, x̂(t) → x(t) if A − LC is stable, for any value of x̂(0) and any u(t),
whether or not A is stable.
• In fact, we can look at the dynamics of the state estimate error to
quantitatively evaluate how x̂(t) → x(t).
    x̃˙(t) = (A − LC) x̃(t)
• So, for our estimator, we specify the convergence rate of x̂(t) → x(t)
by choosing desired pole locations: Choose L such that
χob,des (s) = det (s I − A + LC) .
[Simulink diagram: closed-loop estimator built from an integrator block 1/s and matrix gains A, B, C, L, K, with signals u, y, xhat, yhat. Note: all (square) gain blocks are MATRIX GAIN blocks from the Math Library.]
• By duality,
    L = ( [0 ··· 0 1] 𝒞(A, B)^{−1} χ_d(A) )^T, with A ← A^T, B ← C^T
      = χ_d^T(A^T) [ C^T  A^T C^T  ···  (A^T)^{n−1} C^T ]^{−T} [0 ··· 0 1]^T
      = χ_d(A) [ C ; CA ; … ; CA^{n−1} ]^{−1} [0 ; … ; 0 ; 1]
      = χ_d(A) 𝒪(C, A)^{−1} [0 ··· 0 1]^T.
• In Matlab,
L=acker(A’,C’,poles)’;
L=place(A’,C’,poles)’;
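The same duality trick — design a "controller" for (A^T, C^T) and transpose — can be sketched in Python. Using the double-integrator pair with T = 1.4 and the estimator polynomial z² − 1.28z + 0.45 that appear later in the notes, this should reproduce L_p ≈ [0.72; 0.12]:

```python
import numpy as np
from scipy.signal import place_poles

# Pair from the notes' double-integrator example (T = 1.4)
T = 1.4
A = np.array([[1.0, T], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])

# Desired estimator poles: roots of z^2 - 1.28z + 0.45
poles = np.roots([1.0, -1.28, 0.45])

# L = place(A', C', poles)' : controller design on the dual system
L = place_poles(A.T, C.T, poles).gain_matrix.T

err_poles = np.linalg.eigvals(A - L @ C)  # should match the desired poles
```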
[Block diagram: discrete-time prediction estimator — plant (A, B, C) produces x[k], y[k]; the estimator copy (A, B, C) produces x̂_p[k], ŷ[k].]
• So,
    l₁ − 2 = −1.6
    l₂T − l₁ + 1 = 0.68,
or
    L_p = [ 0.4 ; 0.08/T ].
• The estimator is
    x̂_p[k + 1] = A x̂_p[k] + B u[k] + L_p ( y[k] − C x̂_p[k] )
               = ( A − L_p C ) x̂_p[k] + B u[k] + L_p y[k],
or
    [ ŷ_p[k + 1] ]   [  0.6      T ] [ ŷ_p[k] ]   [ T²/2 ]        [  0.4   ]
    [ ẏ̂_p[k + 1] ] = [ −0.08/T   1 ] [ ẏ̂_p[k] ] + [   T  ] u[k] + [ 0.08/T ] y[k].
• In general, we can arbitrarily select the prediction estimator poles iff
{C, A} is observable.
• Now that we have a structure to estimate the state x(t), let’s feed
back x̂(t) to control the plant. That is,
    u(t) = r(t) − K x̂(t),
    x̂˙(t) = A x̂(t) + B u(t) + L ( y(t) − ŷ(t) ),
so that (with r(t) = 0) we have
    ẋ(t) = A x(t) − B K x̂(t)
    x̂˙(t) = ( A − B K ) x̂(t) + L ( C x(t) + D u(t) − C x̂(t) − D u(t) )
          = ( A − B K − L C ) x̂(t) + L C x(t).
The Compensator—Continuous-Time
• What is D(s)? We know the closed-loop poles, which are the roots of
1 + D(s)G(s) = 0, and we know the plant’s open-loop poles. What are
the dynamics of D(s) itself?
• Start with a state-space representation of D(s)
    x̂˙(t) = ( A − B K − L C ) x̂(t) + L y(t) − L D u(t)
u(t) = −K x̂(t)
The Compensator—Discrete-Time
    D(z) = U(z)/Y(z) = −K ( zI − A + B K + L_p C − L_p D K )^{−1} L_p.
• Note that the control signal u[k] only contains information about
y[k − 1], y[k − 2], . . ., not about y[k].
• So, our compensator is not taking advantage of the most current
measurement. (More on this later).
for loop:
    u = -K*xhatp;
    % now, wait until sample time...
    A2D(y);
    D2A(u);    % u depends on past y, not current.
    xhatp = (A-B*K-Lp*C+Lp*D*K)*xhatp + Lp*y;
end loop.

Lecture notes prepared by Dr. Gregory L. Plett. Copyright © 2001, 2000, Gregory L. Plett.
ECE4520/5520, CONTROLLER/ESTIMATOR DESIGN 6–35
EXAMPLE: G(s) = 1/s²; T = 1.4 seconds.
• Design a compensator such that response is dominated by the poles
z p = 0.8 ± 0.25 j.
• System description
    A = [ 1  1.4 ; 0  1 ],  B = [ T²/2 ; T ] = [ 0.98 ; 1.4 ],  C = [ 1  0 ],  D = [0].
• Control design: Find K such that det( zI − A + B K ) = z² − 1.6z + 0.7.
  This leads to K = [ 0.05  0.25 ].
• Estimator design: Choose poles to be faster than control roots. Let’s
radially project z p = 0.8 ± 0.25 j toward the origin, or z pe = 0.8z p .
• Find L_p such that det( zI − A + L_p C ) = z² − 1.28z + 0.45. This leads to
    L_p = [ 0.72 ; 0.12 ].
• So, our compensator is
    x̂_p[k + 1] = [ 0.23  1.16 ; −0.19  0.65 ] x̂_p[k] + [ 0.72 ; 0.12 ] y[k]
    u[k] = [ −0.05  −0.25 ] x̂_p[k],
or,
    D(z) = −0.68 (z − 0.87) / ( z² − 0.88z + 0.374 ),
with compensator poles at z = 0.44 ± 0.423j.
• Let’s see the root locus of D(z)G(z). (Note, since D(z) has a negative
sign, must use 0◦ locus).
[Figure: 0° root locus of D(z)G(z) — Imag Axis vs. Real Axis in the z-plane.]
• Using the state representation of the plant and our compensator, we
can simulate the closed-loop system to find x[k], x̂ p [k], x̃[k], and u[k].
EXAMPLE: In Matlab,
[Figure: simulation results — left: y and ẏ vs. Time (0–30); right: estimate errors x̃ and ẏ̃ vs. Time (0–30).]
Current Estimator/Compensator
[Timeline diagram: from x̂_c[k] and u[k], predict x̂_p[k + 1]; then tune up the estimate from y[k + 1] to get x̂_c[k + 1]; the cycle repeats.]
Implementation:
• “Time update”: Predict new state from old state estimate and system
dynamics
x̂ p [k] = A x̂ c [k − 1] + Bu[k − 1].
• "Measurement update": Correct the prediction using the newest measurement
    x̂_c[k + 1] = x̂_p[k + 1] + L_c ( y[k + 1] − C x̂_p[k + 1] )
               = ( I − L_c C ) x̂_p[k + 1] + L_c y[k + 1]
               = ( I − L_c C ) ( A x̂_c[k] + B u[k] ) + L_c y[k + 1]
               = ( A − L_c C A ) x̂_c[k] + ( B − L_c C B ) u[k] + L_c y[k + 1].
• So,
x̂c [k] = f (x̂ c [k − 1], u[k − 1], y[k]).
• So, in summary
x̃ = x − x̂ p ➠ x̃[k + 1] = (A − L c C A) x̃[k]
x̃ = x − x̂ c ➠ x̃[k + 1] = (A − AL c C) x̃[k].
• These estimate errors have the same poles. They represent the dynamics of the block diagrams:
[Block diagrams: prediction estimator — u[k] → B → z⁻¹ → C → y[k] with A feedback and correction gain L_p applied after the delay, producing x̂_p[k]; current estimator — the same loop with correction gain L_c applied before the delay, producing x̂_c[k].]
Design of L_c
1. Relate coefficients of
    det( zI − A + L_c C A ) = χ_ob,des(z).
2. In Matlab,
    Lc=acker(A',(C*A)',poles)'; or
    Lc=place(A',(C*A)',poles)';
3. Find L_p and then L_c = A⁻¹ L_p.
• Plant equations
x[k + 1] = Ax[k] + Bu[k]
y[k] = C x[k].
• Estimator equations
x̂ p [k + 1] = A x̂ c [k] + Bu[k]
    x̂_c[k] = x̂_p[k] + L_c ( y[k] − C x̂_p[k] ).
• Control
u[k] = −K x̂ c [k].
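The plant/estimator/control equations above can be simulated directly; a hedged Python sketch using the notes' G(s) = 1/s², T = 1.4 example values for K and L_c (the initial condition is an arbitrary choice):

```python
import numpy as np

# Values from the notes' G(s) = 1/s^2, T = 1.4 example
T = 1.4
A = np.array([[1.0, T], [0.0, 1.0]])
B = np.array([[T**2 / 2], [T]])
C = np.array([[1.0, 0.0]])
K = np.array([[0.05, 0.25]])
Lc = np.array([[0.55], [0.12]])

x = np.array([[1.0], [0.0]])   # true plant state (arbitrary start)
xhat_p = np.zeros((2, 1))      # predicted state estimate

for k in range(40):
    y = C @ x                                  # sample the output
    xhat_c = xhat_p + Lc @ (y - C @ xhat_p)    # measurement update
    u = -K @ xhat_c                            # control from current estimate
    x = A @ x + B @ u                          # plant propagates
    xhat_p = A @ xhat_c + B @ u                # time update (prediction)

final_state_norm = float(np.linalg.norm(x))
final_err_norm = float(np.linalg.norm(x - xhat_p))
```

Both the regulation error and the estimate error should decay, since the control poles (0.8 ± 0.245j) and estimator error poles lie inside the unit circle.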
• Therefore, we have
x[k + 1] = Ax[k] − B K x̂ c [k]
x̂c [k + 1] = (I − L c C) A x̂c [k] + (I − L c C) Bu[k] + L c y[k + 1]
= (I − L c C) (A − B K ) x̂c [k] + L c y[k + 1].
• Therefore
u[k] = −K x̂ c [k]
also works!
so
    D(z) = U(z)/Y(z) = −K ( zI − (I − L_c C)(A − B K) )^{−1} L_c z
or
    D(z) = −K ( zI − A + B K + L_c C A − L_c C B K )^{−1} L_c z.
• Therefore
x̂ p [k + 1] = Ā x̂ p [k] + B̄ y[k]
u[k] = C̄ x̂ p [k] + D̄ y[k]
where
    Ā = (A − B K)(I − L_c C),   B̄ = (A − B K) L_c,
    C̄ = −K (I − L_c C),        D̄ = −K L_c.
• Our modified transfer function for D(z) is
D(z) = D̄ + C̄(z I − Ā)−1 B̄
or
D(z) = −K L c − K (I − L c C) (z I − (A − B K )(I − L c C))−1 (A − B K )L c.
[Block diagram: realization of D(z) — input y[k] through gain (A − B K)L_c into delay z⁻¹ with state matrix (A − B K)(I − L_c C), output gain −K(I − L_c C), and feedthrough −K L_c to u[k].]
xhatp=xhatpnew
A2D(y)
xhatc=xhatp+Lc*(y-C*xhatp)
u=-K*xhatc
D2A(u)
xhatpnew=A*xhatc+B*u
xhatp=xhatpnew
upartial=-K*(I-Lc*C)*xhatp
A2D(y)
u=upartial-K*Lc*y
D2A(u)
xhatpnew=A*xhatp+B*u+A*Lc*(y-C*xhatp)
EXAMPLE: G(s) = 1/s²; T = 1.4 seconds.
• System description
    A = [ 1  1.4 ; 0  1 ],  B = [ 0.98 ; 1.4 ],  C = [ 1  0 ].
• Pick closed-loop poles as we did for prediction estimator
χdes (z) = z 2 − 1.6z + 0.7
χob,des (z) = z 2 − 1.28z + 0.45.
• Control design: K = [ 0.05  0.25 ].
• Estimator design
    L_c = A⁻¹ L_p = A⁻¹ [ 0.72 ; 0.12 ] = [ 0.55 ; 0.12 ].
• Our compensator is described by
    x̂_p[k + 1] = [ 0.29  1.16 ; −0.11  0.65 ] x̂_p[k] + [ 0.66 ; 0.04 ] y[k]
    u[k] = [ 0.0067  −0.25 ] x̂_p[k] − 0.06 y[k]
or
    D(z) = −0.06 z (z − 0.85) / ( z² − 0.94z + 0.316 ),
with compensator poles at z = 0.47 ± 0.31j.
• The root locus
[Figure: z-plane root locus — Imag Axis vs. Real Axis, unit-circle region.]
• Compare to prediction estimator.
EXAMPLE: In Matlab,
[Figure: simulation results — left: y and ẏ vs. Time (0–30); right: estimate errors x̃ and ẏ̃ vs. Time (0–30).]
• Why construct the entire state vector when you are directly measuring
a state? If there is little noise in your sensor, you get a great estimate
by just letting
x̂1 = y (C = [1 0 . . . 0]).
So
y = C x = xa .
Let
Br u r [k] = Aba xa [k] + Bbu[k]
or
A ← Abb ; Bu[k] ← Br u r [k]; C ← Aab .
• Reduced-order estimator:
    x̂_b[k + 1] = A_bb x̂_b[k] + B_r u_r[k] + L_r ( z[k] − A_ab x̂_b[k] ).
Design of L r
• Relate coefficients of
det (z I − Abb + L r Aab ) = χr,des (z).
Lr=acker(Abb’,Aab’,poles)’;
Lr=place(Abb’,Aab’,poles)’;
Reduced-Order Compensator
• Control law:
    u[k] = −K_a x_a[k] − K_b x̂_b[k] = −[ K_a  K_b ] x̂[k].
• Estimator:
    x̂_b[k + 1] = A_bb x̂_b[k] + A_ba x_a[k] + B_b u[k]
                 + L_r ( x_a[k + 1] − A_aa x_a[k] − B_a u[k] − A_ab x̂_b[k] )
using
u[k] = −K a C x[k] − K b x̂b [k]
    L_r x_a[k + 1] = L_r C ( A x[k] + B u[k] )
                   = L_r C A x[k] − L_r B_a K_a C x[k] − L_r B_a K_b x̂_b[k].
    [ x[k + 1] ; x̂_b[k + 1] ] =
    [ A − B K_a C                                   −B K_b                  ] [ x[k]    ]
    [ L_r C A + A_ba C − B_b K_a C − L_r A_aa C     A_bb − B_b K_b − L_r A_ab ] [ x̂_b[k] ]
• What is D(z)?
x̂b [k + 1] = (Abb − Bb K b + L r Ba K b − L r Aab ) x̂b [k]
+ (Aba + L r Ba K a − L r Aaa − Bb K a ) y[k]
+L r y[k + 1]
x̂b [k + 1] = Ā x̂b [k] + B̄ y[k] + L r y[k + 1],
and
u[k] = −K b x̂b [k] − K a y[k],
• Reduced-order estimator:
    A as above,  B_a = T²/2 = 0.98;  B_b = T = 1.4.
    D(z) = −(1/20)(1 + 5/T) · ( z − 1/(1 + T/5) ) / ( z + T/8 ).
• In our case, T = 1.4,
    D(z) = −0.229 (z − 0.781) / (z + 0.175).
• Root locus and transient response
[Figure: left — root locus, Imag Axis vs. Real Axis; right — system and estimator state dynamics: y and ŷ, x and x̂_p vs. time (0–35).]
• As was the case for finding the control law, the design of an estimator
(for single-output plants) simply consists of
[Figure annotation: response dominated by control-law dynamics.]
• If v[k] = 0 then
• If w[k] = 0 then
– If A − L p C fast:
∗ Small transient response effects.
∗ Fast correction of model, disturbances.
∗ Low noise rejection.
– If A − L p C slow:
∗ Significant transient response effect.
∗ Slow correction of modeling errors, disturbances.
∗ High noise rejection.
• In general
1. Place poles two to six times faster than controller poles and in
well-damped locations. This will limit the estimator influence on
output response.
2. If the sensor noise is too big, the estimator poles can be placed as
much as two times slower than controller poles.
• A matrix is cyclic if it has one and only one Jordan block associated
with each distinct eigenvalue. Note, this does not imply that all
eigenvalues are distinct. (Although, if all eigenvalues are distinct, the
matrix is cyclic).
• If the n-dimensional p-input pair {A, B} is controllable and if A is
cyclic, then for almost any p × 1 vector v, the single-input pair {A, Bv}
is controllable.
• Design method: Pick (almost any) p × 1 vector v. The resulting single-input system
is {A, Bv, C, D} with input u′(t) and output y(t). Use single-input
design methods such as Bass-Gura or Ackermann to find k to
place the poles of the single-input system. Then, the overall state
feedback is: u(t) = vu′(t) = −vk x(t), or, K = vk.
[Block diagram: r(t) → B → ∫ → x(t) → C → y(t), with feedback gain K₁ forming u(t).]
DESIGN METHOD :
EXAMPLE : Consider
ẋ(t) = x(t) − u(t)
• As k → −∞ (i.e., large) u(t) looks more and more like δ(t), the input
we found earlier which (instantaneously) moves x(t) to 0!
• To see this,
    ∫₀^∞ u(t) dt = (−k) / ( (−k) − 1 ),
so for −k large, ∫ u(t) dt ≈ 1. Clearly u(t) is "bunching up" near t = 0 for
−k large.
• In general, as we relocate our eigenvalues farther and farther to the
left, so that the closed-loop system is faster and faster, our plant input
begins to look like the impulsive inputs we considered earlier.
• Once again, the tradeoff is speed versus gain/ size of input.
Cost Functions
• We will find the K such that u[k] = −K x[k] minimizes this cost.
– We make ρ large if we don’t want large inputs (“high cost of
control”);
– We make ρ small if we want fast response and don’t mind large
inputs (“cheap control”).
EXAMPLE : Consider (where x[k] is a scalar)
x[k + 1] = x[k] + u[k]
with
u[k] = −K x[k].
So,
    K_opt = ( −1 + √(1 + 4ρ) ) / (2ρ).
(The other solution is a maximum, not a minimum).
• The optimal cost is
    J = ρ ( √(1 + 4ρ) − 1 ) / ( 2ρ − √(1 + 4ρ) + 1 ).
• For low cost ("cheap") control, let ρ → 0. Then K_opt → 1 since (by l'Hôpital's rule)
    lim_{ρ→0} ( −1 + √(1 + 4ρ) ) / (2ρ) = lim_{ρ→0} 2(1 + 4ρ)^{−1/2} / 2 = 1,
which is deadbeat control; closed-loop eigenvalues at 0.
• For high cost ("expensive") control, let ρ → ∞; then K_opt → 1/√ρ, which
is a small (as expected) feedback which just barely stabilizes the
system, but the plant input is small. Closed-loop eigenvalue at 1 − 1/√ρ,
which is < 1.
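The closed-form gain and both limiting cases can be cross-checked numerically; a sketch, assuming the standard cost J = Σ (x[k]² + ρu[k]²) and using SciPy's steady-state Riccati solver as the reference:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def K_opt(rho):
    # Closed-form optimal gain for x[k+1] = x[k] + u[k],
    # assuming the cost J = sum_k x[k]^2 + rho*u[k]^2
    return (-1.0 + np.sqrt(1.0 + 4.0 * rho)) / (2.0 * rho)

def K_dare(rho):
    # Same gain from the steady-state Riccati solution
    A = np.array([[1.0]]); B = np.array([[1.0]])
    Q = np.array([[1.0]]); R = np.array([[rho]])
    P = solve_discrete_are(A, B, Q, R)
    return float(np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A))

max_gap = max(abs(K_opt(r) - K_dare(r)) for r in (0.01, 1.0, 100.0))
```

The cheap-control limit K_opt → 1 (deadbeat) and the expensive-control limit K_opt → 1/√ρ can be checked the same way.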
• Then
    J*_18 = min { J_15 + J_58,  J_12 + J_24 + J_46 + J_68,  . . . }.
• If x^T Q x > 0 for all x ≠ 0 then Q is positive definite (we write Q > 0).
• Therefore,
    u*[N − 1] = −( R + B^T P B )^{−1} B^T P A x[N − 1].
• The exciting point is that the optimal u[N − 1], with no constraints on
its functional form, turns out to be a linear state feedback! To ease
notation, define
    K_{N−1} = ( R + B^T P B )^{−1} B^T P A
such that
u ∗[N − 1] = −K N −1 x[N − 1].
so that
JN∗ −1,N = x T [N − 1]PN −1 x[N − 1].
• Now, we take another step backwards and compute the cost J N −2,N
JN −2,N = JN −2,N −1 + JN −1,N .
where
    K_{N−2} = ( R + B^T P_{N−1} B )^{−1} B^T P_{N−1} A.
• In general,
u ∗[k] = −K k x[k]
where
    K_k = ( R + B^T P_{k+1} B )^{−1} B^T P_{k+1} A
and
Pk = (A − B K k )T Pk+1(A − B K k ) + Q + K kT R K k ,
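The backward recursion for K_k and P_k above is straightforward to run in code; a sketch with hypothetical system matrices (a discrete double integrator, not the notes' example):

```python
import numpy as np

def lqr_backward(A, B, Q, R, P_end, N):
    """Iterate K_k = (R + B'P B)^{-1} B'P A and
    P_k = (A - B K_k)' P_{k+1} (A - B K_k) + Q + K_k' R K_k
    backward from P_N = P_end."""
    P = P_end
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        Acl = A - B @ K
        P = Acl.T @ P @ Acl + Q + K.T @ R @ K
        gains.append(K)
    return P, gains[::-1]   # gains[0] is the earliest gain, K_0

# Hypothetical discrete double integrator, T = 1
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P0, Ks = lqr_backward(A, B, Q, R, np.zeros((2, 2)), 60)
```

Far from the end time, the gains stop changing — the steady-state behavior discussed next.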
[Figures: feedback gains k₁ and k₂ vs. time sample k; state/input values vs. time sample k; Riccati solution entries (P₁₂ = P₂₁, P₂₂, …) vs. time sample k.]
and
    K = ( R + B^T P_ss B )^{−1} B^T P_ss A.
EXAMPLE: For the previous example (with a finite end time), the solution
reached for P₁ was
    P₁ = [ 49.5336  28.5208 ; 28.5208  20.8434 ].
In Matlab, dare(A,B,Q,R) for the same system gives
    P_ss = [ 49.5352  28.5215 ; 28.5215  20.8438 ].
So, we see that the system settles very quickly to steady-state
behavior.
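SciPy's DARE solver plays the role of Matlab's dare; a sketch with hypothetical matrices (the notes' A, B, Q, R are not restated here), verifying that the returned P_ss actually satisfies the algebraic Riccati equation:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical system
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q = np.diag([1.0, 0.0])    # Q = C'C with C = [1 0]
R = np.array([[0.1]])

Pss = solve_discrete_are(A, B, Q, R)

# Residual of  P = A'PA - A'PB (R + B'PB)^{-1} B'PA + Q
rhs = (A.T @ Pss @ A
       - A.T @ Pss @ B @ np.linalg.solve(R + B.T @ Pss @ B, B.T @ Pss @ A)
       + Q)
residual = float(np.abs(Pss - rhs).max())
```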
• There are many ways to solve the D.A.R.E., but when Q has the
form C^T C, and the system is SISO, there is a simple method which
may be used:
Chang-Letov Method
[Block diagram: Hamiltonian system — state loop u[k] → B → z⁻¹ → x[k] → C → y[k], coupled to a costate loop through −C^T, A^{−T}, and z⁻¹ producing λ[k], with feedback through B^T, D, and D^T.]
EXAMPLE: Let
    G(z) = (z + 0.25)(z² + z + 0.5) / ( (z − 0.2)(z² − 2z + 2) ).
[Figure: Chang-Letov root locus — Imag Axis vs. Real Axis in the z-plane.]
OBSERVATIONS : For the “expensive cost of control” case, stable poles
remain where they are and unstable poles are mirrored into the unit
disc. (They are not moved to be just barely stable, as we might
expect!)
For the “cheap cost of control” case, poles migrate to the finite zeros
of the transfer function, and to the origin (deadbeat control).
• The minimum cost is the cost to go from x(to) to x(to + δt) plus the
optimal cost to go from x(to + δt) to x(t f ). The latter part includes the
terminal cost.
• So,
    u*(t_o) = −(1/2) R^{−1} B^T ( ∂V(x, t)/∂x )^T |_{x_o, t_o},
• Let x(t) be the state that corresponds to an input u(t) and an initial
condition z. Then,
    x(t) = e^{At} z + ∫₀ᵗ e^{A(t−τ)} B u(τ) dτ.
• Now, denote by x̃(t) the state that corresponds to an input λu(t) and
an initial condition λz. Then,
    x̃(t) = λ ( e^{At} z + ∫₀ᵗ e^{A(t−τ)} B u(τ) dτ ) = λ x(t).
Thus,
    J(λz, λu, t_o) = λ² x^T(t_f) P_{t_f} x(t_f) + λ² ∫_{t_o}^{t_f} [ x^T(t) Q x(t) + u^T(t) R u(t) ] dt
                   = λ² J(z, u, t_o)
and
V (λz, to ) = λ2 V (z, to ).
PROPERTY II : Let u and ũ be two input sequences, and let z and z̃ be two
initial states. We will show that
J (z + z̃, u + ũ, to ) + J (z − z̃, u − ũ, to ) = 2J (z, u, to ) + 2J (z̃, ũ, to )
• Suppose
ẋ(t) = Ax(t) + Bu(t), x o = z;
    x̃˙(t) = A x̃(t) + B ũ(t),   x̃_o = z̃.
• Therefore,
J (z + z̃, u + ũ, to ) + J (z − z̃, u − ũ, to ) = 2J (z, u, to ) + 2J (z̃, ũ, to ).
PROPERTY III : Next, minimize the RHS with respect to u(t) and ũ(t).
Conclude
V (z + z̃, to ) + V (z − z̃, to ) ≥ 2V (z, to ) + 2V (z̃, to ).
• Minimizing
min {J (z + z̃, u + ũ, to ) + J (z − z̃, u − ũ, to )}
u,ũ
Now,
RHS = 2V (z, to ) + 2V (z̃, to )
but
LHS ≤ V (z + z̃, to ) + V (z − z̃, to )
PROPERTY IV: Apply the above inequality with (z + z̃)/2 substituted for z
and (z − z̃)/2 substituted for z̃ to get:
V (z + z̃, to ) + V (z − z̃, to ) = 2V (z, to ) + 2V (z̃, to ).
• Substitute as directed:
    2V( (z + z̃)/2, t_o ) + 2V( (z − z̃)/2, t_o ) ≤ V(z, t_o) + V(z̃, t_o).
• By the scalar multiplication principle,
    (2/4) V(z + z̃, t_o) + (2/4) V(z − z̃, t_o) ≤ V(z, t_o) + V(z̃, t_o).
• Multiply both sides by 2:
    V(z + z̃, t_o) + V(z − z̃, t_o) ≤ 2V(z, t_o) + 2V(z̃, t_o),
• So, the gradient is linear. This means that ∇V (z, to ), which is a vector,
is linear in z and hence has a matrix representation
∇V (z, to ) = M(to )z
where M(t_o) ∈ ℝ^{n×n}.
PROPERTY VIII : Now, integrate away to show the desired result. Note
that V (0, to ) = 0.
    V(z, t_o) = ∫₀¹ ( M(t_o)(θz) )^T z dθ = ∫₀¹ z^T M^T(t_o) z θ dθ
              = z^T M^T(t_o) z [ θ²/2 ]₀¹
              = z^T M^T(t_o) z / 2.
• Therefore, P(t_o) = ( M(t_o) + M^T(t_o) ) / 4. Also, P(t_o) ≥ 0 since
J(z, u, t_o) ≥ 0 for all u, z. Thus
    V(z, t_o) = min_u J(z, u, t_o) = z^T P(t_o) z ≥ 0  ∀ z,
• This expression is valid for all to . Also note that we can write
2x T (to )P(to)Ax(to) = x T (to)P(to)Ax(to) + x T (to )A T P(to)x(to),
so
    0 = x^T(t) [ Ṗ(t) + Q − P(t) B R^{−1} B^T P(t) + P(t) A + A^T P(t) ] x(t)
Steady-State Solution
backward in time.
• The problem we discover is that Matlab’s integration routines
ode45.m will only work on vector differential equations, not matrix
differential equations such as this.
• The Kronecker product ⊗ comes to the rescue once again, along with
the matrix stacking operator. We can write the above matrix
differential equation as a vector differential equation:
    Ṗ_st = ( A^T ⊗ I + I ⊗ A^T ) P_st + Q_st − ( P^T ⊗ P ) ( B R^{−1} B^T )_st.
A “−” sign has been introduced in order for the forward-time ode45.m
(for example) to work on the backward-time equation.
• In Matlab
pdot = (kron(A',eye(size(A))) + kron(eye(size(A)),A'))*st(P) ...
       + st(Q) - kron(P',P)*st(B*inv(R)*B');
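The same computation in Python; here reshape/ravel plays the role of the st(·) stacking operator, and the plant is a hypothetical double integrator. Integrated forward (the sign flip described above), P should converge to the steady-state CARE solution, which for this system is [[√3, 1], [1, √3]]:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_are

# Hypothetical plant: continuous-time double integrator
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
Rinv = np.linalg.inv(R)

def pdot(t, p):
    # Sign-flipped (forward-time) matrix Riccati equation;
    # reshape/ravel play the role of the st(.) stacking operator
    P = p.reshape(2, 2)
    dP = A.T @ P + P @ A + Q - P @ B @ Rinv @ B.T @ P
    return dP.ravel()

sol = solve_ivp(pdot, [0.0, 20.0], np.zeros(4), rtol=1e-10, atol=1e-10)
P_end = sol.y[:, -1].reshape(2, 2)

Pss = solve_continuous_are(A, B, Q, R)
```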
Solve the differential matrix Riccati equation that results in the control
signal that minimizes the cost function
    J = x^T(5) [ 2  0 ; 0  2 ] x(5) + ∫₀⁵ [ y^T(t) y(t) + u^T(t) u(t) ] dt.
• First, note that the open-loop system is unstable, with poles at 0 and
1. It is controllable and observable.
• The cost function is written in terms of y(t) but not x(t). However,
since there is no feedthrough term, we can also write it as
    J = x^T(5) [ 2  0 ; 0  2 ] x(5) + ∫₀⁵ [ x^T(t) C^T C x(t) + u^T(t) u(t) ] dt.
[Simulink diagram: integrates the stacked Riccati equation, with a matrix-gain block for st(B*inv(R)*B'); a Clock logs tvec, and the P entries (e.g., P₁₂ = P₂₁) are recorded. To plot: plot(tvec.signals.values, pvec.signals.values).]
• The final equation gives us p12 = ±1. If we select p12 = −1 then the
first equation will have complex roots (bad). So, p12 = 1.
• Then, p₁₁ = 1 ± √5. If p₁₁ = 1 − √5 then P cannot be positive
definite. Therefore, p₁₁ = 1 + √5 = 3.236.
• Finally, we get p₂₂ = √5 / 2 = 1.118.
• These are the same values as the steady-state solution found by
integrating the differential Riccati equation.
• The static feedback control signal is
    u(t) = −R^{−1} B^T P_ss x(t) = −[ 3.236  1 ] x(t).
• For this feedback, the closed-loop poles are at −√5/2 ± (√3/2) j (stable).
• For a SISO system, we can easily plot the locus of closed-loop poles.
• Tradeoff between control effort and output error is evident.
• Consider the infinite-horizon LQR problem with Q = C^T C, C ∈ ℝ^{1×n},
and R = ρ, ρ ∈ ℝ.
• The cost function is then
    J = ∫₀^∞ [ y²(t) + ρ u²(t) ] dt.
or
    C^T C = P(sI − A) + (−sI − A^T)P + P B ρ^{−1} B^T P.
• Multiply both sides on the left by B T (−s I − A T )−1 and on the right by
(s I − A)−1 Bρ −1.
    (1/ρ) B^T(−sI − A^T)^{−1} C^T C (sI − A)^{−1} B =
        B^T(−sI − A^T)^{−1} ( P B ρ^{−1} ) + ( ρ^{−1} B^T P ) (sI − A)^{−1} B + … ,
where the underbraced factors P B ρ^{−1} and ρ^{−1} B^T P are K^T and K.
    M₂ = [ sI − A   B ; −K   1 ] [ I   p ; K   r ]
       = [ sI − A + B K   (sI − A)p + Br ; 0   −Kp + r ]
    det(M₂) = det( sI − A + B K ) det( −Kp + r )
            = det( sI − A ) ( 1 + K(sI − A)^{−1} B ) det( −Kp + r ),
or
    1 + K(sI − A)^{−1} B = det( sI − A + B K ) / det( sI − A ) = χ_cl(s) / χ_ol(s).
• So, from before, we have
    χ_cl(s) χ_cl(−s) / ( χ_ol(s) χ_ol(−s) ) = 1 + (1/ρ) G^T(−s) G(s).
EXAMPLE: Let
    G(s) = 1 / ( (s − 1.5)(s² + 2s + 2) ).
[Figure: symmetric root locus — Imag Axis vs. Real Axis in the s-plane.]
[Figure: closed-loop responses — two panels, Amplitude vs. Time (sec), 0 to 1.6 sec.]
[Figure: first-order responses, time axes in units of τ — left: response to initial condition e^{−σt}, reaching 1/e at t = τ; right: system step response h(t) = K(1 − e^{−t/τ}), where K = dc gain.]
• If a system has complex-conjugate poles, each may be written as:
    H(s) = ωn² / ( s² + 2ζωn s + ωn² ).
We can extract two more parameters from this equation:
    σ = ζωn   and   ωd = ωn √(1 − ζ²).
[Figure: s-plane pole location, showing radius ωn, imaginary part ωd, real part −σ, and angle θ = sin^{−1}(ζ).]
• σ plays the same role as above—it specifies the decay rate of the response.
• ωd is the oscillation frequency of the output. Note: ωd ≠ ωn unless ζ = 0.
• ζ is the "damping ratio" and it also plays a role in decay rate and overshoot.
• Second-order system responses:
[Figure: second-order impulse responses and step responses, y(t) vs. ωn·t (0 to 12), for ζ = 0, 0.1, …, 0.9, 1.0.]
• Settling time ts = time until permanently within 1% of final value.
• Overshoot Mp = maximum PERCENT overshoot.
[Figure: Mp (percent) vs. ζ, for 0 ≤ ζ ≤ 1.0.]
Linear Functions
Linear Equations
• The nullspace of A ∈ ℝ^{m×n} is defined as
    𝒩(A) = { x ∈ ℝⁿ | Ax = 0 }.
• 𝒩(A) is the set of vectors mapped to zero by y = Ax.
• 𝒩(A) gives ambiguity in x given y = Ax: If y = Ax and z ∈ 𝒩(A) then
    y = A(x + z).
Range-Space
• The rangespace of A ∈ ℝ^{m×n} is defined as
    ℛ(A) = { Ax | x ∈ ℝⁿ } ⊆ ℝ^m.
• ℛ(A) is the set of vectors that can be generated by y = Ax.
• ℛ(A) is the span of the columns of A.
Rank
• We define the rank of A ∈ ℝ^{m×n} as
    rank(A) = dim ℛ(A).
• rank(A) = rank(A^T).
• rank(A) is the maximum number of independent columns (or rows) of A.
Hence, rank(A) ≤ min(m, n).
• rank(A) + dim 𝒩(A) = n.
Interpreting y = Ax
• Write A in terms of its columns, A = [ a₁  a₂  ···  aₙ ], where a_j ∈ ℝ^m.
• Then, y = Ax can be written as
y = x 1 a1 + x 2 a2 + · · · + x n an
• The square matrix A satisfies its own characteristic equation. That is,
if
χ(λ) = det(λI − A) = 0
then
χ(A) = 0.
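A quick numerical check of the Cayley-Hamilton theorem (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # arbitrary 2x2 matrix

# For a 2x2 matrix, chi(lambda) = lambda^2 - tr(A)*lambda + det(A)
tr_A = np.trace(A)
det_A = np.linalg.det(A)

# Cayley-Hamilton: chi(A) = A^2 - tr(A)*A + det(A)*I should be the zero matrix
chi_A = A @ A - tr_A * A + det_A * np.eye(2)
```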
where u(t) is the input, y(t) is the output, x(t) is the “state”, A, B, C,
D are constant matrices.
• So,
    Y(s)/U(s) = C(sI − A)^{−1} B + D,
but
    (sI − A)^{−1} = adj(sI − A) / det(sI − A).
• Characteristic equation for the system is χ(s) = det(s I − A) = 0.
• Poles of system are roots of det(s I − A) = 0, the eigenvalues.
• In transfer function matrix form, G(s) = C(s I − A)−1 B + D, a pole of
any entry in G(s) is a pole of the system.
[Simulink diagram: u → B → integrator 1/s → x → C → y, with matrix-gain feedback A and feedthrough D. Note: all (square) gain blocks are MATRIX GAIN blocks from the Math Library.]
    G(s) = ( b₁s² + b₂s + b₃ ) / ( s³ + a₁s² + a₂s + a₃ )
Controller Canonical Form
           [ y⁽³⁾(t) ]   [ −a₁ −a₂ −a₃ ] [ ÿ(t) ]   [ 1 ]
    ẋ(t) = [ ÿ(t)   ] = [  1    0    0 ] [ ẏ(t) ] + [ 0 ] u(t)
           [ ẏ(t)   ]   [  0    1    0 ] [ y(t) ]   [ 0 ]
    y(t) = [ b₁  b₂  b₃ ] x(t).
[Block diagram: chain of three integrators with states x₁c, x₂c, x₃c, feedback gains −a₁, −a₂, −a₃, and output taps b₁, b₂, b₃ summed into y(t).]
Observer Canonical Form
    ẋ(t) = [ −a₁ 1 0 ; −a₂ 0 1 ; −a₃ 0 0 ] x(t) + [ b₁ ; b₂ ; b₃ ] u(t)
    y(t) = [ 1  0  0 ] x(t).
[Block diagram: integrator chain x₃o → x₂o → x₁o with input taps b₃, b₂, b₁ and output feedback through −a₁, −a₂, −a₃; y(t) = x₁o.]
• First, compute
    [ 1  0  0 ; a₁ 1 0 ; a₂ a₁ 1 ] [ β₁ ; β₂ ; β₃ ] = [ b₁ ; b₂ ; b₃ ].
• Then,
    ẋ(t) = [ 0 0 −a₃ ; 1 0 −a₂ ; 0 1 −a₁ ] x(t) + [ 1 ; 0 ; 0 ] u(t)
    y(t) = [ β₁  β₂  β₃ ] x(t).
[Block diagrams: integrator-chain realizations with states x₁co, x₂co, x₃co, taps β₁, β₂, β₃, and feedback gains −a₁, −a₂, −a₃.]
• Factor
    G(s) = r₁/(s − p₁) + r₂/(s − p₂) + ··· + rₙ/(s − pₙ).
• Thus, Jordan blocks yield repeated poles and terms of the form t p eλt
in e At .
Blocking Zero
Transmission Zero
• Zero at frequency z_i if
    rank [ z_i I − A   B ; −C   D ] < n + min{ p, q }.
STATE-SPACE DYNAMIC SYSTEMS (DISCRETE-TIME)
[Figure: sampling of v(t) — the s-plane strip with |Im(s)| < π/T maps into the z-plane.]
• The subscript “d” is used here to emphasize that, in general, the “A”,
“B”, “C” and “D” matrices are DIFFERENT for discrete-time and
continuous-time systems, even if the underlying plant is the same.
• I will usually drop the “d” and expect you to interpret the system from
its context.
• Then
    Y(z) = [ C(zI − A)^{−1} B + D ] U(z) + C(zI − A)^{−1} z x[0],
where the first term is the transfer function of the system and the second is the response to initial conditions.
• So,
    Y(z)/U(z) = C(zI − A)^{−1} B + D
• Same form as for continuous-time systems.
• Poles of system are roots of det(z I − A) = 0.
Forced Solution
• The full solution is:
    x[k] = A^k x[0] + Σ_{j=0}^{k−1} A^{k−1−j} B u[j],
where the sum is the convolution term.
• Then,
    x[k + 1] = A_d x[k] + B_d u[k]
where A_d = e^{AT} and B_d = ∫₀ᵀ e^{Aσ} B dσ = A^{−1}(A_d − I)B
(the closed form requires A to be invertible).
• Similarly,
y[k] = C x[k] + Du[k].
That is, C d = C; Dd = D.
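The A_d, B_d formulas can be checked numerically. For the double integrator below, A is singular, so the A^{−1}(A_d − I)B shortcut does not apply; the sketch instead evaluates the integral with the standard augmented-matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Continuous-time double integrator (note A is singular)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
T = 0.5

# One matrix exponential of an augmented matrix gives both Ad and Bd:
# expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]]
M = np.zeros((3, 3))
M[:2, :2] = A
M[:2, 2:] = B
Md = expm(M * T)
Ad, Bd = Md[:2, :2], Md[:2, 2:]
```

The known closed-form result for this plant is Ad = [1 T; 0 1], Bd = [T²/2; T].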
• Define
    𝒪(C, A) = [ C ; CA ; … ; CA^{n−1} ].
If 𝒪(C, A) is full rank, then the system is observable.
• Define
    𝒞(A, B) = [ B  AB  ···  A^{n−1}B ].
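Both rank tests are easy to carry out numerically; a sketch with a hypothetical second-order system:

```python
import numpy as np

# Hypothetical system
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# Controllability matrix [B, AB, ..., A^{n-1}B]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# Observability matrix [C; CA; ...; CA^{n-1}]
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

controllable = np.linalg.matrix_rank(ctrb) == n
observable = np.linalg.matrix_rank(obsv) == n
```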
• If a system is observable,
    W_o(t) = ∫₀ᵗ e^{A^T τ} C^T C e^{Aτ} dτ
is nonsingular, where
    ȳ(t) = y(t) − C ∫₀ᵗ e^{A(t−τ)} B u(τ) dτ − D u(t).
• In particular,
    W_dc = Σ_{m=0}^∞ A^m B B^T (A^T)^m
is nonsingular.
• In particular,
    W_do = Σ_{m=0}^∞ (A^T)^m C^T C A^m
• Given a system
ẋ(t) = Ax(t) + Bu(t)
y(t) = C x(t) + Du(t),
the matrix
    T = [ B  AB  ···  A^{n−1}B ] = 𝒞(A, B)
transforms the system into controllability form iff the original system is
controllable.
Minimality
[Simulink diagram: r → B → integrator 1/s → x → C → y, with matrix-gain feedback A and feedthrough D. Note: all (square) gain blocks are MATRIX GAIN blocks from the Math Library.]
Reference Input
    x_ss = N_x r_ss,   where N_x is n × 1.
[Figure: step responses (Amplitude vs. time) for Bessel pole locations — left: ωo = 1 rad/sec (0–16 sec); right: ts = 1 sec (0–3 sec).]
Bessel pole locations for ωo = 1 rad/sec. Bessel pole locations for ts = 1 sec.
1: −1.000 1: −4.620
2: −0.866 ± 0.500 j 2: −4.053 ± 2.340 j
3: −0.746 ± 0.711 j −0.942 3: −3.967 ± 3.785 j −5.009
4: −0.657 ± 0.830 j −0.905 ± 0.271 j 4: −4.016 ± 5.072 j −5.528 ± 1.655 j
5: −0.591 ± 0.907 j −0.852 ± 0.443 j −0.926 5: −4.110 ± 6.314 j −5.927 ± 3.081 j −6.448
6: −0.539 ± 0.962 j −0.800 ± 0.562 j −0.909 ± 0.186 j 6: −4.217 ± 7.530 j −6.261 ± 4.402 j −7.121 ± 1.454 j
2. Find the nth-order poles from the table of constant settling time,
and divide pole locations by ts .
3. Use Acker/place to locate poles. Simulate and check control effort.
[Block diagram: integral control — r(t) enters the integrator K_I/s; with state feedback K and feedforward N_x, the sum forms u(t) driving the plant (A, B, C, D) with output y(t).]
and THEN design K I and K such that the system had good
closed-loop pole locations.
• Note that we can include the integral state into our normal
state-space form by augmenting the system dynamics
    [ ẋ_I(t) ]   [ 0  −C ] [ x_I(t) ]   [ 0 ]          [ I ]
    [ ẋ(t)   ] = [ 0   A ] [ x(t)   ] + [ B ] u(t)  +  [ 0 ] r(t)
    y(t) = C x(t) + D u(t).
• Note that the new “A” matrix has an open-loop eigenvalue at the
origin. This corresponds to increasing the system type, and integrates
out steady-state error.
• The control law is,
    u(t) = − [ K_I  K ] [ x_I(t) ; x(t) ] + K N_x r(t).
• So, we now have the task of choosing n + n I closed-loop poles.
[Block diagram: discrete-time integral control — r[k] drives integrator K_I z/(z − 1); with state feedback K and feedforward N_x, the sum forms u[k] driving the plant (A, B, C, D) with output y[k].]
• We can include the integral state into our normal state-space form by
augmenting the system dynamics
    [ x_I[k + 1] ]   [ 1  −C ] [ x_I[k] ]   [ 0 ]          [ I ]
    [ x[k + 1]   ] = [ 0   A ] [ x[k]   ] + [ B ] u[k]  +  [ 0 ] r[k]
    y[k] = C x[k] + D u[k].
[Block diagram: closed-loop estimator — plant (A, B, C) with disturbance w(t) produces x(t), y(t); the estimator copy (A, B, C) produces x̂(t), ŷ(t), with output-error feedback through L.]
or, x̂(t) → x(t) if A − LC is stable, for any value of x̂(0) and any u(t),
whether or not A is stable.
• In fact, we can look at the dynamics of the state estimate error to
quantitatively evaluate how x̂(t) → x(t).
    x̃˙(t) = (A − LC) x̃(t)
• So, for our estimator, we specify the convergence rate of x̂(t) → x(t)
by choosing desired pole locations: Choose L such that
χob,des (s) = det (s I − A + LC) .
[Simulink diagram: closed-loop estimator built from an integrator 1/s and matrix gains A, B, C, L, K, with signals u, y, xhat, yhat. Note: all (square) gain blocks are MATRIX GAIN blocks from the Math Library.]
[Block diagram: discrete-time prediction estimator — plant (A, B, C) produces x[k], y[k]; the estimator copy (A, B, C) produces x̂_p[k], ŷ[k].]
• Now that we have a structure to estimate the state x(t), let’s feed
back x̂(t) to control the plant. That is,
u(t) = r(t) − K x̂(t),
where K was designed assuming that u(t) = r(t) − K x(t). Is this
going to work? How risky is it to interconnect two well-behaved,
stable systems? (Assume r(t) = 0 for now).
• Our combined closed-loop-system state equations are
    [ ẋ(t) ]   [ A − B K     B K    ] [ x(t) ]
    [ x̃˙(t) ] = [    0      A − LC  ] [ x̃(t) ].
• The 2n closed-loop poles of the combined regulator/estimator system
are the n eigenvalues of A − B K combined with the n eigenvalues of
A − LC.
The Compensator—Continuous-Time
    D(s) = U(s)/Y(s) = −K ( sI − A + B K + L C − L D K )^{−1} L.
The Compensator—Discrete-Time, Prediction-Estimator Based
    D(z) = U(z)/Y(z) = −K ( zI − A + B K + L_p C − L_p D K )^{−1} L_p.
• “Time update”: Predict new state from old state estimate and system
dynamics
x̂ p [k] = A x̂ c [k − 1] + Bu[k − 1].
Reduced-Order Estimator/Compensator
• Why construct the entire state vector when you are directly measuring
a state? If there is little noise in your sensor, you get a great estimate
by just letting
x̂1 = y (C = [1 0 . . . 0]).
• Reduced-order estimator:
    x̂_b[k + 1] = A_bb x̂_b[k] + A_ba x_a[k] + B_b u[k]
                 + L_r ( x_a[k + 1] − A_aa x_a[k] − B_a u[k] − A_ab x̂_b[k] ).
DESIGN METHOD:
[Block diagram: r(t) → B → ∫ → x(t) → C → y(t), with feedback gain K₁ forming u(t).]
where
    K_k = ( R + B^T P_{k+1} B )^{−1} B^T P_{k+1} A
and
    P_k = (A − B K_k)^T P_{k+1} (A − B K_k) + Q + K_k^T R K_k,
and
    K = −( R + B^T P_ss B )^{−1} B^T P_ss A.
• We wish to minimize
Z tf
T
J (xo , u, to ) = x (t f )Pt f x(t f ) + x T (t)Qx(t) + u T (t)Ru(t) dt.
t0
• There are many ways to solve the C.A.R.E., but when Q has the
form C^T C, and the system is SISO, a variant of the Chang-Letov
method may be used: