where $v_i(t)$ is the input voltage and $v_0(t)$ is the output voltage.
Consider the first amplifier. Using nodal analysis,

$$\frac{v_1 - v_i}{R_1} + \frac{v_1}{R_2} + \frac{v_1 - v_0}{R_6} + i_{c_1} + i_{c_2} = 0$$

$$-i_{c_1} + \frac{e_1 - v_2}{R_3} = 0$$

The second amplifier yields

$$\frac{e_2 - v_2}{R_4} + \frac{e_2 - v_0}{R_5} = 0$$

Assuming infinite input impedance, we get $e_1 = e_2 = 0$. Hence,

$$v_2 = -R_3 i_{c_1} \quad\text{and}\quad v_0 = -\frac{R_5}{R_4} v_2$$
Furthermore, the two capacitor currents are described by

$$i_{c_1} = c_1\frac{dv_{c_1}}{dt} = c_1\frac{d}{dt}(v_1 - e_1) = c_1\frac{dv_1}{dt}, \qquad i_{c_2} = c_2\frac{d}{dt}(v_1 - v_2)$$

Hence,

$$v_0 = -\frac{R_5}{R_4}\left(-R_3 c_1\frac{dv_1}{dt}\right) = \frac{R_3 R_5}{R_4}c_1\frac{dv_1}{dt}, \qquad \frac{dv_2}{dt} = -\frac{R_4}{R_5}\frac{dv_0}{dt}$$

and

$$\frac{dv_1}{dt} = \frac{R_4}{R_3 R_5 c_1}v_0 \;\Rightarrow\; v_1 = \frac{R_4}{R_3 R_5 c_1}\int v_0\,d\tau$$
The nodal equation at $v_1$ can then be written as

$$v_1\left(\frac{1}{R_1} + \frac{1}{R_2} + \frac{1}{R_6}\right) - \frac{v_i}{R_1} - \frac{v_0}{R_6} + c_1\frac{dv_1}{dt} + c_2\left(\frac{dv_1}{dt} - \frac{dv_2}{dt}\right) = 0$$

In terms of the input and output voltages,

$$\frac{R_4}{R_3R_5c_1}\int v_0\,d\tau\left[\frac{1}{R_1} + \frac{1}{R_2} + \frac{1}{R_6}\right] - \frac{v_i}{R_1} - \frac{v_0}{R_6} + \frac{R_4}{R_3R_5}v_0 + \frac{R_4c_2}{R_3R_5c_1}v_0 - c_2\left[-\frac{R_4}{R_5}\frac{dv_0}{dt}\right] = 0$$
Differentiating once and multiplying by $R_5/R_4$,

$$\left\{\frac{R_4}{R_3R_5c_1}\left[\frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_6}\right]v_0 - \frac{1}{R_1}\frac{dv_i}{dt} + \left[\left(1+\frac{c_2}{c_1}\right)\frac{R_4}{R_3R_5} - \frac{1}{R_6}\right]\frac{dv_0}{dt} + \frac{R_4c_2}{R_5}\frac{d^2v_0}{dt^2}\right\}\frac{R_5}{R_4} = 0$$

$$\left[\frac{1}{R_3c_1}\left[\frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_6}\right]v_0 - \frac{R_5}{R_1R_4}\frac{dv_i}{dt} + \left[\left(1+\frac{c_2}{c_1}\right)\frac{1}{R_3} - \frac{R_5}{R_4R_6}\right]\frac{dv_0}{dt} + c_2\frac{d^2v_0}{dt^2}\right]\frac{1}{c_2} = 0$$

or

$$\frac{d^2v_0}{dt^2} + \left[\left(\frac{1}{c_1}+\frac{1}{c_2}\right)\frac{1}{R_3} - \frac{R_5}{R_4R_6c_2}\right]\frac{dv_0}{dt} + \frac{1}{R_3c_1c_2}\left[\frac{1}{R_1}+\frac{1}{R_2}+\frac{1}{R_6}\right]v_0 = \frac{R_5}{R_1R_4c_2}\frac{dv_i}{dt}$$
The last differential equation represents the time domain model of the active filter.
In the complex frequency domain, assuming zero initial conditions, the algebraic
relationship between input and output is
$$V_0(s) = \frac{\dfrac{R_5}{R_1R_4c_2}\,s}{s^2 + \left[\left(\dfrac{1}{c_1}+\dfrac{1}{c_2}\right)\dfrac{1}{R_3} - \dfrac{R_5}{R_4R_6c_2}\right]s + \dfrac{1}{R_3c_1c_2}\left[\dfrac{1}{R_1}+\dfrac{1}{R_2}+\dfrac{1}{R_6}\right]}\;V_i(s)$$
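As a quick numerical sanity check, this transfer function can be instantiated with assumed component values (the notes do not specify any) and examined with `scipy.signal`; it is a second-order band-pass filter:

```python
import numpy as np
from scipy import signal

# Assumed, illustrative component values -- the notes give none.
R1 = R2 = R3 = R4 = R5 = R6 = 10e3      # ohms
c1 = c2 = 0.1e-6                         # farads

# V0(s)/Vi(s) from the derivation above: numerator ~ s, so band-pass
num = [R5 / (R1 * R4 * c2), 0.0]
den = [1.0,
       (1 / c1 + 1 / c2) / R3 - R5 / (R4 * R6 * c2),
       (1 / (R3 * c1 * c2)) * (1 / R1 + 1 / R2 + 1 / R6)]
H = signal.TransferFunction(num, den)

wn = np.sqrt(den[2])                     # undamped natural frequency (rad/s)
w, mag, phase = signal.bode(H, w=np.logspace(2, 5, 200))
```

For these (assumed) values the damping coefficient stays positive, so the filter is stable.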
Since both models assume linear behavior of the active amplifier circuits, we could also obtain an input-output model in terms of a convolution relationship in the time domain.
Input-Output Linearity:
A system N is said to be linear if, whenever the input u1 yields the output N[u1] and the input u2 yields the output N[u2], then

$$N[c_1u_1 + c_2u_2] = c_1N[u_1] + c_2N[u_2]$$
Example: (mass-spring-damper with displacement $x$, nonlinear spring force $f_k(x)$, and applied force $f_a(t)$)

$$M\ddot{x} + B\dot{x} + Kx^2 = f_a(t)$$
This time, however, x(t) ≠ x1(t) + x2(t) , i.e., the linearity property does not hold
because the system is now nonlinear.
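The failure of superposition can be demonstrated numerically. The sketch below (with assumed parameter values and assumed constant test forces, none of which come from the notes) integrates the nonlinear equation for two inputs separately and for their sum:

```python
import numpy as np
from scipy.integrate import solve_ivp

M, B, K = 1.0, 0.5, 2.0                 # assumed illustrative parameters

def rhs(t, s, f):
    x, v = s
    # M x'' + B x' + K x^2 = f, rewritten as a first-order system
    return [v, (f - B * v - K * x**2) / M]

t_eval = np.linspace(0.0, 10.0, 201)
sim = lambda f: solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0],
                          t_eval=t_eval, args=(f,), rtol=1e-8).y[0]

x1 = sim(1.0)          # response to f_a = 1.0
x2 = sim(0.5)          # response to f_a = 0.5
x12 = sim(1.5)         # response to the sum of the two inputs

# For a linear system x12 would equal x1 + x2; here it clearly does not.
gap = np.max(np.abs(x12 - (x1 + x2)))
```

The steady states alone show the failure: the response to $f_a = 1.5$ settles near $\sqrt{1.5/2} \approx 0.87$, while the sum of the individual responses settles near $1.21$.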
Let N represent a system and y(•) be the response of such system to the input
stimulus u(•), i.e., y(•) = N[u(•)]. If for any real T, y(• - T) = N[u(• - T)], then the
system is said to be time invariant. In other words, a system is time invariant if
delaying the input by T seconds merely delays the response by T seconds.
Let the system be linear and time invariant with impulse response h(t); then

$$y(t) = \int_{-\infty}^{t} u(\tau)h(t-\tau)\,d\tau = \int_{-\infty}^{t} h(\tau)u(t-\tau)\,d\tau$$

If the same system is also causal, then for $t \ge \tau \ge 0$,

$$y(t) = \int_{0}^{t} u(\tau)h(t-\tau)\,d\tau = \int_{0}^{t} h(\tau)u(t-\tau)\,d\tau$$
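As an illustration (a hypothetical example, not one from the notes), for the first-order system $\dot{y} + ay = u$ the impulse response is $h(t) = e^{-at}$, $t \ge 0$, and the causal convolution can be approximated by a Riemann sum:

```python
import numpy as np

a, dt = 2.0, 1e-3
t = np.arange(0.0, 5.0, dt)
h = np.exp(-a * t)                 # impulse response of ydot + a*y = u
u = np.ones_like(t)                # unit-step input

# y(t) = int_0^t h(tau) u(t - tau) dtau, approximated by a rectangle rule
y = np.convolve(h, u)[:len(t)] * dt

y_exact = (1.0 - np.exp(-a * t)) / a   # analytic step response
err = np.max(np.abs(y - y_exact))
```

The rectangle-rule error shrinks linearly with the step `dt`.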
$$\dot{y}(t) + a\,y(t-1) = b_0u(t) + b_1u(t+1)$$
Example: (series RL circuit driven by $v_i$)

$$-v_i + Ri + L\frac{di}{dt} = 0 \;\Rightarrow\; \frac{di}{dt} = -\frac{R}{L}i + \frac{1}{L}v_i$$

And the solution is given by

$$i(t) = \frac{1}{L}\int_{t_0}^{t} e^{-\frac{R}{L}(t-\tau)}v_i(\tau)\,d\tau + e^{-\frac{R}{L}(t-t_0)}i(t_0), \qquad t \ge t_0$$
Hence, regardless of what vi(t) is for t < t0, all we need to know to predict the
future of the output y(t) is the initial state i(t0) and the input voltage vi(t), t ≥ t0.
State Models
State models are elegant, yet practical, mathematical representations of the behavior of dynamical systems. Moreover,
• A rich theory has already been developed
• Real physical systems can be cast into such a representation
Example: (series RLC circuit) The characteristic equation is

$$\lambda^2 + \frac{R}{L}\lambda + \frac{1}{LC} = 0$$

then, for $t \ge t_0$,

$$i(t) = C_1e^{\lambda_1(t-t_0)} + C_2e^{\lambda_2(t-t_0)}$$

$$i(t_0) = C_1 + C_2, \qquad \frac{di(t)}{dt}\bigg|_{t=t_0} = C_1\lambda_1 + C_2\lambda_2$$

From the knowledge of $i(t_0)$ and $v_c(t_0)$ we can compute $\frac{di(t)}{dt}\big|_{t=t_0}$, and therefore $C_1$ and $C_2$.
Using a state variable approach, let $x_1(t) = v_c(t)$ and $x_2(t) = i(t)$; then, for $t \ge t_0$,

$$\frac{dx_1(t)}{dt} = \frac{dv_c(t)}{dt} = \frac{1}{C}i(t) = \frac{1}{C}x_2(t)$$

$$\frac{dx_2(t)}{dt} = \frac{di(t)}{dt} = -\frac{R}{L}i(t) - \frac{1}{L}\underbrace{\left(\frac{1}{C}\int_{t_0}^{t}i(\tau)\,d\tau + v_C(t_0)\right)}_{v_C(t)} = -\frac{1}{L}x_1(t) - \frac{R}{L}x_2(t)$$

or

$$\frac{d}{dt}\begin{bmatrix}x_1(t)\\x_2(t)\end{bmatrix} = \begin{bmatrix}0 & \frac{1}{C}\\ -\frac{1}{L} & -\frac{R}{L}\end{bmatrix}\begin{bmatrix}x_1(t)\\x_2(t)\end{bmatrix}, \qquad \begin{bmatrix}x_1(t_0)\\x_2(t_0)\end{bmatrix} = \begin{bmatrix}v_c(t_0)\\i(t_0)\end{bmatrix}$$

or $\dot{x}(t) = Ax(t)$, with $x(t_0)$ given.
This is a first-order linear, constant coefficient vector differential equation! In
principle, its solution should be easy to find.
Specifically, for $t \ge t_0$,

$$x(t) = e^{A(t-t_0)}x(t_0)$$

The solution to the vector state equation is more elegant and easier to obtain (provided there is an algorithm to compute $e^{At}$), and it makes the role of the initial conditions (the state) especially clear.
(Block diagram: $u(t)$ enters through $B(t)$ into an integrator whose output is $x(t)$; $A(t)$ feeds $x(t)$ back to the integrator input, $C(t)$ maps $x(t)$ to the output, and $D(t)$ feeds $u(t)$ directly through to $y(t)$.)
Definitions:
The zero-input state response is the response x(•) given x(t0) and u(•) ≡ 0.
The zero-input system response is the response y(•) given x(t0) and u(•) ≡ 0.
The zero-state state response is the response x(•) to an input u(•) whenever
x(t0)=0.
The zero-state system response is the response y(•) to an input u(•) whenever
x(t0)=0.
Let yzi(•) be the zero-input system response and yzs(•) be the zero-state system
response, then the total system response is given by y(•) = yzi(•) + yzs(•).
Example: input force $u(t)$, output position $y(t)$.

Now, $-u(t) + 3v(t) + 0.5\dot{v}(t) = 0$,
or $\dot{v}(t) = -6v(t) + 2u(t)$,
or $\ddot{y}(t) = -6\dot{y}(t) + 2u(t)$, with $y(t_0)$, $\dot{y}(t_0)$ given.

Let $x_1(t) = y(t)$ and $x_2(t) = \dot{y}(t) = v(t)$; then, for $t \ge t_0$, the state model is

$$\dot{x}(t) = \begin{bmatrix}\dot{x}_1(t)\\\dot{x}_2(t)\end{bmatrix} = \begin{bmatrix}0 & 1\\ 0 & -6\end{bmatrix}\begin{bmatrix}x_1(t)\\x_2(t)\end{bmatrix} + \begin{bmatrix}0\\2\end{bmatrix}u(t), \qquad x(t_0) = \begin{bmatrix}x_1(t_0)\\x_2(t_0)\end{bmatrix}$$

$$y(t) = \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}x_1(t)\\x_2(t)\end{bmatrix}$$
Now, the state solution is given by

$$x(t) = e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t}e^{A(t-\tau)}Bu(\tau)\,d\tau$$

For this system,

$$e^{At} = \begin{bmatrix}1 & \frac{1}{6} - \frac{1}{6}e^{-6t}\\ 0 & e^{-6t}\end{bmatrix}$$

1. Zero-input state response: $u(t) = 0$, $t \ge 0$, and $x(0) = [x_{10}\ \ x_{20}]^T \ne 0$.

$$x(t) = e^{At}x(0) = \begin{bmatrix}1 & \frac{1}{6}-\frac{1}{6}e^{-6t}\\ 0 & e^{-6t}\end{bmatrix}\begin{bmatrix}x_{10}\\x_{20}\end{bmatrix} = \begin{bmatrix}x_{10} + \left(\frac{1}{6}-\frac{1}{6}e^{-6t}\right)x_{20}\\ e^{-6t}x_{20}\end{bmatrix}$$
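The closed-form matrix exponential can be cross-checked against `scipy.linalg.expm` (a sketch; the value of t is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [0.0, -6.0]])

def eAt(t):
    # Closed form obtained above for this particular A
    return np.array([[1.0, (1.0 - np.exp(-6.0 * t)) / 6.0],
                     [0.0, np.exp(-6.0 * t)]])

t = 0.7
err = np.max(np.abs(expm(A * t) - eAt(t)))
```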
2. Zero-input system response: $u(t) = 0$, $t \ge 0$ and $x(0) \ne 0$.

$$y(t) = Cx(t) = Ce^{At}x(0) = \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}x_{10} + \left(\frac{1}{6}-\frac{1}{6}e^{-6t}\right)x_{20}\\ e^{-6t}x_{20}\end{bmatrix} = x_{10} + \left(\frac{1}{6}-\frac{1}{6}e^{-6t}\right)x_{20}$$
3. Zero-state state response: $x(0) = 0$, unit-step input $u_s$.

$$x(t) = \int_{0}^{t}e^{A(t-\tau)}Bu_s(\tau)\,d\tau = \int_{0}^{t}e^{A(t-\tau)}B\,d\tau = \int_{0}^{t}\begin{bmatrix}1 & \frac{1}{6}-\frac{1}{6}e^{-6(t-\tau)}\\ 0 & e^{-6(t-\tau)}\end{bmatrix}\begin{bmatrix}0\\2\end{bmatrix}d\tau$$

$$= \int_{0}^{t}\begin{bmatrix}\frac{1}{3}-\frac{1}{3}e^{-6(t-\tau)}\\ 2e^{-6(t-\tau)}\end{bmatrix}d\tau = \begin{bmatrix}\frac{1}{3}t - \frac{1}{18}\left(1-e^{-6t}\right)\\ \frac{1}{3}\left(1-e^{-6t}\right)\end{bmatrix}$$
4. Zero-state system response: $x(0) = 0$.

$$y(t) = Cx(t) = \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}\frac{1}{3}t - \frac{1}{18}\left(1-e^{-6t}\right)\\ \frac{1}{3}\left(1-e^{-6t}\right)\end{bmatrix} = \frac{1}{3}t - \frac{1}{18}\left(1-e^{-6t}\right)$$

The complete state response is given by

$$x(t) = e^{At}x(0) + \int_{0}^{t}e^{A(t-\tau)}Bu(\tau)\,d\tau = \begin{bmatrix}x_{10} + \left(\frac{1}{6}-\frac{1}{6}e^{-6t}\right)x_{20}\\ e^{-6t}x_{20}\end{bmatrix} + \begin{bmatrix}\frac{1}{3}t - \frac{1}{18}\left(1-e^{-6t}\right)\\ \frac{1}{3}\left(1-e^{-6t}\right)\end{bmatrix}$$

and the complete system response by

$$y(t) = Cx(t) = \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}x_1(t)\\x_2(t)\end{bmatrix} = x_1(t) = x_{10} + \left(\frac{1}{6}-\frac{1}{6}e^{-6t}\right)x_{20} + \frac{1}{3}t - \frac{1}{18}\left(1-e^{-6t}\right)$$
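The complete response can be verified against `scipy.signal.lsim` with assumed initial-state values:

```python
import numpy as np
from scipy import signal

A = [[0.0, 1.0], [0.0, -6.0]]
B = [[0.0], [2.0]]
C = [[1.0, 0.0]]
D = [[0.0]]
x10, x20 = 1.0, -2.0                      # assumed initial state

t = np.linspace(0.0, 3.0, 301)
u = np.ones_like(t)                       # unit-step input
_, y, _ = signal.lsim((A, B, C, D), u, t, X0=[x10, x20])

# Closed-form complete system response derived above
y_closed = (x10 + (1 - np.exp(-6 * t)) / 6 * x20
            + t / 3 - (1 - np.exp(-6 * t)) / 18)
err = np.max(np.abs(y - y_closed))
```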
State Models from Ordinary Differential Equations
Let a dynamic system be described by the nth order scalar differential equation
with constant coefficients, i.e.,
$$\frac{d^3y(t)}{dt^3} + a_1\frac{d^2y(t)}{dt^2} + a_2\frac{dy(t)}{dt} + a_3y(t) = b_3u(t) + b_2\frac{du(t)}{dt} + b_1\frac{d^2u(t)}{dt^2}$$

$$\frac{d^3y(t)}{dt^3} = -a_1\frac{d^2y(t)}{dt^2} - a_2\frac{dy(t)}{dt} - a_3y(t) + b_3u(t) + b_2\frac{du(t)}{dt} + b_1\frac{d^2u(t)}{dt^2}$$

$$= \frac{d^2}{dt^2}\big(b_1u(t) - a_1y(t)\big) + \frac{d}{dt}\big(b_2u(t) - a_2y(t)\big) + \big(b_3u(t) - a_3y(t)\big)$$
Integrating this last equation one step at a time, we get

$$\frac{d^2y(t)}{dt^2} = \frac{d}{dt}\big(b_1u(t) - a_1y(t)\big) + \big(b_2u(t) - a_2y(t)\big) + \int^{t}\big(b_3u(\tau) - a_3y(\tau)\big)d\tau$$

$$\frac{dy(t)}{dt} = \big(b_1u(t) - a_1y(t)\big) + \int^{t}\left[\big(b_2u(\alpha) - a_2y(\alpha)\big) + \int^{\alpha}\big(b_3u(\tau) - a_3y(\tau)\big)d\tau\right]d\alpha$$

$$y(t) = \int^{t}\left\{\big(b_1u(\sigma) - a_1y(\sigma)\big) + \int^{\sigma}\left[\big(b_2u(\alpha) - a_2y(\alpha)\big) + \int^{\alpha}\big(b_3u(\tau) - a_3y(\tau)\big)d\tau\right]d\alpha\right\}d\sigma$$

This is the implicit solution of the original differential equation, obtained via nested integration.
Block diagram representation:

$$\dot{x} = \begin{bmatrix}0 & 0 & -a_3\\ 1 & 0 & -a_2\\ 0 & 1 & -a_1\end{bmatrix}x + \begin{bmatrix}b_3\\b_2\\b_1\end{bmatrix}u = A_0x + B_0u, \qquad y = \begin{bmatrix}0 & 0 & 1\end{bmatrix}x = C_0x$$
Taking the Laplace transform with zero initial conditions,

$$\mathcal{L}\left\{\frac{d^3y(t)}{dt^3} + a_1\frac{d^2y(t)}{dt^2} + a_2\frac{dy(t)}{dt} + a_3y(t) = b_3u(t) + b_2\frac{du(t)}{dt} + b_1\frac{d^2u(t)}{dt^2}\right\}$$

or

$$\left(s^3 + a_1s^2 + a_2s + a_3\right)Y(s) = \left(b_3 + b_2s + b_1s^2\right)U(s)$$
In transfer function form,

$$\frac{Y(s)}{U(s)} = \frac{b_3 + b_2s + b_1s^2}{s^3 + a_1s^2 + a_2s + a_3} = \frac{b_1s^{-1} + b_2s^{-2} + b_3s^{-3}}{1 + a_1s^{-1} + a_2s^{-2} + a_3s^{-3}}$$

$$\frac{Y(s)}{U(s)} = \frac{Y(s)}{\hat{Y}(s)}\,\frac{\hat{Y}(s)}{U(s)} = \left(b_1s^{-1} + b_2s^{-2} + b_3s^{-3}\right)\left(\frac{1}{1 + a_1s^{-1} + a_2s^{-2} + a_3s^{-3}}\right)$$

where

$$\frac{\hat{Y}(s)}{U(s)} = \frac{1}{1 + a_1s^{-1} + a_2s^{-2} + a_3s^{-3}} \qquad\text{and}\qquad \frac{Y(s)}{\hat{Y}(s)} = b_1s^{-1} + b_2s^{-2} + b_3s^{-3}$$
Observation:
The overall transfer function is a cascade of two transfer functions.
Each of the transfer functions can be expressed in block diagram form: $\hat{Y}(s)$ is formed at a summing junction from $U(s)$ minus the feedback terms $a_1s^{-1}\hat{Y}(s)$, $a_2s^{-2}\hat{Y}(s)$ and $a_3s^{-3}\hat{Y}(s)$ taken along a chain of three integrators $1/s$, and the output is then formed as $Y(s) = b_1s^{-1}\hat{Y}(s) + b_2s^{-2}\hat{Y}(s) + b_3s^{-3}\hat{Y}(s)$.
Again, choosing the outputs of the integrators as the state variables, we get

$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = x_3, \qquad \dot{x}_3 = -a_3x_1 - a_2x_2 - a_1x_3 + u$$

$$y = b_3x_1 + b_2x_2 + b_1x_3$$

In matrix form,

$$\dot{x} = \begin{bmatrix}0 & 1 & 0\\ 0 & 0 & 1\\ -a_3 & -a_2 & -a_1\end{bmatrix}x + \begin{bmatrix}0\\0\\1\end{bmatrix}u = A_cx + B_cu, \qquad y = \begin{bmatrix}b_3 & b_2 & b_1\end{bmatrix}x = C_cx$$
This form of the state equation is the so-called controllable canonical form.
Observation:
Both canonical forms are duals of each other.
Consider the controllable canonical form of some linear time invariant dynamic
system, i.e.,
$$\dot{x}_c = A_cx_c + B_cu, \qquad y = C_cx_c$$
Then the observable canonical form is given by

$$\left.\begin{aligned}\dot{x}_0 &= A_c^Tx_0 + C_c^Tu\\ y &= B_c^Tx_0\end{aligned}\right\} \;\Rightarrow\; A_0 = A_c^T,\quad B_0 = C_c^T \quad\text{and}\quad C_0 = B_c^T$$
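The duality can be checked numerically for assumed coefficient values: both realizations must produce the same transfer function.

```python
import numpy as np
from scipy import signal

a1, a2, a3 = 6.0, 11.0, 6.0        # assumed denominator coefficients
b1, b2, b3 = 1.0, 2.0, 3.0         # assumed numerator coefficients

# Controllable canonical form
Ac = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [-a3, -a2, -a1]])
Bc = np.array([[0.0], [0.0], [1.0]])
Cc = np.array([[b3, b2, b1]])

# Observable canonical form as the dual
Ao, Bo, Co = Ac.T, Cc.T, Bc.T

Dzero = np.zeros((1, 1))
num_c, den_c = signal.ss2tf(Ac, Bc, Cc, Dzero)
num_o, den_o = signal.ss2tf(Ao, Bo, Co, Dzero)
```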
Consider the system

$$\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t)$$
where x(t) ∈ Rn, u(t), y(t) ∈ R, A,B,C and D are constant matrices of appropriate
dimensions.
Objective: Determine the initial state vector $x(0) = [x_1(0)\ \cdots\ x_n(0)]^T$ from the initial conditions $y(0), \dot{y}(0), \ldots, y^{(n-1)}(0)$ and the input initial values $u(0), \dot{u}(0), \ldots, u^{(n-1)}(0)$. Assuming $D = 0$,

$$y(0) = Cx(0)$$
$$\dot{y}(0) = C\dot{x}(0) = CAx(0) + CBu(0)$$
$$\ddot{y}(0) = C\ddot{x}(0) = CA\dot{x}(0) + CB\dot{u}(0) = CA^2x(0) + CABu(0) + CB\dot{u}(0)$$
$$\vdots$$
$$y^{(n-1)}(0) = CA^{n-1}x(0) + CA^{n-2}Bu(0) + CA^{n-3}B\dot{u}(0) + \cdots + CABu^{(n-3)}(0) + CBu^{(n-2)}(0)$$
In matrix form,

$$Y(0) = \begin{bmatrix}y(0)\\ \dot{y}(0)\\ \ddot{y}(0)\\ \vdots\\ y^{(n-1)}(0)\end{bmatrix} = \begin{bmatrix}C\\ CA\\ CA^2\\ \vdots\\ CA^{n-1}\end{bmatrix}x(0) + \begin{bmatrix}0 & 0 & \cdots & \cdots & 0\\ CB & 0 & \cdots & \cdots & 0\\ CAB & CB & \ddots & \ddots & \vdots\\ \vdots & \vdots & \ddots & \ddots & \vdots\\ CA^{n-2}B & CA^{n-3}B & \cdots & CB & 0\end{bmatrix}\begin{bmatrix}u(0)\\ \dot{u}(0)\\ \ddot{u}(0)\\ \vdots\\ u^{(n-1)}(0)\end{bmatrix}$$

$$= \Theta x(0) + TU(0)$$

where $\Theta \in \mathbb{R}^{n\times n}$, $T \in \mathbb{R}^{n\times n}$.

To get a unique solution $x(0)$ from the last algebraic equation, it is necessary that the matrix $\Theta$ be non-singular, i.e.,

$$x(0) = \Theta^{-1}\left[Y(0) - TU(0)\right]$$
Hence, to uniquely reconstruct the initial state $x(0)$ from input and output measurements, the system must be observable, i.e., $\Theta^{-1}$ must exist. In fact, $\Theta$ is called the observability matrix.
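A small numerical sketch of this reconstruction, with an assumed second-order system ($D = 0$) and an assumed true initial state:

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, -6.0]])
B = np.array([[0.0], [2.0]])
C = np.array([[1.0, 0.0]])
n = 2

# Observability matrix Theta = [C; CA]
Theta = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

x0 = np.array([[1.0], [-2.0]])       # true (assumed) initial state
u0 = 0.5                             # assumed u(0)

# "Measured" initial data: y(0) = C x0, ydot(0) = C A x0 + C B u(0)
Y0 = np.array([[(C @ x0).item()],
               [(C @ A @ x0).item() + (C @ B).item() * u0]])
T = np.array([[0.0], [(C @ B).item()]])
U0 = np.array([[u0]])

x0_hat = np.linalg.inv(Theta) @ (Y0 - T @ U0)   # recovers x0 exactly
```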
Method 2: Suppose now that instead of using input-output measurements to reconstruct the state at time t = 0, we use impulsive inputs to change the value of the state instantaneously.

Let

$$\dot{x}(t) = Ax(t) + Bu(t), \qquad t \ge 0^-$$

with $x(0^-) = x_0$, $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n}$, describe an $n$th order system, and let

$$u(t) = \xi_0\delta(t) + \xi_1\dot{\delta}(t) + \cdots + \xi_{n-1}\delta^{(n-1)}(t)$$

Then

$$x(t) = e^{At}x(0^-) + \int_{0^-}^{t}e^{A(t-\tau)}B\left[\xi_0\delta(\tau) + \xi_1\dot{\delta}(\tau) + \cdots + \xi_{n-1}\delta^{(n-1)}(\tau)\right]d\tau$$
But the $i$th term in the integral can be rewritten, integrating by parts, as

$$\int_{0^-}^{t}e^{A(t-\tau)}B\delta^{(i)}(\tau)\,d\tau = e^{A(t-\tau)}B\delta^{(i-1)}(\tau)\Big|_{0^-}^{t} + e^{At}\int_{0^-}^{t}e^{-A\tau}AB\,\delta^{(i-1)}(\tau)\,d\tau$$

$$= e^{A(t-\tau)}AB\,\delta^{(i-2)}(\tau)\Big|_{0^-}^{t} + e^{At}\int_{0^-}^{t}e^{-A\tau}A^2B\,\delta^{(i-2)}(\tau)\,d\tau$$

$$\vdots$$

$$= e^{A(t-\tau)}A^{i-1}B\,\delta(\tau)\Big|_{0^-}^{t} + e^{At}\int_{0^-}^{t}e^{-A\tau}A^iB\,\delta(\tau)\,d\tau = e^{At}A^iB$$

Therefore, $x(t)$ is given by

$$x(t) = e^{At}x(0^-) + e^{At}B\xi_0 + e^{At}AB\xi_1 + \cdots + e^{At}A^{n-1}B\xi_{n-1} = e^{At}\left\{x(0^-) + \begin{bmatrix}B & AB & \cdots & A^{n-1}B\end{bmatrix}\tilde{\xi}\right\}$$

where $\tilde{\xi} = [\xi_0\ \xi_1\ \cdots\ \xi_{n-1}]^T$.
At time $t = 0^+$, we get

$$x(0^+) = x(0^-) + \begin{bmatrix}B & AB & \cdots & A^{n-1}B\end{bmatrix}\tilde{\xi} = x(0^-) + Q\tilde{\xi}$$

where $Q$ is the so-called controllability matrix. Clearly, an impulsive input that takes the state from $x(0^-)$ to $x(0^+)$ exists if and only if the inverse of $Q$ exists, namely,

$$\tilde{\xi} = Q^{-1}\left[x(0^+) - x(0^-)\right]$$
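The impulse weights $\tilde{\xi}$ for a desired instantaneous state jump can be computed by inverting $Q$; a sketch with an assumed system and an assumed target state:

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, -6.0]])
B = np.array([[0.0], [2.0]])
n = 2

# Controllability matrix Q = [B  AB]
Q = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

x_minus = np.array([[0.0], [0.0]])   # state just before the impulse train
x_plus = np.array([[1.0], [1.0]])    # assumed desired state just after

xi = np.linalg.inv(Q) @ (x_plus - x_minus)   # [xi_0, xi_1]^T
```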
In this case, at time $t_k$,

$$x(t_k) \cong x(t_{k-1}) + (t_k - t_{k-1})\,f\big(t_{k-1}, x(t_{k-1}), u(t_{k-1})\big)$$

For the $k$th time interval,

$$\int_{t_{k-1}}^{t_k}f\big(\tau, x(\tau), u(\tau)\big)\,d\tau \approx (t_k - t_{k-1})f_k$$
In this case the integral is approximately equal to

$$\int_{t_0}^{t_{10}}f\big(\tau, x(\tau), u(\tau)\big)\,d\tau \approx (t_1 - t_0)\frac{f_0 + f_1}{2} + (t_2 - t_1)\frac{f_1 + f_2}{2} + \cdots + (t_{10} - t_9)\frac{f_9 + f_{10}}{2}$$

so that

$$x(t_k) \approx x(t_{k-1}) + \frac{1}{2}(t_k - t_{k-1})\left[f\big(t_{k-1}, x(t_{k-1}), u(t_{k-1})\big) + f\big(t_k, x(t_k), u(t_k)\big)\right]$$
Example: Obtain an approximate solution of the following linearized pendulum state model at equally spaced time instants, $t_k - t_{k-1} = 0.5$:

$$\begin{bmatrix}\dot{x}_1(t)\\\dot{x}_2(t)\end{bmatrix} = \begin{bmatrix}x_2(t)\\ -4x_1(t)\end{bmatrix}$$

with initial conditions

$$\begin{bmatrix}x_1(0)\\x_2(0)\end{bmatrix} = \begin{bmatrix}-\dfrac{\pi}{40}\\[4pt] 0\end{bmatrix}$$
Trapezoidal Method:

$$\begin{bmatrix}x_1(t_k)\\x_2(t_k)\end{bmatrix} \cong \begin{bmatrix}x_1(t_{k-1})\\x_2(t_{k-1})\end{bmatrix} + 0.5(t_k - t_{k-1})\begin{bmatrix}x_2(t_k) + x_2(t_{k-1})\\ -4x_1(t_k) - 4x_1(t_{k-1})\end{bmatrix} = \begin{bmatrix}x_1(t_{k-1}) + 0.25\big(x_2(t_k) + x_2(t_{k-1})\big)\\ x_2(t_{k-1}) - \big(x_1(t_k) + x_1(t_{k-1})\big)\end{bmatrix}$$
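Because the model is linear, the implicit trapezoidal update can be solved exactly at each step as $(I - \frac{h}{2}A)x_k = (I + \frac{h}{2}A)x_{k-1}$. A sketch:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-4.0, 0.0]])   # xdot = A x for the linearized pendulum
h = 0.5
x = np.array([-np.pi / 40, 0.0])          # x1(0) = -pi/40, x2(0) = 0

# Trapezoidal rule: (I - h/2 A) x_k = (I + h/2 A) x_{k-1}
step = np.linalg.solve(np.eye(2) - (h / 2) * A,
                       np.eye(2) + (h / 2) * A)

traj = [x.copy()]
for _ in range(20):                        # march out to t = 10
    x = step @ x
    traj.append(x.copy())
traj = np.array(traj)

# For this undamped oscillator the quadratic quantity E = 4 x1^2 + x2^2 is
# conserved, and the trapezoidal rule happens to preserve it exactly.
E = 4 * traj[:, 0]**2 + traj[:, 1]**2
```

The exact conservation of E is a known property of the trapezoidal (Cayley) update for linear oscillators, which makes it attractive compared with forward Euler, whose "energy" grows each step.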
Linear Discrete-Time Systems
Implementation of dynamic systems is actually done using digital devices like
computers and/or DSPs. Moreover, there are some naturally occurring processes
which are discrete-time. Hence, it is convenient to model such systems as
discrete-time systems.
In most cases, the system is discretized at time t = tk. This is illustrated in the
figure below (a sampled-data system).
Consider a linear, discrete-time system. Since the system is linear, its behavior is described by the convolution relation. Let $t_k = kT$; then

$$y(kT) = \sum_{n=-\infty}^{\infty}h(kT - nT)u(nT)$$

If the input is the unit pulse

$$\delta(kT) = \begin{cases}1, & k = 0\\ 0, & \text{otherwise}\end{cases}$$

then

$$y(kT) = \sum_{n=-\infty}^{\infty}h(kT - nT)\delta(nT) = h(kT)$$

Such a system can also be described by the difference equation

$$y(k+n) + a_1y(k+n-1) + a_2y(k+n-2) + \cdots + a_{n-1}y(k+1) + a_ny(k) = b_{m+1}u(k) + b_mu(k+1) + \cdots + b_1u(k+m)$$
If we now replace differentiations with forward shift operators and integrators with backward shift operators, then we can construct the same type of canonical realizations that we built for continuous-time systems.

Example: Let $n = 3$, $m = 2$ and $y(0) = y(1) = y(2) = u(0) = u(1) = u(2) = 0$; then

$$y(k+3) = b_1u(k+2) - a_1y(k+2) + b_2u(k+1) - a_2y(k+1) + b_3u(k) - a_3y(k)$$

Let us apply the backward shift operator $q^{-1}$ to this equation one step at a time:

$$q^{-1}y(k+3) = y(k+2) = b_1u(k+1) - a_1y(k+1) + b_2u(k) - a_2y(k) + q^{-1}\big[b_3u(k) - a_3y(k)\big]$$

$$q^{-1}y(k+2) = y(k+1) = b_1u(k) - a_1y(k) + q^{-1}\big\{\big[b_2u(k) - a_2y(k)\big] + q^{-1}\big[b_3u(k) - a_3y(k)\big]\big\}$$

$$q^{-1}y(k+1) = y(k) = q^{-1}\big\{\big[b_1u(k) - a_1y(k)\big] + q^{-1}\big\{\big[b_2u(k) - a_2y(k)\big] + q^{-1}\big[b_3u(k) - a_3y(k)\big]\big\}\big\}$$

The solution $y(k)$ can now be computed implicitly using a simulation block diagram.
Simulation diagram implementation:
Using the outputs of the shift operators as the state variables, we get

$$x_1(k+1) = -a_3x_3(k) + b_3u(k)$$
$$x_2(k+1) = x_1(k) - a_2x_3(k) + b_2u(k)$$
$$x_3(k+1) = x_2(k) - a_1x_3(k) + b_1u(k)$$
$$y(k) = x_3(k)$$

In matrix form,

$$x(k+1) = \begin{bmatrix}0 & 0 & -a_3\\ 1 & 0 & -a_2\\ 0 & 1 & -a_1\end{bmatrix}x(k) + \begin{bmatrix}b_3\\b_2\\b_1\end{bmatrix}u(k), \qquad y(k) = \begin{bmatrix}0 & 0 & 1\end{bmatrix}x(k)$$

In general,

$$x(k+1) = Ax(k) + Bu(k), \qquad y(k) = Cx(k) + Du(k)$$

where $x(k) \in \mathbb{R}^n$, $u(k) \in \mathbb{R}^m$, $y(k) \in \mathbb{R}^r$, and $A$, $B$, $C$ and $D$ are constant matrices of appropriate dimensions.
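A short sketch (with assumed coefficient values) iterating the discrete state model; driving it with a unit pulse reproduces the impulse response h(k), whose first nonzero sample is b1 since the system is strictly proper:

```python
import numpy as np

a1, a2, a3 = 0.5, 0.25, 0.1     # assumed coefficients
b1, b2, b3 = 1.0, 0.5, 0.2

A = np.array([[0.0, 0.0, -a3],
              [1.0, 0.0, -a2],
              [0.0, 1.0, -a1]])
B = np.array([[b3], [b2], [b1]])
C = np.array([[0.0, 0.0, 1.0]])

def simulate(u_seq):
    """Iterate x(k+1) = A x(k) + B u(k), y(k) = C x(k), from x(0) = 0."""
    x = np.zeros((3, 1))
    ys = []
    for u in u_seq:
        ys.append((C @ x).item())
        x = A @ x + B * u
    return np.array(ys)

u = np.zeros(10)
u[0] = 1.0                       # unit pulse
h = simulate(u)                  # h(0) = 0, h(1) = b1, ...
```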
Iteratively,

$$y(k) = Cx(k) + Du(k)$$
$$y(k+1) = Cx(k+1) + Du(k+1) = CAx(k) + CBu(k) + Du(k+1)$$
$$y(k+2) = Cx(k+2) + Du(k+2) = CA^2x(k) + CABu(k) + CBu(k+1) + Du(k+2)$$
$$\vdots$$
$$y(k+n-1) = CA^{n-1}x(k) + CA^{n-2}Bu(k) + CA^{n-3}Bu(k+1) + \cdots + CBu(k+n-2) + Du(k+n-1)$$

In matrix form,

$$\mathbf{y}(k) = \begin{bmatrix}y(k)\\ y(k+1)\\ y(k+2)\\ \vdots\\ y(k+n-1)\end{bmatrix} = \begin{bmatrix}C\\ CA\\ CA^2\\ \vdots\\ CA^{n-1}\end{bmatrix}x(k) + \begin{bmatrix}D & 0 & 0 & \cdots & 0\\ CB & D & 0 & \cdots & 0\\ CAB & CB & D & & \vdots\\ \vdots & \vdots & \ddots & \ddots & 0\\ CA^{n-2}B & CA^{n-3}B & \cdots & CB & D\end{bmatrix}\begin{bmatrix}u(k)\\ u(k+1)\\ u(k+2)\\ \vdots\\ u(k+n-1)\end{bmatrix} = \varphi x(k) + T\mathbf{u}(k)$$
If $\varphi$ has full rank, then $x(k) = \varphi^{\#}[\mathbf{y}(k) - T\mathbf{u}(k)]$, i.e., if the system is observable then we can reconstruct the state at time $k$ using input and output measurements up to time $k+n-1$, where $\varphi^{\#}$ is the pseudoinverse of $\varphi$.
Iteratively,

$$x(1) = Ax(0) + Bu(0)$$
$$x(2) = Ax(1) + Bu(1) = A^2x(0) + ABu(0) + Bu(1)$$
$$x(3) = Ax(2) + Bu(2) = A^3x(0) + A^2Bu(0) + ABu(1) + Bu(2)$$
$$x(4) = Ax(3) + Bu(3) = A^4x(0) + A^3Bu(0) + A^2Bu(1) + ABu(2) + Bu(3)$$
$$\vdots$$
$$x(k) = A^kx(0) + \sum_{l=0}^{k-1}A^{k-l-1}Bu(l)$$

and

$$y(k) = CA^kx(0) + C\left[\sum_{l=0}^{k-1}A^{k-l-1}Bu(l)\right] + Du(k)$$
From the last equation,

$$x(k+n) - A^nx(k) = QU(k)$$

where $Q$ is the controllability matrix

$$Q = \begin{bmatrix}B & AB & \cdots & A^{n-1}B\end{bmatrix}$$

and

$$U(k) = \begin{bmatrix}u(k+n-1)\\ u(k+n-2)\\ \vdots\\ u(k)\end{bmatrix}$$

To assure the existence of an input such that the state of the system can reach a desired state at time $k+n$ given the value of the state at time $k$, the matrix $Q$ must be invertible, i.e., the system must be controllable.
Linearization
Consider a nonlinear function $y = g(x)$, where $g(\cdot): \Re \to \Re$.

Let the nominal operating point be $x_0$ and let $N$ be a neighborhood of it, i.e., $N = \{x \in \Re : a < x < b\}$ with $a < x_0 < b$. If the function $g(\cdot)$ is analytic on $N$, i.e., it is infinitely differentiable on $N$, then for $\Delta x \in \Re$ such that $x_0 + \Delta x \in N$, we get the following Taylor series expansion:

$$g(x_0 + \Delta x) = g(x_0) + \frac{dg}{dx}\bigg|_{x=x_0}\Delta x + \frac{1}{2!}\frac{d^2g}{dx^2}\bigg|_{x=x_0}\Delta x^2 + \cdots$$

For small $\Delta x$,

$$y = g(x_0 + \Delta x) \cong g(x_0) + \frac{dg}{dx}\bigg|_{x=x_0}\Delta x = y_0 + \frac{dg}{dx}\bigg|_{x=x_0}\Delta x$$
Therefore, the linear approximation of a nonlinear system $y = g(x)$ near the operating point $x_0$ has the form

$$y - y_0 = \frac{dg}{dx}\bigg|_{x=x_0}\Delta x$$

or $\Delta y = y - y_0 \cong m\,\Delta x$, where

$$m = \frac{dg}{dx}\bigg|_{x=x_0}$$
(Figure: a nonlinear $v$-$i$ characteristic, $v$ in Volts (0 to 0.9) versus $i$ in Amps (0 to 0.1), illustrating linearization about an operating point.)
If, on the other hand, $y = g(x_1, x_2, \ldots, x_n)$, then if $g(\cdot)$ is analytic on the set $N = \{x \in \Re^n : a < \|x\| < b\}$, with $x_0 \in N$, $x_0 + \Delta x \in N$ and $x_0 = [x_{10}, \ldots, x_{n0}]^T$,

$$y = g(x_{10}, \ldots, x_{n0}) + \frac{\partial g(x)}{\partial x_1}\bigg|_{x=x_0}\Delta x_1 + \cdots + \frac{\partial g(x)}{\partial x_n}\bigg|_{x=x_0}\Delta x_n + \text{higher order terms}$$

$$= g(x_0) + \nabla g(x)\big|_{x=x_0}\cdot\Delta x + \text{higher order terms}$$

where

$$\nabla g(x) \equiv \begin{bmatrix}\dfrac{\partial g(x)}{\partial x_1} & \dfrac{\partial g(x)}{\partial x_2} & \cdots & \dfrac{\partial g(x)}{\partial x_n}\end{bmatrix}$$
In other words, the nominal trajectory satisfies $\dot{x}_n(t) = f(x_n(t), u_n(t), t)$, $x_n(t_0) = x_0$, for $t \ge t_0$.

Assume that we know $x_n(t)$. Perturb the state and the input by taking $x(t_0) = x_0 + \Delta x_0$ and $u(t) = u_n(t) + \Delta u(t)$. We want to find the solution to the perturbed equation, using

$$f\big(x_n(t) + \Delta x(t), u_n(t) + \Delta u(t), t\big) = f\big(x_n(t), u_n(t), t\big) + \nabla f(x, u, t)\Big|_{\substack{x=x_n\\u=u_n}}\cdot\begin{bmatrix}\Delta x(t)\\ \Delta u(t)\end{bmatrix} + \text{higher order terms}$$

where

$$\nabla f(x, u, t) = \begin{bmatrix}\dfrac{\partial f}{\partial x} & \dfrac{\partial f}{\partial u}\end{bmatrix}$$

Let

$$a(t) \equiv \frac{\partial f(x(t), u(t), t)}{\partial x}\bigg|_{\substack{x=x_n\\u=u_n}} \qquad\text{and}\qquad b(t) \equiv \frac{\partial f(x(t), u(t), t)}{\partial u}\bigg|_{\substack{x=x_n\\u=u_n}}$$

Then

$$\frac{dx(t)}{dt} = \frac{dx_n(t)}{dt} + \frac{d\Delta x(t)}{dt} \cong f\big(x_n(t), u_n(t), t\big) + a(t)\Delta x(t) + b(t)\Delta u(t)$$

Therefore,

$$\frac{d\Delta x(t)}{dt} \cong a(t)\Delta x(t) + b(t)\Delta u(t), \qquad \Delta x(t_0) = \Delta x_0$$
Example: Let $\dot{x}(t) = -x^2(t) + u(t)$, $x(t_0) = x_0$. Clearly, $f(x(t), u(t), t) = -x^2(t) + u(t)$. Now,

$$a(t) \equiv \frac{\partial f(x(t), u(t), t)}{\partial x}\bigg|_{\substack{x=x_n\\u=u_n}} = -2x_n(t) \qquad\text{and}\qquad b(t) \equiv \frac{\partial f(x(t), u(t), t)}{\partial u}\bigg|_{\substack{x=x_n\\u=u_n}} = 1$$

So,

$$\frac{d\Delta x(t)}{dt} \cong -2x_n(t)\Delta x(t) + \Delta u(t), \qquad \Delta x(t_0) = \Delta x_0$$

Let $u_n(t) = 0$, $t \ge 1$ and $x_n(t_0) = x_n(1) = 1$; then

$$\frac{dx_n(t)}{dt} = -x_n^2(t), \qquad x_n(1) = 1$$

$$-\frac{dx_n(t)}{x_n^2(t)} = -dt \;\Rightarrow\; \frac{1}{x_n(t)} + c_1 = t + c_2 \;\Rightarrow\; c_1 = c_2 \;\text{ and }\; x_n(t) = \frac{1}{t}, \quad t \ge 1$$

Finally,

$$\frac{d\Delta x(t)}{dt} \cong -\frac{2}{t}\Delta x(t) + \Delta u(t), \qquad \Delta x(1) = \Delta x_0, \quad t \ge 1$$
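The linearization can be checked numerically: integrate the nonlinear equation from a slightly perturbed initial state and compare with $x_n(t) + \Delta x(t)$ from the linear perturbation model (the perturbation size below is an assumption):

```python
import numpy as np
from scipy.integrate import solve_ivp

dx0 = 0.05                           # assumed small initial perturbation
t_eval = np.linspace(1.0, 5.0, 101)

# Nonlinear system xdot = -x^2 (input held at its nominal value u = 0)
nonlin = solve_ivp(lambda t, x: -x**2, (1.0, 5.0), [1.0 + dx0],
                   t_eval=t_eval, rtol=1e-9, atol=1e-12)

# Linearized perturbation model d(dx)/dt = -(2/t) dx
lin = solve_ivp(lambda t, dx: -(2.0 / t) * dx, (1.0, 5.0), [dx0],
                t_eval=t_eval, rtol=1e-9, atol=1e-12)

x_nominal = 1.0 / t_eval             # x_n(t) = 1/t
err = np.max(np.abs(nonlin.y[0] - (x_nominal + lin.y[0])))
```

The residual is of order $\Delta x_0^2$, as expected from the neglected higher-order terms.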
II. Vector Case. Consider now the case of a system described by the nonlinear vector differential equation $\dot{x}(t) = f(x(t), u(t), t)$. As before,

$$f_i\big(x_n(t) + \Delta x(t), u_n(t) + \Delta u(t), t\big) = f_i\big(x_n(t), u_n(t), t\big) + \nabla f_i(x, u, t)\Big|_{\substack{x=x_n\\u=u_n}}\cdot\begin{bmatrix}\Delta x(t)\\ \Delta u(t)\end{bmatrix} + \text{higher order terms}$$

where

$$\nabla_x f_i(x, u, t) = \begin{bmatrix}\dfrac{\partial f_i}{\partial x_1} & \dfrac{\partial f_i}{\partial x_2} & \cdots & \dfrac{\partial f_i}{\partial x_n}\end{bmatrix}$$

so that

$$\frac{d}{dt}\begin{bmatrix}\Delta x_1\\ \Delta x_2\\ \vdots\\ \Delta x_n\end{bmatrix} = \begin{bmatrix}\nabla_x f_1(x, u, t)\big|_{\substack{x=x_n\\u=u_n}}\\ \nabla_x f_2(x, u, t)\big|_{\substack{x=x_n\\u=u_n}}\\ \vdots\\ \nabla_x f_n(x, u, t)\big|_{\substack{x=x_n\\u=u_n}}\end{bmatrix}\begin{bmatrix}\Delta x_1\\ \Delta x_2\\ \vdots\\ \Delta x_n\end{bmatrix} + \begin{bmatrix}\nabla_u f_1(x, u, t)\big|_{\substack{x=x_n\\u=u_n}}\\ \nabla_u f_2(x, u, t)\big|_{\substack{x=x_n\\u=u_n}}\\ \vdots\\ \nabla_u f_n(x, u, t)\big|_{\substack{x=x_n\\u=u_n}}\end{bmatrix}\begin{bmatrix}\Delta u_1\\ \Delta u_2\\ \vdots\\ \Delta u_m\end{bmatrix}$$
Finally, if the outputs of the nonlinear system are of the form

$$y(t) = g\big(x(t), u(t), t\big), \qquad y(t) \in \Re^p$$

then

$$\Delta y(t) = \begin{bmatrix}\Delta y_1\\ \Delta y_2\\ \vdots\\ \Delta y_p\end{bmatrix} = \begin{bmatrix}\nabla_x g_1(x, u, t)\big|_{\substack{x=x_n\\u=u_n}}\\ \nabla_x g_2(x, u, t)\big|_{\substack{x=x_n\\u=u_n}}\\ \vdots\\ \nabla_x g_p(x, u, t)\big|_{\substack{x=x_n\\u=u_n}}\end{bmatrix}\begin{bmatrix}\Delta x_1\\ \Delta x_2\\ \vdots\\ \Delta x_n\end{bmatrix} + \begin{bmatrix}\nabla_u g_1(x, u, t)\big|_{\substack{x=x_n\\u=u_n}}\\ \nabla_u g_2(x, u, t)\big|_{\substack{x=x_n\\u=u_n}}\\ \vdots\\ \nabla_u g_p(x, u, t)\big|_{\substack{x=x_n\\u=u_n}}\end{bmatrix}\begin{bmatrix}\Delta u_1\\ \Delta u_2\\ \vdots\\ \Delta u_m\end{bmatrix}$$
Example: Suppose we have a point mass in an inverse square law force field, e.g., a gravity field. (Figure: a mass $m$ in orbit at radius $r(t)$ and angle $\theta(t)$, with radial thrust $u_1(t)$ and tangential thrust $u_2(t)$.)

The equations of motion are

$$\frac{d^2r(t)}{dt^2} = r(t)\left[\frac{d\theta(t)}{dt}\right]^2 - \frac{K}{r^2(t)} + u_1(t)$$

$$\frac{d^2\theta(t)}{dt^2} = -\frac{2}{r(t)}\left[\frac{d\theta(t)}{dt}\right]\left[\frac{dr(t)}{dt}\right] + \frac{1}{r(t)}u_2(t)$$

Select the states as follows:

$$x_1(t) = r(t), \quad x_2(t) = \frac{dr(t)}{dt}, \quad x_3(t) = \theta(t), \quad x_4(t) = \frac{d\theta(t)}{dt}$$

Then,

$$\dot{x}_1(t) = x_2(t), \qquad \dot{x}_2(t) = x_1(t)x_4^2(t) - \frac{K}{x_1^2(t)} + u_1(t)$$
$$\dot{x}_3(t) = x_4(t), \qquad \dot{x}_4(t) = -2\frac{x_2(t)x_4(t)}{x_1(t)} + \frac{1}{x_1(t)}u_2(t)$$

which implies that

$$f\big(x(t), u(t), t\big) = \begin{bmatrix}x_2\\[2pt] x_1x_4^2 - \dfrac{K}{x_1^2} + u_1\\[2pt] x_4\\[2pt] -2\dfrac{x_2x_4}{x_1} + \dfrac{u_2}{x_1}\end{bmatrix}$$

Take the nominal (circular orbit) trajectory

$$x_n(t) = \begin{bmatrix}R & 0 & \omega_0 t & \omega_0\end{bmatrix}^T, \qquad t \ge 0$$
where $r_n(t) = R$ and $\dfrac{d\theta_n(t)}{dt} = \omega_0 = \sqrt{\dfrac{K}{R^3}}$.
Linearizing about $x_n(t)$ and $u_n(t)$ yields

$$\nabla_x f_1(x, u, t)\Big|_{\substack{x=x_n\\u=u_n}} = \begin{bmatrix}0 & 1 & 0 & 0\end{bmatrix}$$

$$\nabla_x f_2(x, u, t)\Big|_{\substack{x=x_n\\u=u_n}} = \begin{bmatrix}x_4^2 + \dfrac{2K}{x_1^3} & 0 & 0 & 2x_1x_4\end{bmatrix}\bigg|_{\substack{x=x_n\\u=u_n}} = \begin{bmatrix}3\omega_0^2 & 0 & 0 & 2R\omega_0\end{bmatrix}$$

$$\nabla_x f_3(x, u, t)\Big|_{\substack{x=x_n\\u=u_n}} = \begin{bmatrix}0 & 0 & 0 & 1\end{bmatrix}$$

$$\nabla_x f_4(x, u, t)\Big|_{\substack{x=x_n\\u=u_n}} = \begin{bmatrix}\dfrac{2x_2x_4 - u_2}{x_1^2} & -\dfrac{2x_4}{x_1} & 0 & -\dfrac{2x_2}{x_1}\end{bmatrix}\bigg|_{\substack{x=x_n\\u=u_n}} = \begin{bmatrix}0 & -\dfrac{2\omega_0}{R} & 0 & 0\end{bmatrix}$$
Likewise,

$$\nabla_u f_1(x, u, t)\Big|_{\substack{x=x_n\\u=u_n}} = \begin{bmatrix}0 & 0\end{bmatrix}, \qquad \nabla_u f_2(x, u, t)\Big|_{\substack{x=x_n\\u=u_n}} = \begin{bmatrix}1 & 0\end{bmatrix}$$

$$\nabla_u f_3(x, u, t)\Big|_{\substack{x=x_n\\u=u_n}} = \begin{bmatrix}0 & 0\end{bmatrix}, \qquad \nabla_u f_4(x, u, t)\Big|_{\substack{x=x_n\\u=u_n}} = \begin{bmatrix}0 & \dfrac{1}{x_1}\end{bmatrix}\bigg|_{\substack{x=x_n\\u=u_n}} = \begin{bmatrix}0 & \dfrac{1}{R}\end{bmatrix}$$

In state form,

$$\Delta\dot{x} = \begin{bmatrix}0 & 1 & 0 & 0\\ 3\omega_0^2 & 0 & 0 & 2R\omega_0\\ 0 & 0 & 0 & 1\\ 0 & -\dfrac{2\omega_0}{R} & 0 & 0\end{bmatrix}\Delta x + \begin{bmatrix}0 & 0\\ 1 & 0\\ 0 & 0\\ 0 & \dfrac{1}{R}\end{bmatrix}\Delta u$$
Existence of solutions of differential equations
Consider the following unforced, possibly nonlinear, dynamic system described by

$$\dot{x}(t) = f\big(t, x(t)\big) \qquad (\otimes)$$

where $x(t_0) = x_0$, $x(t) \in \mathbb{R}^n$ and $f(\cdot,\cdot): \mathbb{R}\times\mathbb{R}^n \to \mathbb{R}^n$.

Then the state trajectory $\phi(\cdot\,; t_0, x_0)$ is a solution to $(\otimes)$ over the time interval $[a, b]$ if and only if $\phi(t_0; t_0, x_0) = x_0$ and $\dot{\phi}(t; t_0, x_0) = f\big(t, \phi(t; t_0, x_0)\big)$ for all $t \in [a, b]$.

Def. Let $D \subset \mathbb{R}\times\mathbb{R}^n$ be a connected, closed, bounded set. Then the function $f(t, x)$ satisfies a local Lipschitz condition at $t_0$ on $D$ with respect to $(t_0, x) \in D$ if there exists a finite constant $k$ such that

$$\|f(t_0, x_1) - f(t_0, x_2)\|_2 \le k\|x_1 - x_2\|_2 \qquad \forall\, (t_0, x_1), (t_0, x_2) \in D$$

where $k$ is the Lipschitz constant.
Global Existence and Uniqueness:
Assumptions:
1. S ⊂ R+ ≡ [0, ∞) contains at most a finite number of points per unit interval.
2. For each x ∈ Rn, f(t, x) is continuous at t ∉ S.
3. For each ti ∈ S, f(t, x) has finite left and right hand limits at t = ti.
4. f(·,·): R+ x Rn → Rn satisfies the global Lipschitz condition, i.e., there exists a piecewise continuous function k(·): R+ → R+ such that

$$\|f(t, x_1) - f(t, x_2)\|_2 \le k(t)\|x_1 - x_2\|_2$$

Theorem: Suppose that assumptions (1)-(4) hold. Then for each $t_0 \in \mathbb{R}^+$ and $x_0 \in \mathbb{R}^n$ there exists a unique continuous function $\phi(\cdot\,; t_0, x_0): \mathbb{R}^+ \to \mathbb{R}^n$ such that (a) $\dot{\phi}(t; t_0, x_0) = f\big(t, \phi(t; t_0, x_0)\big)$ and (b) $\phi(t_0; t_0, x_0) = x_0$, $\forall\, t \in \mathbb{R}^+$ and $t \notin S$.

By uniqueness we mean that if $\phi_1$ and $\phi_2$ satisfy conditions (a) and (b), then $\phi_1(t; t_0, x_0) = \phi_2(t; t_0, x_0)$ $\forall\, t \in \mathbb{R}^+$.
Consider now the unforced, linear, time-varying system

$$\dot{x}(t) = A(t)x(t)$$

where $A(t) \in \mathbb{R}^{n\times n}$ and its components $a_{ij}(t)$ are piecewise continuous.

Theorem: If $A(\cdot)$ is piecewise continuous, then for each initial condition $x(0)$, a solution $\phi(\cdot\,; 0, x_0)$ to the equation exists and is unique. Indeed,

$$\|A(t)x_1 - A(t)x_2\|_2 = \|A(t)(x_1 - x_2)\|_2 \le \|A(t)\|_{\infty, D_j}\|x_1 - x_2\|_2$$

for all $D_j$, $j = 1, 2, \ldots$. Let $k(t) = \|A(t)\|_{\infty, D_j}$, $\forall\, t \in \mathbb{R}^+$; then $\|A(t)x_1 - A(t)x_2\|_2 \le k(t)\|x_1 - x_2\|_2$, where $k(t)$ is a piecewise continuous function for $t \in \mathbb{R}^+$. Therefore, for each $x(0) \in \mathbb{R}^n$, a unique solution $\phi(\cdot\,; 0, x_0)$ exists.
Example:

$$\dot{x}(t) = \begin{bmatrix}-2t & 1\\ -1 & -t\end{bmatrix}x(t)$$

First of all, all the entries of $A(t)$ are continuous functions of time. Moreover,

$$\|A(t)\|_\infty = 1 + 2t = k(t) \;\Rightarrow\; \|A(t)x_1 - A(t)x_2\|_2 \le (1 + 2t)\|x_1 - x_2\|_2$$

which implies that there exists a unique solution, since $k(t) = 1 + 2t$ is continuous $\forall\, t \in \mathbb{R}^+$.
Consider now the linear time-varying unforced dynamic system $(\otimes)$: $\dot{x}(t) = A(t)x(t)$.
Theorem: Let $A(t) \in \mathbb{R}^{n\times n}$ be piecewise continuous. Then the set of all solutions of $(\otimes)$ forms an $n$-dimensional vector space.

Proof: Let $\{\eta_1, \eta_2, \ldots, \eta_n\}$ be a set of linearly independent vectors in $\mathbb{R}^n$, i.e., $\alpha_1\eta_1 + \alpha_2\eta_2 + \cdots + \alpha_n\eta_n = \tilde{0}$ if and only if $\alpha_i = 0$, $i = 1, 2, \ldots, n$; and let $\phi_i(\cdot)$ be the solutions of $(\otimes)$ with initial conditions $\phi_i(t_0) = \eta_i$, $i = 1, 2, \ldots, n$.

But $\sum_{i=1}^{n}\alpha_i\phi_i(\cdot)$ is a solution of $(\otimes)$ with initial condition $\sum_{i=1}^{n}\alpha_i\phi_i(t_0) = \eta$. This is because

$$\frac{d}{dt}\left(\sum_{i=1}^{n}\alpha_i\phi_i(t)\right) = \sum_{i=1}^{n}\alpha_i\dot{\phi}_i(t) = \sum_{i=1}^{n}\alpha_iA(t)\phi_i(t) = A(t)\sum_{i=1}^{n}\alpha_i\phi_i(t)$$
Example:

$$\dot{x}(t) = \begin{bmatrix}0 & 0\\ t & 0\end{bmatrix}x(t)$$

Let the vectors $\eta_1$ and $\eta_2$ be described by $\eta_1 = \begin{bmatrix}1\\0\end{bmatrix}$ and $\eta_2 = \begin{bmatrix}0\\1\end{bmatrix}$.

Then $\phi_1(t) = \begin{bmatrix}1\\ \frac{1}{2}t^2\end{bmatrix}$ and $\phi_2(t) = \begin{bmatrix}0\\1\end{bmatrix}$ are two independent solutions to the system with initial conditions $\phi_1(0) = \eta_1$ and $\phi_2(0) = \eta_2$, and $\alpha_1\phi_1(t) + \alpha_2\phi_2(t)$ is also a solution $\forall\, \alpha_i \in \mathbb{R}$, $i = 1, 2$.
Def. The state transition matrix $\Phi(t, t_0)$ of the differential equation $(\otimes)$ satisfies:

1. $\Phi(t, t_0)$ is uniquely determined by $A(t)$.
2. $\Phi(t, t_0)$ satisfies the matrix differential equation $\dot{M}(t) = A(t)M(t)$, $M(t_0) = I$, $M(t) \in \mathbb{R}^{n\times n}$.

Proof: Since each $\phi_i$ is uniquely determined by $A(t)$ for each initial condition $\eta_i$, $\Phi(t, t_0)$ is also uniquely determined by $A(t)$.
If, $\forall\, t$ and $t_0$, $A(t)$ has the following commutative property:

$$A(t)\left(\int_{t_0}^{t}A(\tau)\,d\tau\right) = \left(\int_{t_0}^{t}A(\tau)\,d\tau\right)A(t)$$

then

$$\Phi(t, t_0) = e^{\int_{t_0}^{t}A(\tau)\,d\tau}$$
Example: Compute the state transition matrix $\Phi(t, t_0)$ for the differential equation

$$\dot{x}(t) = \begin{bmatrix}-1 & e^{2t}\\ 0 & -1\end{bmatrix}x(t)$$

We can show that

$$A(t)\int_{t_0}^{t}A(\tau)\,d\tau = \int_{t_0}^{t}A(\tau)\,d\tau\,A(t) = \begin{bmatrix}t - t_0 & -\left(\frac{1}{2}\left(e^{2t} - e^{2t_0}\right) + (t - t_0)e^{2t}\right)\\ 0 & t - t_0\end{bmatrix}$$
Hence,

$$\Phi(t, t_0) = e^{\int_{t_0}^{t}A(\tau)\,d\tau} = I + \int_{t_0}^{t}A(\tau)\,d\tau + \frac{1}{2!}\left(\int_{t_0}^{t}A(\tau)\,d\tau\right)^2 + \frac{1}{3!}\left(\int_{t_0}^{t}A(\tau)\,d\tau\right)^3 + \cdots$$

$$= \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} + \begin{bmatrix}-(t - t_0) & \frac{1}{2}\left(e^{2t} - e^{2t_0}\right)\\ 0 & -(t - t_0)\end{bmatrix} + \begin{bmatrix}\frac{(t - t_0)^2}{2!} & -\frac{1}{2!}(t - t_0)\left(e^{2t} - e^{2t_0}\right)\\ 0 & \frac{(t - t_0)^2}{2!}\end{bmatrix} + \begin{bmatrix}-\frac{(t - t_0)^3}{3!} & \frac{3}{2\cdot 3!}(t - t_0)^2\left(e^{2t} - e^{2t_0}\right)\\ 0 & -\frac{(t - t_0)^3}{3!}\end{bmatrix} + \cdots$$

and

$$\Phi_{11}(t, t_0) = 1 - (t - t_0) + \frac{1}{2!}(t - t_0)^2 - \frac{1}{3!}(t - t_0)^3 + \cdots = e^{-(t - t_0)}$$
$$\Phi_{12}(t, t_0) = \frac{1}{2}\left(e^{2t} - e^{2t_0}\right) - \frac{1}{2}\left(e^{2t} - e^{2t_0}\right)(t - t_0) + \frac{3}{2\cdot 3!}\left(e^{2t} - e^{2t_0}\right)(t - t_0)^2 - \cdots = \frac{1}{2}\left(e^{2t} - e^{2t_0}\right)e^{-(t - t_0)}$$

Finally,

$$\Phi_{22}(t, t_0) = \Phi_{11}(t, t_0) = e^{-(t - t_0)}$$

$$\Phi(t, t_0) = \begin{bmatrix}e^{-(t - t_0)} & \frac{1}{2}\left(e^{t + t_0} - e^{-t + 3t_0}\right)\\ 0 & e^{-(t - t_0)}\end{bmatrix}$$
We can see that the norm of the state transition matrix blows up as time $t$ goes to infinity; therefore, the system is unstable.
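The closed form can be verified by integrating the matrix equation $\dot{M}(t) = A(t)M(t)$, $M(t_0) = I$, numerically (a sketch; the endpoint t1 is arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

def A_of_t(t):
    return np.array([[-1.0, np.exp(2.0 * t)],
                     [0.0, -1.0]])

t0, t1 = 0.0, 1.5

# Integrate Mdot = A(t) M with M(t0) = I, flattened row-major
sol = solve_ivp(lambda t, m: (A_of_t(t) @ m.reshape(2, 2)).ravel(),
                (t0, t1), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
Phi_num = sol.y[:, -1].reshape(2, 2)

# Closed form derived above
Phi = np.array([[np.exp(-(t1 - t0)), 0.5 * (np.exp(t1 + t0) - np.exp(-t1 + 3 * t0))],
                [0.0, np.exp(-(t1 - t0))]])
err = np.max(np.abs(Phi_num - Phi))
```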
The commutativity property holds, for instance, in the following cases:

(1) $A(\cdot)$ is constant.
Proof:

$$A\int_{t_0}^{t}A\,d\tau = \int_{t_0}^{t}A^2\,d\tau = (t - t_0)A^2 = \left(\int_{t_0}^{t}A\,d\tau\right)A$$

(2) If $A(t) = \alpha(t)M$ for a constant matrix $M$, then

$$\int_{t_0}^{t}A(\tau)\,d\tau\,A(t) = \int_{t_0}^{t}\alpha(\tau)\,d\tau\,\alpha(t)M^2 = \alpha(t)\int_{t_0}^{t}\alpha(\tau)\,d\tau\,M^2 = A(t)\int_{t_0}^{t}A(\tau)\,d\tau$$
(3) If $A(t) = \sum_{i=1}^{k}\alpha_i(t)M_i$, then

$$A(t)\int_{t_0}^{t}A(\tau)\,d\tau = \sum_{i=1}^{k}\sum_{j=1}^{k}\alpha_i(t)M_i\int_{t_0}^{t}\alpha_j(\tau)\,d\tau\,M_j = \sum_{i=1}^{k}\sum_{j=1}^{k}\alpha_i(t)\int_{t_0}^{t}\alpha_j(\tau)\,d\tau\,M_iM_j$$

But

$$\int_{t_0}^{t}A(\tau)\,d\tau\,A(t) = \left(\sum_{j=1}^{k}\int_{t_0}^{t}\alpha_j(\tau)\,d\tau\,M_j\right)\left(\sum_{i=1}^{k}\alpha_i(t)M_i\right) = \sum_{j=1}^{k}\sum_{i=1}^{k}\alpha_i(t)\int_{t_0}^{t}\alpha_j(\tau)\,d\tau\,M_jM_i$$
and, if $M_iM_j = M_jM_i$ $\forall\, i, j$ ($i \ne j$), then

$$A(t)\int_{t_0}^{t}A(\tau)\,d\tau = \int_{t_0}^{t}A(\tau)\,d\tau\,A(t)$$

In that case,

$$\Phi(t, t_0) = \prod_{i=1}^{k}e^{M_i\int_{t_0}^{t}\alpha_i(\tau)\,d\tau}$$

Proof: Since $A(t)\int_{t_0}^{t}A(\tau)\,d\tau = \int_{t_0}^{t}A(\tau)\,d\tau\,A(t)$ when $A(t)$ satisfies condition (3), we have

$$\Phi(t, t_0) = e^{\int_{t_0}^{t}A(\tau)\,d\tau}$$

But

$$A(t) = \sum_{i=1}^{k}\alpha_i(t)M_i \;\Rightarrow\; \Phi(t, t_0) = e^{\int_{t_0}^{t}\left(\sum_{i=1}^{k}\alpha_i(\tau)M_i\right)d\tau} = e^{\sum_{i=1}^{k}\int_{t_0}^{t}\alpha_i(\tau)\,d\tau\,M_i}$$

or, since the terms commute,

$$\Phi(t, t_0) = e^{M_1\int_{t_0}^{t}\alpha_1(\tau)\,d\tau}\cdot e^{M_2\int_{t_0}^{t}\alpha_2(\tau)\,d\tau}\cdots e^{M_k\int_{t_0}^{t}\alpha_k(\tau)\,d\tau} = \prod_{i=1}^{k}e^{M_i\int_{t_0}^{t}\alpha_i(\tau)\,d\tau}$$
Def. Any $n\times n$ matrix $M(t)$ satisfying the matrix differential equation $\dot{M}(t) = A(t)M(t)$, $M(t_0) = M_0$, where $\det(M_0) \ne 0$, is a fundamental matrix of solutions.

Def. Let $M(t)$ be any fundamental matrix of $(\otimes)$. Then, $\forall\, t \in \mathbb{R}^+$, the state transition matrix of $(\otimes)$ is given by $\Phi(t, t_0) = M(t)M^{-1}(t_0)$.
Theorem (Semigroup Property): For all $t_1$, $t_0$ and $t$, we have $\Phi(t, t_0) = \Phi(t, t_1)\Phi(t_1, t_0)$.

Theorem (Liouville formula): $\det[\Phi(t, t_0)] = e^{\int_{t_0}^{t}\mathrm{tr}(A(\tau))\,d\tau}$
Now,

$$\frac{d}{dt}\left[\Phi(t, t_0)x_0 + \int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau\right] = \dot{\Phi}(t, t_0)x_0 + \frac{\partial}{\partial t}\int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau$$

$$= A(t)\Phi(t, t_0)x_0 + \Phi(t, t)B(t)u(t) + \int_{t_0}^{t}\frac{\partial}{\partial t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau$$

since, by the Leibniz rule,

$$\frac{\partial}{\partial t}\int_{t_0}^{t}f(t, \tau)\,d\tau = f(t, \tau)\Big|_{\tau = t} + \int_{t_0}^{t}\frac{\partial}{\partial t}f(t, \tau)\,d\tau$$
Hence,

$$\frac{d}{dt}[\bullet] = A(t)\Phi(t, t_0)x_0 + B(t)u(t) + \int_{t_0}^{t}A(t)\Phi(t, \tau)B(\tau)u(\tau)\,d\tau$$

$$= A(t)\left[\Phi(t, t_0)x_0 + \int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau\right] + B(t)u(t) = A(t)x(t) + B(t)u(t)$$

The complete response should be given by

$$y(t) = C(t)\Phi(t, t_0)x_0 + C(t)\int_{t_0}^{t}\Phi(t, \tau)B(\tau)u(\tau)\,d\tau + D(t)u(t)$$
Let us now consider the time-invariant case, i.e., $A(t) = A$, $B(t) = B$, $C(t) = C$ and $D(t) = D$, where $A$, $B$, $C$ and $D$ are constant matrices. Then

$$\Phi(t, t_0) = \Phi(t - t_0, 0) = e^{A(t - t_0)}$$

Proof: Since $A$ and $\int_{t_0}^{t}A\,d\tau$ commute, we have that

$$\Phi(t, t_0) = e^{\int_{t_0}^{t}A\,d\tau} = \exp[A(t - t_0)] = \Phi(t - t_0, 0) = \Phi(t - t_0)$$

This also follows from the fact that $\Phi(t, t_0) = M(t)M^{-1}(t_0)$, since we can always let $M(t) = e^{At}$. The complete response becomes

$$y(t) = Ce^{A(t - t_0)}x_0 + Ce^{At}\int_{t_0}^{t}e^{-A\tau}Bu(\tau)\,d\tau + Du(t)$$
Def. If $A$ is an $n\times n$ matrix, $\lambda \in \mathbb{C}$, $e \in \mathbb{C}^n$ and the equation

$$Ae = \lambda e, \qquad e \ne 0$$

is satisfied, then $\lambda$ is called an eigenvalue of $A$ and $e$ is called an eigenvector of $A$ associated with $\lambda$. The eigenvalues of $A$ are the roots of its characteristic polynomial $\pi_A(\lambda) = \det(\lambda I - A)$; the set of eigenvalues is the spectrum $\sigma(A)$, and the spectral radius of $A$ is

$$\rho(A) = \max\{|\lambda_i| : \lambda_i \in \sigma(A)\}$$

The right eigenvector $e_i$ of $A$ associated with the eigenvalue $\lambda_i$ satisfies the equation $Ae_i = \lambda_ie_i$, whereas the left eigenvector $w_i \in \mathbb{C}^n$ of $A$ associated with $\lambda_i$ satisfies the equation $w_i^*A = \lambda_iw_i^*$, where $(\cdot)^*$ designates the complex conjugate transpose of a vector. If $\lambda \in \sigma(A)$ and $\lambda$ is complex, then $\lambda^* \in \sigma(A)$. The eigenvectors associated with $\lambda$ and $\lambda^*$ will be $e$ and $e^*$, respectively.
Example: Find the right eigenvectors of the matrix

$$A = \begin{bmatrix}-2 & -5 & -5\\ 1 & -1 & 0\\ 0 & 1 & 0\end{bmatrix}$$

The characteristic polynomial of $A$ is given by

$$\pi_A(\lambda) = \det(\lambda I - A) = \lambda^3 + 3\lambda^2 + 7\lambda + 5 = (\lambda + 1)\left(\lambda^2 + 2\lambda + 5\right)$$

Therefore, its spectrum is described by $\sigma(A) = \{-1,\ -1 - j2,\ -1 + j2\}$.

Now,

$$A\tilde{e}_i = \lambda_i\tilde{e}_i \;\Rightarrow\; (\lambda_iI - A)\tilde{e}_i = \tilde{0}$$

or

$$\begin{bmatrix}\lambda_i + 2 & 5 & 5\\ -1 & \lambda_i + 1 & 0\\ 0 & -1 & \lambda_i\end{bmatrix}\tilde{e}_i = \tilde{0}$$

For $\lambda_2 = -1 - j2$,

$$\begin{bmatrix}1 - j2 & 5 & 5\\ -1 & -j2 & 0\\ 0 & -1 & -1 - j2\end{bmatrix}\tilde{e}_2 = \tilde{0}$$
or

$$(1 - j2)e_{21} + 5e_{22} + 5e_{23} = 0$$
$$-e_{21} - j2e_{22} = 0 \;\Rightarrow\; e_{21} = -j2e_{22}$$
$$-e_{22} - (1 + j2)e_{23} = 0 \;\Rightarrow\; e_{22} = -(1 + j2)e_{23}$$

and $\tilde{e}_2 = \begin{bmatrix}-4 + j2 & -1 - j2 & 1\end{bmatrix}^T$.
Theorem: $A$ is diagonalizable if and only if it has $n$ linearly independent eigenvectors.

Proof: If $A$ has $n$ linearly independent eigenvectors $e_1, e_2, \ldots, e_n$, form the nonsingular matrix $T = [e_1\ e_2\ \cdots\ e_n]$. Now,

$$T^{-1}AT = T^{-1}[Ae_1\ Ae_2\ \cdots\ Ae_n] = T^{-1}[\lambda_1e_1\ \lambda_2e_2\ \cdots\ \lambda_ne_n] = T^{-1}[e_1\ e_2\ \cdots\ e_n]D = T^{-1}TD = D$$

where $D = \mathrm{diag}[\lambda_1, \lambda_2, \ldots, \lambda_n]$, and $\lambda_i$, $i = 1, 2, \ldots, n$ are the eigenvalues of $A$.

Conversely, suppose there exists a matrix $T$ such that $T^{-1}AT = D$ is diagonal. Then $AT = TD$. Let $T = [t_1\ t_2\ \cdots\ t_n]$; then

$$AT = [At_1\ At_2\ \cdots\ At_n] = [t_1d_{11}\ t_2d_{22}\ \cdots\ t_nd_{nn}] = TD \;\Rightarrow\; At_i = d_{ii}t_i$$

which implies that the $i$th column of $T$ is an eigenvector of $A$ associated with the eigenvalue $d_{ii}$. Since $T$ is nonsingular, there are $n$ linearly independent eigenvectors.
Now, if A is diagonalizable, then e^At = Te^Dt T^-1 because

    e^At = e^(TDT^-1)t
         = I + TDT^-1 t + (1/2!)(TDT^-1)²t² + (1/3!)(TDT^-1)³t³ + …
         = I + TDT^-1 t + (1/2!)(TDT^-1)(TDT^-1)t² + (1/3!)(TDT^-1)(TDT^-1)(TDT^-1)t³ + …
         = I + TDT^-1 t + (1/2!)TD²T^-1 t² + (1/3!)TD³T^-1 t³ + …
         = T[I + Dt + (1/2!)D²t² + (1/3!)D³t³ + …]T^-1 = Te^Dt T^-1
Example: For the matrix

    A = [ -2  -5  -5 ]
        [  1  -1   0 ]
        [  0   1   0 ]

compute e^At. We already know that σ(A) = {-1, -1 - j2, -1 + j2}. Now,

    T = [  0     -4 + j2   -4 - j2 ]
        [ -1     -1 - j2   -1 + j2 ]
        [  1        1         1    ]

The inverse of T is

    T^-1 = (1/8) [  2       2        10     ]
                 [ -1    -1 + j2   -1 - j2  ]
                 [ -1     1 + j2   -1 - j2  ]
Therefore,

    D = [ -1      0         0     ]
        [  0   -1 - j2      0     ]
        [  0      0      -1 + j2  ]

    e^Dt = [ e^-t        0              0        ]
           [  0     e^-(1+j2)t          0        ]
           [  0          0         e^-(1-j2)t    ]
Proposition: Suppose D is a block diagonal matrix with square blocks Di, i = 1, 2, …, n, i.e.,

    D = [ D1   0   …   0  ]
        [  0  D2       ⋮  ]
        [  ⋮       ⋱   ⋮  ]
        [  0   0   …   Dn ]

Then,

    e^Dt = [ e^D1t     0     …     0    ]
           [   0     e^D2t         ⋮    ]
           [   ⋮             ⋱     ⋮    ]
           [   0       0     …   e^Dnt  ]
Example: For the same A matrix of the previous example, compute e^At using the
real block-diagonal form. Let

    T = [e1  Re{e2}  Im{e2}] = [  0  -4   2 ]
                               [ -1  -1  -2 ]
                               [  1   1   0 ]

then,

    T^-1 = (1/4) [  1   1   5 ]
                 [ -1  -1  -1 ]
                 [  0  -2  -2 ]

and

    D = T^-1 A T = [ -1   0   0 ]   = [ D1   0  ]
                   [  0  -1  -2 ]     [  0   D2 ]
                   [  0   2  -1 ]

But,

    e^D1t = e^-t   and   e^D2t = e^-t [ cos 2t  -sin 2t ]
                                      [ sin 2t   cos 2t ]

Finally,

    e^At = T e^Dt T^-1 = e^-t T [ 1     0        0    ] T^-1
                                [ 0  cos 2t  -sin 2t  ]
                                [ 0  sin 2t   cos 2t  ]
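As a numerical sanity check (a sketch, assuming numpy and scipy are available), the real block-diagonal construction above can be compared against a direct matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-2.0, -5.0, -5.0],
              [1.0, -1.0, 0.0],
              [0.0, 1.0, 0.0]])
# Columns: real eigenvector for -1, then Re and Im parts of the
# eigenvector for -1 - j2, as in the example.
T = np.array([[0.0, -4.0, 2.0],
              [-1.0, -1.0, -2.0],
              [1.0, 1.0, 0.0]])

t = 0.7  # arbitrary sample time
c, s = np.cos(2.0 * t), np.sin(2.0 * t)
# exp(Dt) = e^{-t} * blkdiag(1, rotation by angle 2t).
eDt = np.exp(-t) * np.array([[1.0, 0.0, 0.0],
                             [0.0, c, -s],
                             [0.0, s, c]])
eAt_blocks = T @ eDt @ np.linalg.inv(T)
eAt_direct = expm(A * t)
```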
Hence,

    y(t) = ∫_{-∞}^{∞} H(t,τ) u(τ) dτ

Let u(t) = δ(t - τ)ηi, ηi = [0 … 0 1 0 … 0]^T, where the 1 appears at the ith location; then for t ≥ τ

    y(t) = C(t) ∫_{-∞}^{t} Φ(t,q) B(q) δ(q - τ) ηi dq + D(t) δ(t - τ) ηi
         = C(t) Φ(t,τ) Bi(τ) + Di(t) δ(t - τ)

Now,

    y(t) = ∫_{-∞}^{∞} H(t,q) u(q) dq
If the system is causal, then H(t, τ) = 0 for τ > t. Thus, for t ≥ τ

    y(t) = ∫_{-∞}^{t} H(t,q) u(q) dq

and the response to u(t) = δ(t - τ)ηi is

    y(t) = { hi(t,τ),   t ≥ τ
           { 0,         t < τ

Hence,

    H(t,τ) = C(t)Φ(t,τ)B(τ) + D(t)δ(t - τ),   t ≥ τ
           = 0,                               t < τ

If A(•), B(•), C(•) and D(•) are constant matrices, then

    H(t,τ) = H(t - τ, 0) ≡ H(t - τ) = { Ce^{A(t-τ)}B + Dδ(t - τ),   t ≥ τ
                                      { 0,                          t < τ
Consider again the time-invariant system state model

    ẋ(t) = Ax(t) + Bu(t),   x(0) = x0
    y(t) = Cx(t) + Du(t)

In the s-domain,

    X(s) = (sI - A)^-1 x0 + (sI - A)^-1 BU(s)
    Y(s) = C(sI - A)^-1 x0 + [C(sI - A)^-1 B + D]U(s)

If the system is initially at rest, i.e., x0 = 0,

    Y(s) = [C(sI - A)^-1 B + D]U(s)

Def. The transfer function matrix of the time-invariant state model is given by

    H(s) = C(sI - A)^-1 B + D
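For a concrete single-input single-output illustration (a sketch, assuming scipy is available), the transfer function can be recovered from a state model; the matrices below are the second-order pair used in a later example of these notes:

```python
import numpy as np
from scipy.signal import ss2tf

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# H(s) = C (sI - A)^-1 B + D = 1 / (s^2 + 3s + 2)
num, den = ss2tf(A, B, C, D)
```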
Def. (The Leverrier Algorithm) Write the resolvent as

    (sI - A)^-1 = (N1 s^{n-1} + N2 s^{n-2} + … + Nn) / (s^n + a1 s^{n-1} + … + an)

The matrices Ni and the coefficients ai can be computed recursively:

    N1 = I,              a1 = -tr(A)
    N2 = N1A + a1I,      a2 = (-1/2) tr(N2A)
    N3 = N2A + a2I,      a3 = (-1/3) tr(N3A)
    ⋮
    Ni = Ni-1A + ai-1I,  ai = (-1/i) tr(NiA)

and 0 = NnA + anI, where tr(M) is the trace of the matrix M.
Example: Let

    A = [ -2   0   1 ]
        [  1  -2   0 ]
        [  1   1  -1 ]

Then, N1 = I, a1 = -tr(A) = -(-5) = 5, and

    N2 = A + a1I = [ 3  0  1 ]
                   [ 1  3  0 ]
                   [ 1  1  4 ]

    N2A = [ -5   1   2 ]
          [  1  -6   1 ]
          [  3   2  -3 ]

    a2 = -(1/2) tr(N2A) = 7  ⇒  N3 = N2A + a2I = [ 2  1  2 ]
                                                 [ 1  1  1 ]
                                                 [ 3  2  4 ]
and

    N3A = [ -1   0   0 ] ,   a3 = -(1/3) tr(N3A) = 1
          [  0  -1   0 ]
          [  0   0  -1 ]

We can show that N3A + a3I = 0.
Finally, the characteristic polynomial of A is π_A(λ) = λ³ + 5λ² + 7λ + 1.
In general, let

    (sI - A)^-1 ≡ R(s)/π_A(s) = (N1 s^{n-1} + N2 s^{n-2} + … + Nn-1 s + Nn) / (s^n + a1 s^{n-1} + … + an-1 s + an)
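The recursion is easy to implement; a minimal sketch (assuming numpy) applied to the example matrix above:

```python
import numpy as np

def leverrier(A):
    """Return ([N1..Nn], [a1..an]) such that
    (sI - A)^-1 = (N1 s^{n-1} + ... + Nn) / (s^n + a1 s^{n-1} + ... + an)."""
    n = A.shape[0]
    N = [np.eye(n)]
    a = [-np.trace(A)]
    for i in range(2, n + 1):
        Ni = N[-1] @ A + a[-1] * np.eye(n)   # Ni = N_{i-1} A + a_{i-1} I
        N.append(Ni)
        a.append(-np.trace(Ni @ A) / i)      # ai = -(1/i) tr(Ni A)
    return N, a

A = np.array([[-2.0, 0.0, 1.0],
              [1.0, -2.0, 0.0],
              [1.0, 1.0, -1.0]])
N, a = leverrier(A)
```

The termination identity NnA + anI = 0 doubles as a built-in correctness check of the recursion.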
Recall that Φ̇(t) = AΦ(t), Φ(0) = I. Taking Laplace transforms,
sL{Φ(t)} - Φ(0) = AL{Φ(t)}. But, Φ(0) = I, thus L{Φ(t)} = (sI – A)-1.
Clearly, Φ(t) = L-1{(sI – A)-1}.
Cayley-Hamilton Theorem: Every square matrix satisfies its own characteristic
equation, i.e., π_A(A) = A^n + a1A^{n-1} + … + anI = 0. Hence,

    A^n = -Σ_{i=1}^{n} ai A^{n-i}

Likewise, Aπ_A(A) = 0 ⇒

    A^{n+1} = -Σ_{i=1}^{n} ai A^{n-i+1}

or

    A^{n+1} = -a1A^n - Σ_{i=2}^{n} ai A^{n-i+1}
            = -a1[-Σ_{i=1}^{n} ai A^{n-i}] - Σ_{i=2}^{n} ai A^{n-i+1}
            = a1²A^{n-1} + Σ_{i=2}^{n} [a1I - A] ai A^{n-i}
Observation 1: An+1 is also a linear combination of I, A, A2, … , An-1.
Example: Calculate f(A) = A^10 + 3A with

    A = [  0   1 ]
        [ -1  -2 ]
The characteristic polynomial of A is πA(λ) = λ2 + 2λ + 1 = (λ+1)2 ⇒ λ1 = λ2 = -1.
Let h(λ) = β0 + β1λ. Then with f(λ) = λ10 + 3λ, we get
f(λ1) = f(-1) = (-1)10 + 3(-1) = -2 = β0 - β1.
But, f(λ2) = f(λ1) ! For the repeated eigenvalue case, the solution procedure is modified as
follows:
Let

    π_A(λ) = Π_{i=1}^{m} (λ - λi)^{ni}   with   n = Σ_{i=1}^{m} ni

Then the derivative condition at the repeated eigenvalue gives

    f^(1)(λ)|_{λ=λ1} = 10λ⁹ + 3|_{λ=-1} = 10(-1)⁹ + 3 = -7 = h^(1)(λ)|_{λ=λ1} = β1
We must solve the equations β0 - β1 = -2 and β1 = -7. This implies that β0 = -9.
Moreover,
    f(A) = A^10 + 3A = h(A) = β0I + β1A = -9I - 7A = [ -9  -7 ]
                                                     [  7   5 ]
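A quick check of this reduction (a sketch, assuming numpy):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -2.0]])

# Brute-force evaluation of f(A) = A^10 + 3A.
f_direct = np.linalg.matrix_power(A, 10) + 3.0 * A

# Cayley-Hamilton reduction computed above: f(A) = h(A) = -9 I - 7 A.
f_reduced = -9.0 * np.eye(2) - 7.0 * A
```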
Suppose again the eigenvalues of A are distinct, then

    (sI - A)^-1 = R(s)/π_A(s) = R1/(s - λ1) + R2/(s - λ2) + … + Rn/(s - λn)

where

    π_A(s) = s^n + a1 s^{n-1} + … + an-1 s + an = Π_{i=1}^{n} (s - λi)

and

    Ri = lim_{s→λi} [(s - λi) R(s)/π_A(s)]
Consequently, for t ≥ 0

    Φ(t) = L^-1{(sI - A)^-1} = Σ_{i=1}^{n} Ri L^-1{1/(s - λi)} = Σ_{i=1}^{n} Ri e^{λi t} = e^{At}
Example: Consider the system ẋ(t) = Ax(t). Obtain Φ(t) using the partial fraction
expansion method with the system matrix A given by

    A = [  0   1 ]
        [ -2  -3 ]

Now,

    sI - A = [ s   -1  ]   ⇒   R(s) = [ s+3   1 ]
             [ 2  s+3  ]              [ -2    s ]

    π_A(s) = det(sI - A) = s² + 3s + 2 = (s + 1)(s + 2) = (s - λ1)(s - λ2)

The residue matrices are found as follows:

    R1 = lim_{s→-1} (s + 1) R(s)/π_A(s) = lim_{s→-1} (1/(s+2)) [ s+3  1 ]  =  [  2   1 ]
                                                               [ -2   s ]     [ -2  -1 ]

    R2 = lim_{s→-2} (s + 2) R(s)/π_A(s) = lim_{s→-2} (1/(s+1)) [ s+3  1 ]  =  [ -1  -1 ]
                                                               [ -2   s ]     [  2   2 ]
The inverse of sI - A is equal to

    (sI - A)^-1 = [  2   1 ] /(s+1)  +  [ -1  -1 ] /(s+2)
                  [ -2  -1 ]            [  2   2 ]

so

    Φ(t) = [  2   1 ] e^-t  +  [ -1  -1 ] e^-2t
           [ -2  -1 ]          [  2   2 ]

or

    Φ(t) = [  2e^-t - e^-2t       e^-t - e^-2t  ]
           [ -2e^-t + 2e^-2t     -e^-t + 2e^-2t ]
Suppose now that the matrix A contains repeated eigenvalues; then the adjoint
matrix R(s) and π_A(s) have common factors. Consider, for example, the matrix

    A = [ λ1   1   0  ]
        [  0  λ1   0  ]
        [  0   0  λ1  ]

Then,

    (sI - A)^-1 = R(s)/π_A(s)

                = [ (s-λ1)²   s-λ1      0     ]
                  [    0     (s-λ1)²    0     ] / (s - λ1)³
                  [    0        0    (s-λ1)²  ]

                = [ s-λ1    1     0   ]
                  [  0    s-λ1    0   ] / (s - λ1)²
                  [  0     0    s-λ1  ]
We know from the Cayley-Hamilton theorem that if πA(λ) is the characteristic
polynomial of the nxn matrix A, then πA(A) = 0.
Def. The monic polynomial of least degree which annihilates the matrix A is called
the minimal polynomial of A and is denoted by ψA(λ).
Theorem: For every nxn matrix A, the minimal polynomial ψA(λ) divides the
characteristic polynomial πA(λ). Moreover, ψA(λ) = 0 if and only if λ is an
eigenvalue of A, so that every root of πA(λ) = 0 is a root of ψA(λ) = 0.
Proof: If πA(λ) annihilates A and if ψA(λ) is a monic polynomial of minimum
degree that annihilates A, then deg (ψA(λ)) ≤ deg(πA(λ)).
By the Euclidean algorithm there exist polynomials h(λ) and r(λ) such that
πA(λ) = ψA(λ)h(λ)+ r(λ) and deg(r(λ)) < deg(ψA(λ))
But, 0 = πA(A) = ψA(A)h(A) + r(A) = 0·h(A) + r(A) ⇒ r(A) = 0.
However, deg(r(λ)) < deg(ψA(λ)), and by definition ψA(λ) is the polynomial of
minimum degree such that ψA(A) = 0 ⇒ r(λ) ≡ 0 ⇒ ψA(λ) divides πA(λ).
This result implies that every root of ψA(λ) = 0 is a root of πA(λ) = 0 and hence
every root of ψA(λ) = 0 is an eigenvalue of A.
If λ ∈ σ (A) and if x ≠ 0 is its corresponding eigenvector, then Ax = λ x and
0 = ψA(A) x = ψA(λ) x ⇒ ψA(λ) = 0.
If A has repeated eigenvalues λ1, λ2, …, λσ, λi ≠ λj, i ≠ j, i, j = 1, 2, …, σ, then for
m1 + m2 + … + mσ ≤ n the minimal polynomial ψA(λ) has the structure

    ψ_A(λ) = (λ - λ1)^{m1} (λ - λ2)^{m2} … (λ - λσ)^{mσ}
Theorem: Let A ∈ R^{nxn}, then

    (sI - A)^-1 = R(s)/π_A(s) = R̂(s)/ψ_A(s) = Σ_{i=1}^{σ} Σ_{j=1}^{mi} Rij/(s - λi)^j

where

    Rij = (1/(mi - j)!) lim_{s→λi} [ d^{mi-j}/ds^{mi-j} ( (s - λi)^{mi} (sI - A)^-1 ) ]

In this particular case, for t ≥ 0 the state transition matrix is given by

    Φ(t) = L^-1{(sI - A)^-1} = Σ_{i=1}^{σ} Σ_{j=1}^{mi} Rij L^-1{1/(s - λi)^j}
         = Σ_{i=1}^{σ} Σ_{j=1}^{mi} Rij (t^{j-1}/(j - 1)!) e^{λi t}
Example: Consider a dynamic system with the A matrix

    A = [  0   1   0 ]
        [  0   0   1 ]
        [ -2  -5  -4 ]

Then π_A(s) = det(sI - A) = (s + 1)²(s + 2) = s³ + 4s² + 5s + 2
⇒ a1 = 4, a2 = 5, a3 = 2. From the Leverrier algorithm, N2 = A + a1I and

    N3 = N2A + a2I = [  5   4   1 ]
                     [ -2   0   0 ]
                     [  0  -2   0 ]

So, R(s) = N1s² + N2s + N3 and

    (sI - A)^-1 = R(s) / ((s + 1)²(s + 2))

In this case, the minimal polynomial is the same as the characteristic polynomial,
i.e., ψ_A(s) = π_A(s) = s³ + 4s² + 5s + 2.
The inverse of sI - A is

    (sI - A)^-1 = R11/(s + 1) + R12/(s + 1)² + R21/(s + 2)
where the residue matrices are

    R11 = (1/1!) lim_{s→-1} { d/ds [ (s + 1)² (sI - A)^-1 ] }
        = lim_{s→-1} { d/ds [ (Is² + N2s + N3)/(s + 2) ] } = -3I + 2N2 - N3

        = [  0  -2  -1 ]
          [  2   5   2 ]
          [ -4  -8  -3 ]

    R12 = lim_{s→-1} { (Is² + N2s + N3)/(s + 2) } = I - N2 + N3 = [  2   3   1 ]
                                                                  [ -2  -3  -1 ]
                                                                  [  2   3   1 ]

    R21 = lim_{s→-2} { (Is² + N2s + N3)/(s + 1)² } = 4I - 2N2 + N3 = [  1   2   1 ]
                                                                    [ -2  -4  -2 ]
                                                                    [  4   8   4 ]
Therefore, for t ≥ 0, the state transition matrix is given by

    Φ(t) = R11 e^-t + R12 t e^-t + R21 e^-2t
Example: Consider now the matrix

    Â = [ -1   1   0 ]
        [  0  -1   0 ]
        [  0   0  -2 ]

Then, its characteristic polynomial is the same as that of the last example, i.e.,
π_Â(s) = π_A(s) = (s + 1)²(s + 2)
In this case,

    e^Ât = [ e^-t   te^-t    0    ]
           [  0      e^-t    0    ]
           [  0       0    e^-2t  ]

But,

    A = TÂT^-1  ⇒  e^At = Te^Ât T^-1
More generally, suppose A = TJT^-1, where J = diag(J1, J2, …, Jp) is in Jordan form with

    Ji = [ λi   1   0   …   0  ]
         [  0  λi   1       ⋮  ]
         [  ⋮        ⋱  ⋱      ] ,   Ji ∈ R^{ni x ni}
         [  ⋮            ⋱  1  ]
         [  0   …   …   …  λi  ]

then

    e^Jt = [ e^J1t     0     …     0    ]
           [   0     e^J2t         ⋮    ]
           [   ⋮             ⋱     ⋮    ]
           [   0       …     …   e^Jpt  ]

where

    e^Jit = [ e^{λit}   te^{λit}   …   (t^{ni-1}/(ni - 1)!) e^{λit} ]
            [    0       e^{λit}                ⋮                   ]
            [    ⋮                    ⋱         ⋮                   ]
            [    0          …        0        e^{λit}               ]
Theorem: The generalized eigenvectors {e1^1, …, e1^{n1}, …, ep^1, …, ep^{np}} of A associated
with the eigenvalues {λ1, …, λp}, each with multiplicity ni, i = 1, 2, … , p, are
linearly independent.
Example: Let

    A = [  0   1   0 ]
        [  0   0   1 ]
        [ -2  -5  -4 ]

We already know that σ(A) = {-1, -1, -2}.
Clearly, n1 = 2 and n2 = 1. Moreover,

    (A - λ1I)e1^1 = [  1   1   0 ] e1^1 = 0   ⇒   e1^1 = [ 1  -1  1 ]^T
                    [  0   1   1 ]
                    [ -2  -5  -3 ]

    (A - λ2I)e2^1 = [  2   1   0 ] e2^1 = 0   ⇒   e2^1 = [ 1  -2  4 ]^T
                    [  0   2   1 ]
                    [ -2  -5  -2 ]
With T = [e2^1  e1^1  e1^2], where the generalized eigenvector e1^2 satisfies
(A - λ1I)e1^2 = e1^1,

    Â = T^-1AT = [ -2   0   0 ] = J
                 [  0  -1   1 ]
                 [  0   0  -1 ]

    e^Ât = e^Jt = [ e^-2t    0      0    ]
                  [   0     e^-t  te^-t  ]
                  [   0      0     e^-t  ]
The matrix exponential of the original system is therefore given by

    e^At = Te^Jt T^-1 = [  1   1   1 ] [ e^-2t    0      0   ] [  1   2   1 ]
                        [ -2  -1   0 ] [   0     e^-t  te^-t ] [ -2  -5  -2 ]
                        [  4   1  -1 ] [   0      0     e^-t ] [  2   3   1 ]
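This Jordan-form factorization can be verified numerically (a sketch, assuming numpy and scipy):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-2.0, -5.0, -4.0]])
# Columns: eigenvector for -2, eigenvector for -1, generalized eigenvector for -1.
T = np.array([[1.0, 1.0, 1.0],
              [-2.0, -1.0, 0.0],
              [4.0, 1.0, -1.0]])
J = np.linalg.inv(T) @ A @ T    # should equal the Jordan form above

t = 0.5  # arbitrary sample time
# Closed-form exponential of the Jordan form.
eJt = np.array([[np.exp(-2.0 * t), 0.0, 0.0],
                [0.0, np.exp(-t), t * np.exp(-t)],
                [0.0, 0.0, np.exp(-t)]])
eAt = T @ eJt @ np.linalg.inv(T)
```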
Discrete-Time Systems:
Consider the linear time-invariant discrete-time system

    x(k + 1) = Ax(k) + Bu(k),   x(0) = x0
    y(k) = Cx(k) + Du(k)

Taking the one-sided Z transform yields

    X(z) = z(zI - A)^-1 x0 + (zI - A)^-1 BU(z)
    Y(z) = zC(zI - A)^-1 x0 + [C(zI - A)^-1 B + D]U(z)

The state transition matrix is given by Φ(k) = A^k = Z^-1{z(zI - A)^-1}.
The transfer function matrix of a discrete-time linear dynamic system is defined by

    H(z) = C(zI - A)^-1 B + D
On the other hand, the unit sample response matrix is then given by
h(k) = Z-1{H(z)}.
Def. M ∈ Rnxn is symmetric if MT = M.
Theorem: The eigenvalues of M = MT ∈ Rnxn are real.
Proof: Let λ be an eigenvalue of M and v its eigenvector. Then Mv = λ v. Now, if
v* is the complex conjugate transpose of v,
v*Mv = v*(λv) = λ(v*v)
But, v*(Mv) and v*v are real ⇒ λ must be real, ⇒ all eigenvalues of M must be
real. This can be verified from the fact that
(v*Mv)* = (Mv)*v = v*M*v = v*MTv = v*Mv.
For example, a symmetric M can be factored as M = QDQ^T with Q orthogonal:

    Q = [ 1/√2  -1/√2 ]      D = [ 4  0 ]
        [ 1/√2   1/√2 ]          [ 0  2 ]
Singular value decomposition (SVD)
Let H ∈ R^{mxn} and define M ≡ H^TH ∈ R^{nxn}. Then M = M^T ≥ 0. Let r be the total
number of positive eigenvalues of M; then we may arrange them such that

    λ1 ≥ λ2 ≥ … ≥ λr > 0 = λr+1 = … = λn

Let p = min{m, n}; then the set {σ1 ≥ σ2 ≥ … ≥ σr > 0 = σr+1 = … = σp} is called
the set of singular values of H, where σi = √λi and r = rank(H).
Example: Let a rectangular matrix H be given by

    H = [  2  1 ]
        [ -1  2 ]
        [  2  4 ]

Then

    M = H^TH = [ 9   8 ]
               [ 8  21 ]

Now, det(λI - M) = λ² - 30λ + 125 ⇒ eigenvalues(M) = {25, 5} and the singular
values of H are the square roots of the eigenvalues of M, i.e., {5, √5}.
Example: Let H now be described by

    H = [ 4  0  0 ]
        [ 0  2  1 ]

Then

    M = H^TH = [ 16  0  0 ]
               [  0  4  2 ]
               [  0  2  1 ]

and

    det(λI - M) = det [ λ-16    0     0  ] = (λ - 16)(λ - 5)λ
                      [   0    λ-4   -2  ]
                      [   0    -2    λ-1 ]

which implies that the set of eigenvalues of M is {16, 5, 0} ⇒ the singular values of H
are {4, √5}, since min{m, n} = min{2, 3} = 2. Also, rank(H) = 2.
Theorem: Let H ∈ Rmxn, then H = RSQT with RTR = RRT = Im, QTQ = QQT = In,
and S ∈Rmxn with the singular values of H on its main diagonal and such that
QTHTHQ = D = STS with D a diagonal matrix with the squared singular values of
H on its main diagonal.
Example: Let H be given by

    H = [ 4  0  0 ]
        [ 0  2  1 ]

The orthonormal eigenvectors of M = H^TH are

    q~1 = [ 1    0      0  ]^T
    q~2 = [ 0   2/√5  1/√5 ]^T
    q~3 = [ 0   1/√5 -2/√5 ]^T

Thus,

    Q = [q~1 q~2 q~3] = [ 1    0      0   ]
                        [ 0   2/√5   1/√5 ]
                        [ 0   1/√5  -2/√5 ]

    S = HQ = [ 4   0   0 ]  =  [ σ1   0   0 ]
             [ 0  √5   0 ]     [  0  σ2   0 ]

    S^TS = [ 16  0  0 ]  =  [ σ1²   0    0  ]
           [  0  5  0 ]     [  0   σ2²   0  ]
           [  0  0  0 ]     [  0    0   σ3² ]
Suppose the impulse response is absolutely integrable, i.e.,

    ∫₀^∞ |h(t)| dt ≤ M < ∞

Then, for any bounded input |u(t)| ≤ k1,

    |y(t)| = |∫₀^∞ h(τ)u(t - τ)dτ| ≤ ∫₀^∞ |h(τ)||u(t - τ)|dτ ≤ ∫₀^∞ |h(τ)| k1 dτ = k1 ∫₀^∞ |h(τ)| dτ ≤ k1M < ∞

⇒ y(t) is bounded.
Suppose h(t) is not absolutely integrable. Then for a causal, linear time-invariant
system, with u(t) = k1 > 0 and h(t) > 0, t ≥ 0, with nondecreasing envelope,
the output is given by

    y(t) = ∫₀^t h(τ)u(t - τ)dτ

For t ≥ 0, this implies that

    y(t) = k1 ∫₀^t h(τ)dτ

and as t → ∞, ∫₀^t h(τ)dτ → ∞
⇒ y(t) is not bounded even when u(t) is bounded. Thus, h(t) must be absolutely
integrable.
Expanding H(s) in partial fractions with poles -pi of multiplicity ni, where

    Σ_{i=1}^{m} ni = n

we get

    H(s) = Σ_{i=1}^{m} Σ_{j=1}^{ni} kij/(s + pi)^j

and the impulse response is given by

    h(t) = L^-1{H(s)} = Σ_{i=1}^{m} Σ_{j=1}^{ni} kij L^-1{1/(s + pi)^j}
         = Σ_{i=1}^{m} Σ_{j=1}^{ni} (kij/(j - 1)!) t^{j-1} e^{-pi t}

t ≥ 0, which is absolutely integrable:

    ∫₀^∞ |hij(t)| dt = Kij < ∞
Let Hin(s) ≡ (sI - A)^-1 be the internal transfer function matrix of some continuous
time LTI dynamic system. Then, with u(t) = 0 and for some ||x~(0)||₂ = k < ∞, we get

    ||x~(t)||₂ = ||e^At x~(0)||₂ < ∞

and
1. The unforced system is marginally stable if and only if the poles of Hin(s) (or the
eigenvalues of A) have either zero or negative real parts, and those with zero real
parts are simple roots of the minimal polynomial of A
2. The unforced system is asymptotically stable if and only if all the poles of Hin(s)
(all eigenvalues of A) lie strictly on the left half of the s-plane.
Example: Let

    x~̇(t) = [ -2   1/2 ] x~(t)
             [  2  -1/2 ]

then

    Hin(s) = [ s + 1/2    1/2   ] / (s(s + 5/2))
             [    2      s + 2  ]

⇒ the poles of Hin(s) are located at s = 0 and s = -5/2, and s = 0 is a simple
root ⇒ the system is marginally stable.
Suppose now that the input stimulus is nonzero and that the initial condition is
zero, i.e., x~(0) = 0 and u~(t) ≠ 0. Then

    X~(s) = (sI - A)^-1 BU~(s)

and

    x~(t) = ∫₀^t e^{A(t-τ)} Bu~(τ)dτ

Furthermore,

    Y~(s) = [C(sI - A)^-1 B + D]U~(s) = H(s)U~(s)
Clearly, every pole of H(s) is an eigenvalue of A ⇒ if every eigenvalue of A has a
negative real part, then all poles of H(s) lie in the left half of the s-plane ⇒ the
system described by

    x~̇(t) = Ax~(t) + Bu~(t),   x~(0) = 0     ⊗
    y~(t) = Cx~(t) + Du~(t)

is BIBO stable.
    ẋ1(t) = x2(t) - 2u(t)
    ẋ2(t) = x1(t) + 2u(t)
    y(t) = x2(t)
In block diagram form, the system is realized with two integrators whose outputs
are x1 and x2.
Let x~(t) = [x1(t)  x2(t)]^T. Then

    x~̇(t) = [ 0  1 ] x~(t) + [ -2 ] u(t)
             [ 1  0 ]         [  2 ]

    y(t) = [0  1] x~(t)
Thus,

    π_A(λ) = det(λI - A) = det [  λ  -1 ] = λ² - 1 = (λ + 1)(λ - 1)
                               [ -1   λ ]

and the transfer function of the system is given by

    H(s) = C(sI - A)^-1 B = [0  1] (1/(s² - 1)) [ s  1 ] [ -2 ]
                                                [ 1  s ] [  2 ]
         = (-2 + 2s)/(s² - 1) = 2(s - 1)/((s + 1)(s - 1)) = 2/(s + 1)

so

    h(t) = 2e^-t us(t)

where

    us(t) = { 1,  t > 0
            { 0,  t < 0
Therefore, the system is BIBO stable. However, the system is internally unstable
because of the eigenvalue at λ = 1!
In fact,

    Φ(t) = L^-1{(sI - A)^-1} = L^-1{ [ s  1 ] / (s² - 1) }
                                     [ 1  s ]

         = L^-1{ [ (1/2)/(s+1) + (1/2)/(s-1)    -(1/2)/(s+1) + (1/2)/(s-1) ] }
                 [ -(1/2)/(s+1) + (1/2)/(s-1)    (1/2)/(s+1) + (1/2)/(s-1) ]

         = [ (1/2)(e^-t + e^t)   (1/2)(e^t - e^-t) ]
           [ (1/2)(e^t - e^-t)   (1/2)(e^-t + e^t) ]
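The mismatch between BIBO and internal stability can be seen numerically (a sketch, assuming numpy and scipy): the impulse response h(t) = Ce^{At}B decays even though e^{At} itself grows.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[-2.0], [2.0]])
C = np.array([[0.0, 1.0]])

ts = np.linspace(0.0, 5.0, 11)
# Impulse response: the unstable mode e^{t} cancels, leaving 2 e^{-t}.
h = np.array([(C @ expm(A * t) @ B)[0, 0] for t in ts])

# Internal behavior: Phi(t) = e^{At} is unbounded.
phi_norm = np.linalg.norm(expm(A * 5.0))
```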
Lyapunov stability
Consider the following LTI, continuous time, unforced dynamic system,

    x~̇(t) = Ax~(t)

A ∈ R^{nxn}. Then the system is asymptotically stable if every eigenvalue of A
has a negative real part.
Theorem: Let σ(A) = {λ1, … , λn} be the spectrum of A. Then Re{λi} < 0, i =
1, … , n, if and only if for any given N = N^T > 0, the Lyapunov equation

    A^TM + MA = -N

has a unique solution M = M^T > 0.
Example: Let the system matrix be described by

    A = [  0    1    0 ]
        [  0    0    1 ]
        [ -6  -11   -6 ]

then σ(A) = {-1, -2, -3}. Let

    N = [ 1  0  0 ]
        [ 0  2  0 ]
        [ 0  0  3 ]

then N = N^T > 0, since σ(N) = {1, 2, 3}. Solving the Lyapunov equation yields

    M = [  1.858   -0.5    -1.108 ]
        [ -0.5      1.108  -1     ] = M^T > 0
        [ -1.108   -1       3.192 ]
The above matrix M is positive definite since σ(M) = {0.182, 2.005, 3.971}.
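The same solution can be obtained with scipy's Lyapunov solver (a sketch; note that scipy solves aX + Xa^H = q, so A^T is passed as the first argument):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
N = np.diag([1.0, 2.0, 3.0])

# Solve A^T M + M A = -N.
M = solve_continuous_lyapunov(A.T, -N)
eigs_M = np.linalg.eigvalsh(M)   # all positive <=> M > 0
```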
For discrete-time systems, BIBO stability requires the unit sample response to be
absolutely summable, i.e.,

    Σ_{n=0}^{∞} |h(n)| ≤ M < ∞
But, σ(M) = {2.753, 4.872, 9.437} ⇒ M > 0 ⇒ all eigenvalues of A must lie inside
the unit circle.
Def. A state x0 = x(t0) ∈ R^n is controllable over [t0, t1] if there exists an input u(•)
defined over [t0, t1] that drives the state from x(t0) = x0 to a desired value x(t1).
Corollary: Let Q = [B AB … An-1B]. Then for any x ∈ Col-sp [Q], Ax ∈ Col-sp [Q],
i.e., the column space of Q is A-invariant.
Let rank[Q] = p < n, U1 be an nxp matrix whose columns form a basis for the
column space of Q and let U2 be an nx(n-p) matrix whose columns together with
those of U1 form a basis for Rn, i.e., Col-sp [U1 U2] = Rn.
Proposition: Given dx/dt = Ax + Bu, the state transformation [U1 U2]z = x yields

    [ ż1 ]   [ Ã11  Ã12 ] [ z1 ]   [ B1 ]
    [ ż2 ] = [  0   Ã22 ] [ z2 ] + [  0 ] u     ⊗

where rank [B1  Ã11B1  …  Ã11^{p-1}B1] = p, and eq. ⊗ is known as the Kalman
controllable form.
Proof: [U1 U2]ż = ẋ = Ax + Bu = A[U1 U2]z + Bu = [AU1 AU2]z + Bu
Now, AU1 ∈ Col-sp [Q] and AU1 = [Au~1 Au~2 … Au~p]
⇒ each column of AU1 is a linear combination of the columns of U1, or AU1 = U1Ã11
for an appropriate Ã11 ∈ R^{pxp}. Each column of AU2 is in R^n; since the columns of
[U1 U2] are a basis of R^n, there must exist matrices Ã12 and Ã22 such that

    AU2 = [U1 U2] [ Ã12 ] = U1Ã12 + U2Ã22
                  [ Ã22 ]

Thus,

    [U1 U2]ż = [U1 U2] [ Ã11  Ã12 ] z + Bu
                       [  0   Ã22 ]

or

    ż = [ Ã11  Ã12 ] z + [U1 U2]^-1 Bu
        [  0   Ã22 ]
By construction, Col-sp [Q] = Col-sp [B AB … A^{n-1}B] ⊃ Col-sp [B]; thus, there
exists a pxm matrix B1 such that B = U1B1, or

    B = [U1 U2] [ B1 ]   ⇒   [U1 U2]^-1 B = [ B1 ]
                [  0 ]                      [  0 ]

or

    ż = [ Ã11  Ã12 ] z + [ B1 ] u
        [  0   Ã22 ]     [  0 ]

where

    Q~ = [B1  Ã11B1  …  Ã11^{p-1}B1]
Proof: Suppose that rank [Q~] = p and that there exists v ≠ 0, v ∈ R^p, such that
v^TKv = 0, where K is the controllability Grammian of the pair (Ã11, B1).
Let c(τ) ≡ B1^T e^{-Ã11^T τ} v ≡ [c1(τ) … cp(τ)]^T; then

    v^TKv = ∫₀^{t1} c^T(τ)c(τ)dτ = ∫₀^{t1} [c1²(τ) + … + cp²(τ)]dτ = 0

⇒ c(τ) ≡ 0 on [0, t1], so d^j c(τ)/dτ^j |_{τ=0} = 0 for every j as well.
Thus,

    v^T Q~ = v^T [B1  Ã11B1  …  Ã11^{p-1}B1] = 0^T

This implies that rank [Q~] ≠ p, which contradicts the hypothesis that rank [Q~] = p;
hence, K is nonsingular.
Example: Consider the dynamic system described by

    ẋ = [ 1   0 ] x + [ 0 ] u
        [ 0  -1 ]     [ 1 ]

Clearly, the system is not controllable, as the input u does not affect x1. Now,

    Q = [B  AB] = [ 0   0 ]
                  [ 1  -1 ]

rank [Q] = 1 ≠ 2 ⇒ the system is not controllable.
Let U1 = [0 1]^T. Since [0 1]^T spans Col-sp [Q], then, if U2 = [1 1]^T, the columns
of [U1 U2] span R².
Now,

    [U1 U2]^-1 A [U1 U2] = [ -1  -2 ] = [ Ã11  Ã12 ]
                           [  0   1 ]   [  0   Ã22 ]

and

    [U1 U2]^-1 B = [ 1 ]
                   [ 0 ]

Therefore, the equivalent system in the Kalman controllable canonical form is
described by

    ż = [ -1  -2 ] z + [ 1 ] u
        [  0   1 ]     [ 0 ]
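A sketch of the same computation (assuming numpy):

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])

Q = np.hstack([B, A @ B])            # controllability matrix [B AB]
rank_Q = np.linalg.matrix_rank(Q)    # 1 < 2: not controllable

# Kalman controllable form with the basis [U1 U2] chosen in the example.
U = np.array([[0.0, 1.0],
              [1.0, 1.0]])
A_bar = np.linalg.inv(U) @ A @ U
B_bar = np.linalg.inv(U) @ B
```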
Then v^TB = 0^T. Now, taking v = wi, a left eigenvector of A,

    v^TA^k = wi*A^k = (wi*A)A^{k-1} = λi wi*A^{k-1} = λi(wi*A)A^{k-2} = λi² wi*A^{k-2} = … = λi^k wi*

so v^TA^kB = λi^k wi*B = 0^T for all k.
Hence, rank [Q] < n, which contradicts the hypothesis that rank [Q] = n; hence, 3
⇒ 2. The proof that 2 ⇒ 3 basically uses the same type of arguments.
Assume that rank [e^{-At}B] ≠ n, i.e., there exists v ≠ 0 such that v^Te^{-At}B = 0^T.
But,

    d^k/dt^k (v^Te^{-At}B)|_{t=0} = (-1)^k v^TA^kB = 0^T,   ∀ k ≥ 0

⇒ v^TQ = 0^T, which contradicts the assumption that rank [Q] = n. Thus,
rank [Q] = n ⇒ rank [e^{-At}B] = n, i.e., e^{-At}B has n linearly independent rows.
Suppose K̂ is not positive definite. From the previous lemma, K̂ is also not
negative definite; thus, K̂ can only be singular (positive semi-definite). Therefore,
there exists v ≠ 0 such that v^TK̂ = 0^T ⇒ v^TK̂v = 0, or

    0 = v^TK̂v = ∫_{t0}^{t1} v^T e^{A(t1-τ)} BB^T e^{A^T(t1-τ)} v dτ

Let α = τ - t1; then

    v^TK̂v = -∫₀^{t0-t1} v^T e^{-Aα} BB^T e^{-A^Tα} v dα = 0

This implies that the integrand ≡ 0 ⇒ v^Te^{-Aα}B = 0^T ⇒ the rows of e^{-At}B are linearly
dependent, which contradicts rank [e^{-At}B] = n. Therefore, rank [e^{-At}B] = n ⇒ K̂ is
positive definite.
Finally, applying the control u(τ) = -B^T e^{A^T(t1-τ)} K̂^-1 [e^{A(t1-t0)}x(t0) - x(t1)],

    x(t1) = e^{A(t1-t0)}x(t0) - ∫_{t0}^{t1} e^{A(t1-τ)} BB^T e^{A^T(t1-τ)} K̂^-1 [e^{A(t1-t0)}x(t0) - x(t1)] dτ
          = e^{A(t1-t0)}x(t0) - ∫_{t0}^{t1} e^{A(t1-τ)} BB^T e^{A^T(t1-τ)} dτ K̂^-1 [e^{A(t1-t0)}x(t0) - x(t1)]
          = e^{A(t1-t0)}x(t0) - K̂K̂^-1 [e^{A(t1-t0)}x(t0) - x(t1)] = x(t1)
Example: Consider the following linear time-invariant dynamical system

    ẋ = [ -1   0  -1 ]     [ 1  1 ]
        [  0   0   1 ] x + [ 0  0 ] u
        [  1  -1  -1 ]     [ 0  0 ]

then,

    Q = [B  AB  A²B] = [ 1  1  -1  -1   0   0 ]
                       [ 0  0   0   0   1   1 ]
                       [ 0  0   1   1  -2  -2 ]
Hence, rank [Q] = 3 = n. Moreover, the spectrum of A is σ(A) = {-0.4302, -0.7849
± j1.3071} = {λ1, λ2, λ3}, which means that

    rank [(λ1I - A) | B] = rank [ 0.5698     0        1       1  1 ] = 3
                                [   0      -0.4302   -1       0  0 ]
                                [  -1        1       0.5698   0  0 ]

since the matrix has three linearly independent rows.
Likewise, rank [(λiI - A) | B] = 3, i = 2, 3.
Discrete-Time Systems
Example: Consider the following linear shift-invariant discrete-time dynamic
system

    x(k + 1) = [ -1  0 ] x(k) + [ 0 ] u(k)
               [  1  0 ]        [ 1 ]

then if x(0) = [0 0]^T and u(0) = α, x(1) = [0 α]^T, a controllable state.
Now,

    Q = [B  AB] = [ 0  0 ]
                  [ 1  0 ]

⇒ rank [Q] = 1!
With state feedback u = Fx + ur, the closed-loop system in controllable canonical
form is

    ẋ = (A + BF)x + Bur

where

    A + BF = [     0         1     0   …       0      ]
             [     0         0     1           ⋮      ]
             [     ⋮                   ⋱              ]
             [     ⋮                       ⋱   1      ]
             [ -(an - fn)    …       …   -(a1 - f1)   ]

    B = [0  0  …  0  1]^T
(A, B) controllable ⇒ rank [Q] = n, Q ∈ R^{nxn} ⇒ Q^-1 exists. Let v be the last row of
Q^-1 and z = Vx, where

    V = [ v        ]
        [ vA       ]
        [ ⋮        ]
        [ vA^{n-1} ]

The new system is given by

    ż = VAV^-1 z + VBu
Claim 1: VB = Bc = [0 0 … 0 1]^T.
Proof:

    VB = [ v        ] B = [ vB        ]
         [ vA       ]     [ vAB       ]
         [ ⋮        ]     [ ⋮         ]
         [ vA^{n-1} ]     [ vA^{n-1}B ]

But,

    Q^-1Q = I ⇒ vQ = v[B  AB  …  A^{n-1}B] = [vB  vAB  …  vA^{n-1}B] = [0 … 0 1]

since v is the last row of Q^-1. Therefore, VB = [0 … 0 1]^T.
Claim 2: VAV^-1 = Ac, where

    Ac = [   0    1    0   …   0  ]
         [   0    0    1       ⋮  ]
         [   ⋮             ⋱      ]
         [   ⋮                 1  ]
         [ -an    …         -a1   ]

Proof:

    VA = [ vA   ] = Ac V
         [ vA²  ]
         [ ⋮    ]
         [ vA^n ]

since, from Cayley-Hamilton, π_A(A) = A^n + a1A^{n-1} + … + an-1A + anI = 0, so that

    vA^n = -an v - an-1 vA - … - a1 vA^{n-1}

which is exactly the last row of AcV; the remaining rows of AcV simply shift the
rows v, vA, …, vA^{n-1} of V up by one.
Design Procedure:
Example: Consider the system

    ẋ = [ -1   1    0 ]     [  1 ]
        [  0  -4    2 ] x + [  0 ] u
        [  0   0  -10 ]     [ -1 ]

Moreover,

    V = [ v   ]   [  0.04   0.18   0.04 ]
        [ vA  ] = [ -0.04  -0.68  -0.04 ]
        [ vA² ]   [  0.04   2.68  -0.96 ]

Hence,

    Ac = VAV^-1 = [   0    1    0 ]
                  [   0    0    1 ]
                  [ -40  -54  -15 ]

⇒ π_Ac(λ) = π_A(λ) = λ³ + 15λ² + 54λ + 40

and the spectrum of A is given by σ(A) = {-1, -4, -10}.
Let Fc = [fc3  fc2  fc1].
Then the closed-loop system characteristic polynomial is given by
λ³ + (15 - fc1)λ² + (54 - fc2)λ + (40 - fc3); equating the constant term 40 - fc3
with that of the desired characteristic polynomial
⇒ fc3 = -2460.
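The desired closed-loop spectrum behind fc3 = -2460 is not reproduced in this excerpt, so the sketch below (assuming scipy is available) uses a hypothetical target spectrum {-2, -3, -5}, purely for illustration, with scipy's pole-placement routine on the same plant:

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[-1.0, 1.0, 0.0],
              [0.0, -4.0, 2.0],
              [0.0, 0.0, -10.0]])
B = np.array([[1.0], [0.0], [-1.0]])

desired = [-2.0, -3.0, -5.0]    # hypothetical target spectrum
res = place_poles(A, B, desired)
F = res.gain_matrix             # u = -F x yields eig(A - B F) = desired
closed_eigs = np.linalg.eigvals(A - B @ F)
```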
Observability Revisited:
Consider the dynamic system described by
    ẋ = Ax + Bu
    y = Cx + Du     ⊗
Def. x(t0) ∈ Rn is observable if and only if there exists t1 > t0 such that the
knowledge of y(t) and u(t) over [t0, t1] and of the system matrices A, B, C and D is
sufficient to determine x(t0).
Def. The pair (C, A) in eq. ⊗ is (completely) observable if and only if every x ∈ Rn
is observable.
2. Rank [    C    ] = n, for each eigenvalue λi of A
        [ λiI - A ]

3. Rank [ϕ] = n, where ϕ is the observability matrix

    ϕ = [ C        ]
        [ CA       ]
        [ ⋮        ]
        [ CA^{n-1} ]
4. Rank [Ce^At] = n, i.e., Ce^At has n linearly independent columns, each of which is a
vector-valued function of time defined over [0, ∞)
5. The observability Grammian matrix

    W0(t0, t1) = ∫_{t0}^{t1} e^{A^Tτ} C^TC e^{Aτ} dτ

is nonsingular ∀ t1 > t0.
Example: Consider the linearized model of an orbital satellite described by

    ẋ = [ 0     1  0  0 ]     [ 0  0 ]
        [ 0.75  0  0  1 ] x + [ 1  0 ] u
        [ 0     0  0  1 ]     [ 0  0 ]
        [ 0    -1  0  0 ]     [ 0  1 ]

    y = [ 1  0  0  0 ] x
        [ 0  0  1  0 ]
⎡ 1 0 0 0⎤
⎢ 0 0 1 0 ⎥⎥
⎢
⎢ 0 1 0 0⎥
⎢ ⎥
⎢ 0 0 0 1⎥
ϕ=
⎢ 0.75 0 0 1⎥
⎢ ⎥
⎢ 0 −1 0 0⎥
⎢ 0 − 0.25 0 0⎥
⎢ ⎥
⎢⎣− 0.75 0 0 − 1⎥⎦
For λ1 = λ2 = 0,

    rank [    C    ] = 4
         [ λiI - A ]
since

    [    C    ]   [  C ]   [  1     0  0   0 ]
    [ λiI - A ] = [ -A ] = [  0     0  1   0 ]
                           [  0    -1  0   0 ]
                           [ -0.75  0  0  -1 ]
                           [  0     0  0  -1 ]
                           [  0     1  0   0 ]
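The rank test is straightforward to reproduce (a sketch, assuming numpy):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.75, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, -1.0, 0.0, 0.0]])
C = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

# Observability matrix [C; CA; CA^2; CA^3].
obs = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(4)])
rank_obs = np.linalg.matrix_rank(obs)   # 4 => observable
```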
Def. Suppose rank [ϕ] < n, then N[ϕ] is the unobservable subspace of the state
space. Therefore, if x0 ∈ N[ϕ] then we cannot reconstruct x0 from input-output
measurements.
Proposition: Let x0 ∈ N[ϕ], then Ax0 ∈ N[ϕ]. (The unobservable subspace is A
invariant).
Proof: If x0 ∈ N[ϕ], then

    ϕx0 = [ C        ] x0 = 0  ⇒  Cx0 = CAx0 = … = CA^{n-1}x0 = 0
          [ CA       ]
          [ ⋮        ]
          [ CA^{n-1} ]

Now,

    ϕAx0 = [ CA   ] x0 = [ CAx0    ] = 0
           [ CA²  ]      [ CA²x0   ]
           [ ⋮    ]      [ ⋮       ]
           [ CA^n ]      [ CA^n x0 ]

since CA^j x0 = 0 for j = 1, …, n - 1 and, by Cayley-Hamilton, CA^n x0 is a linear
combination of the CA^j x0, j < n, hence also zero. Therefore Ax0 ∈ N[ϕ].
The state model can then be decomposed into observable and unobservable parts:

    [ żo  ]   [ Ao    0   ] [ zo  ]   [ Bo  ]
    [ --  ] = [ --------- ] [ --  ] + [ --  ] u
    [ żuo ]   [ A21   Auo ] [ zuo ]   [ Buo ]

    y = [Co  0] [ zo  ]
                [ zuo ]

where zo is observable and zuo is unobservable, i.e., the pair (Co, Ao) is
observable.
Example: Consider the dynamic system described by

    ẋ = [ 2   0  1 ]     [ 0 ]
        [ 0  -2  0 ] x + [ 1 ] u
        [ 1   0  2 ]     [ 1 ]

    y = [ 1  0   1 ] x
        [ 1  0  -1 ]

The observability matrix is given by

    ϕ = [ 1  0   1 ]
        [ 1  0  -1 ]
        [ 3  0   3 ]
        [ 1  0  -1 ]
        [ 9  0   9 ]
        [ 1  0  -1 ]

Clearly, the rank of the observability matrix is 2 ⇒ the system is not observable.
Let the observable subspace be spanned by the rows of C, i.e., take T1 = C and
T2 = [1 1 1]. Then the similarity transformation z = Tx uses

    T = [ 1  0   1 ]
        [ 1  0  -1 ]
        [ 1  1   1 ]

and yields

    [ żo  ]   [ 3  0 |  0 ] [ zo  ]   [  1 ]
    [ --  ] = [ 0  1 |  0 ] [ --  ] + [ -1 ] u
    [ żuo ]   [ 5  0 | -2 ] [ zuo ]   [  2 ]

    y = [ 1  0  0 ] [ zo  ]
        [ 0  1  0 ] [ --  ]
                    [ zuo ]
Theorem: The model (A, B, C) is completely controllable (observable) if and
only if (A^T, C^T, B^T) is completely observable (controllable).
Proof: If (A, B, C) is completely controllable, then rank [Q] = rank [B AB … A^{n-1}B] = n.
But,

    rank [ B^T              ] = rank [Q^T] = n  ⇒  (A^T, C^T, B^T) is completely observable.
         [ B^TA^T           ]
         [ ⋮                ]
         [ B^T(A^T)^{n-1}   ]
If (A, B, C) is completely observable, then

    rank [ϕ] = rank [ C        ] = n
                    [ CA       ]
                    [ ⋮        ]
                    [ CA^{n-1} ]

Now, rank [C^T  A^TC^T  …  (A^T)^{n-1}C^T] = rank [ϕ^T] = n ⇒ (A^T, C^T, B^T) is completely
controllable.
Suppose now that (A^T, C^T, B^T) is completely observable; then

    rank [ϕ1] = rank [ B^T            ] = n
                     [ B^TA^T         ]
                     [ ⋮              ]
                     [ B^T(A^T)^{n-1} ]

Now, rank [B  AB  …  A^{n-1}B] = rank [ϕ1^T] = n ⇒ (A, B, C) is completely controllable.
If ε(0) = 0, i.e., xe(0) = x(0), then xe(t) = x(t) ∀ t regardless of α. If, on the other
hand, ε(0) ≠ 0, then depending on α, |ε(t)| may or may not go to zero, i.e., xe(t)
may or may not converge asymptotically to x(t).
More realistically, however, ξ ≠ 1, with y = ξx (the measurement is proportional to the
state). Let

    ẋe = αxe + k(y - ξxe) + βu

Then, with ε = x - xe, we get

    ε̇ = ẋ - ẋe = αε - k(y - ξxe) = αε - kξ(x - xe) = (α - kξ)ε  ⇒  ε(t) = e^{(α-kξ)t} ε(0)

⇒ ε(t) → 0 as t → ∞ if k is chosen properly.
with this choice of observer structure, [x(t) - xe(t)] = e(A – KC)t [x(0) - xe(0)] and in
order for xe → x as t gets large, K must be chosen in such a way that the
eigenvalues of A – KC lie strictly on the left half of the s-plane.
Theorem: The pair (C, A) is completely observable ⇒ σ (A – KC) can be arbitrarily
assigned by the proper choice of K.
Proof: (C, A) observable ⇒ (AT, CT) controllable (by duality) ⇒ σ(AT + CTK1) can
be arbitrarily chosen by properly choosing K1. Let K = -K1T, then σ(A – KC) can be
arbitrarily assigned because σ(AT + CTK1) = σ(A – KC).
Example: Obtain a full-order state estimator for the 2nd order system

    ẋ = [ 0  1 ] x + [ 0 ] u
        [ 1  0 ]     [ 1 ]

    y = [0  1] x

such that σ(A - KC) = {-5, -5}.

    ϕ = [ C  ] = [ 0  1 ]   ⇒  rank [ϕ] = 2  ⇒  the system is observable.
        [ CA ]   [ 1  0 ]

Now, σ(A) = {-1, 1} ⇒ the system is unstable.
Let

    ẋe = [ 0  1 ] xe + [ k1 ] [y - (0  1)xe] + [ 0 ] u
         [ 1  0 ]      [ k2 ]                  [ 1 ]

then,

    ε̇ = [ ( 0  1 ) - ( k1 ) (0  1) ] ε = [ 0   1 - k1 ] ε
        [ ( 1  0 )   ( k2 )        ]     [ 1    -k2   ]

and det(λI - (A - KC)) = λ² + k2λ + (k1 - 1). Matching the desired polynomial
(λ + 5)² = λ² + 10λ + 25 gives k2 = 10 and k1 = 26. Hence,

    K = [ 26 ]
        [ 10 ]
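A quick check that this gain places the error eigenvalues as required (a sketch, assuming numpy):

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])
C = np.array([[0.0, 1.0]])
K = np.array([[26.0], [10.0]])

# Error dynamics matrix A - K C; its spectrum should be {-5, -5}.
err_eigs = np.linalg.eigvals(A - K @ C)
```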
[Figure: "Full State Estimator" - plots over 0 to 1 sec of state x1 with its estimate x1e and state x2 with its estimate x2e; the estimates converge to the true states within about one second.]
Feedback From State Estimates:
Let a linear time-invariant dynamic system be described by

    ẋ = Ax + Bu
    y = Cx + Du     ⊗

and let the state estimator be driven by the feedback control u = -Fxe + ur, i.e.,

    ẋe(t) = Axe(t) + K(y(t) - Cxe(t)) + B(-Fxe(t) + ur(t))
[Figure: block diagram of the complete system, i.e., the plant together with the state estimator and the feedback u = -Fxe + ur.]
It can be shown that the above system has the same eigenvalues and the same
transfer function as the system with control signal

    u(t) = -Fx(t) + ur(t)

In augmented form,

    [ ẋ  ]   [ A     -BF          ] [ x  ]   [ B ]
    [ ẋe ] = [ KC   A - KC - BF   ] [ xe ] + [ B ] ur

    y = [C  0] [ x  ]
               [ xe ]

Consider the following nonsingular similarity transformation

    [ x ]   [ x      ]   [ I   0 ] [ x  ]       [ x  ]
    [ ε ] = [ x - xe ] = [ I  -I ] [ xe ]  =  T [ xe ]
Then, the new equivalent system is given by

    [ ẋ ]   [ A - BF    BF     ] [ x ]   [ B ]
    [ ε̇ ] = [   0      A - KC  ] [ ε ] + [ 0 ] ur(t)

    y = [C  0] [ x ]
               [ ε ]

To show that the estimator does not affect the location of the eigenvalues of the
original state feedback system, we calculate the augmented system's transfer
function. Let

    Aoverall ≡ [ A - BF    BF     ] ,   Boverall ≡ [ B ] ,   Coverall ≡ [C  0]
               [   0      A - KC  ]                [ 0 ]
Then H(s) = Coverall (sI - Aoverall)^-1 Boverall, where

    (sI - Aoverall)^-1 = [ (sI - (A - BF))^-1    (sI - (A - BF))^-1 BF (sI - (A - KC))^-1 ]
                         [        0                      (sI - (A - KC))^-1               ]

So,

    Y(s) = C(sI - (A - BF))^-1 BU(s) = H(s)U(s)

which establishes that the transfer function of the original closed-loop system
does not change with the introduction of the state estimator in the loop.
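The separation of eigenvalues can be confirmed numerically (a sketch, assuming numpy; F and K are the gains designed in this section's example):

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 1.0]])
F = np.array([[9.0, 4.0]])       # state feedback gain: eig(A - BF) = -2 +/- j2
K = np.array([[26.0], [10.0]])   # observer gain: eig(A - KC) = {-5, -5}

# Augmented system with feedback taken from the estimate.
A_overall = np.block([[A, -B @ F],
                      [K @ C, A - K @ C - B @ F]])
eigs = np.sort_complex(np.linalg.eigvals(A_overall))
```

The spectrum of the augmented matrix is the union of the controller and observer spectra, as the block-triangular equivalent form predicts.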
Example: Consider the same system of the last example. This system is unstable
with eigenvalues at -1 and 1. Let the desired closed-loop system have eigenvalues
at -2 ± j2, i.e., the desired characteristic polynomial is given by

    π_{A-BF}(λ) = (λ + 2 + j2)(λ + 2 - j2) = λ² + 4λ + 8

                = det(λI - (A - BF)) = det [ λ        -1     ]
                                           [ f1 - 1   λ + f2 ]

                = λ² + f2λ + f1 - 1

This implies that f1 = 9 and f2 = 4.
Let us now apply the feedback control based on the estimates rather than on the
actual values of the states, i.e.,

    u(t) = -[9  4] [ xe1(t) ] + ur(t)
                   [ xe2(t) ]

Then, the performance of the estimator-based closed-loop system is shown in the
following figure when the reference input is a unit step and the initial values of
the system state and that of the estimator are

    x(0) = [ 0 ]   and   xe(0) = [  0.5 ]
           [ 0 ]                 [ -0.2 ]

respectively.
[Figure: "Estimator state feedback controlled system with unit step reference input" - states x1 and x2 over 0 to 5 sec show a pronounced initial transient (dipping to about -1) caused by the estimator's initial error before converging.]
[Figure: "System with actual state feedback control" - states x1 and x2 over 0 to 5 sec settle smoothly (peak amplitude about 0.14) with no estimator-induced transient.]