Signal- und Systemtheorie II
D-ITET, Semester 4
Notes 0: Introduction
John Lygeros
0.2
Where, when & what?
• Exercises
– Examples papers, ~ 4 exercises on the topic of the week
– Discussed in examples classes
– In addition, exercises in lecture notes
– Neither part of class credit
– Integral part of the learning experience nonetheless
– Example paper exercises in style of final exam questions
– Please try to do them and discuss with instructor and assistants if you have questions
– Please attend examples classes
– Feel free to submit your solutions for grading
0.3
Reading material
• Lecture notes
– Slides handout, available on class webpage & Moodle
– Blackboard notes
• Recommended book
– K.J. Åström and R.M. Murray, "Feedback Systems: An Introduction for Scientists and Engineers", Princeton U.P., 2008
– http://www.cds.caltech.edu/~murray/amwiki/index.php/Main_Page
• Other excellent books
– G.F. Franklin, J.D. Powell, and A. Emami-Naeini, "Feedback Control of Dynamic Systems", Prentice-Hall, 2006 (also used in Regelsysteme I/II)
– J. Hespanha, "Linear Systems Theory", Princeton U.P., 2009
– T. Kailath, "Linear Systems", Prentice-Hall, 1980
– E.A. Lee and P. Varaiya, "Structure and Interpretation of Signals and Systems", Addison-Wesley, 2002
0.4
The flipped classroom concept
• ETH TORQUE pilot course in 2014
– Tiny, Open-with-Restrictions courses focused on QUality and Effectiveness
– “Flipped classroom” concept
– Use of web and mobile technology before, during, and after lecture
0.5
Moodle: Learning management
• Official website for the Signals and Systems II course
– https://moodle-app2.let.ethz.ch
• Log in using your ETH account and register for the Signals and Systems II course
• What you will find:
– Short video tutorials on course material
– Quizzes designed to test your understanding of course material
– Exercises of examples papers
– Forum to interact and ask questions about the course material
• How it will be used:
– Videos and quizzes will be assigned before the lectures
– The lecture will build on top of these assignments by adding more in-depth material in a (hopefully) flipped classroom atmosphere
0.6
EduApp: Interactive lectures
• EduApp can be found at
– http://www.eduapp.ethz.ch
• Install the app on your iPhone or Android mobile phone
• Log in using your ETH account – you should automatically
see the SSII course if you are registered.
• What you will find:
– An interactive platform that can be used during the lecture
• How it will be used:
– Questions will be posed during some lectures and example sessions
and students will be asked to contribute answers
– Back channel available where students can ask questions
anonymously
0.7
Albie (Optional, self-paced study)
• The experimental platform Albie can be found at
– http://www.albie.co
(Yes, that is .co NOT .com)
• Register using an ETH Zurich email address (must end in
ethz.ch). After logging in for the first time, go to search and
join the Signals and Systems course
• What you will find:
– An experimental adaptive learning platform
• How it will be used:
– Optional, not used in assignments during the semester
– Personalized, “non-‐‑linear” content sequence
– Last year many students used it during their exam preparation
– Search for content or “trust Albie” to tell you what to look at next
– More: Learning statistics and comments attached to content
0.8
It’s all for a good cause!
• Please try to make the most of it
– Watch the videos, do the quizzes, come prepared
– Actively participate in the class, work on exercises,
answer questions
– Attend the examples classes where exam style questions will be answered
– Provide feedback: What works, what does not
0.9
Class content: Dynamical systems
• Describe evolution of variables over time
– Input variables (SS1)
– Output variables
– State variables (SS2)
• Control (RS1):
– Steer systems using inputs
– Feedback
0.10
From signals to systems
• SS1: A system maps input signals to output signals (input → system → output)
• SS2: Where does the input-output map come from?
[Block diagram: input u(t) enters through B and D; an integrator produces the state x(t); A feeds x(t) back to the integrator input; C maps x(t) to the output y(t)]
0.12
Discrete vs continuous
• Discrete → Finite (or countable) values
– {0, 1, 2, 3, …}
– {a, b, c, d}
– {ON, OFF}, {hot, warm, cool, cold}, …
• Continuous → Real values
– x ∈ R, x ∈ R^n
– x ∈ [−1, 1] ⇒ x ∈ R, −1 ≤ x ≤ 1
– {(x1, x2) ∈ R² | x1² + x2² ≤ 1}
• Hybrid → Part discrete and part continuous
– Airplane + flight management system
– Thermostat + room temperature
0.13
System classification (examples)
• Systems can be classified by time (discrete / continuous / hybrid) and by state (discrete / continuous / hybrid), e.g.
– Discrete state, discrete time: finite state machines, Turing machines
– Discrete state, continuous time: queuing systems
0.14
In this course
• We will concentrate mostly on
– Continuous state
– Continuous time
– Linear systems
• We will also establish a connection to
– Continuous state
– Discrete time
– Linear systems
and to
– Continuous state
– Continuous time
– Nonlinear systems
• Start with examples from many classes of systems
0.15
Course outline: Introductory material
1. Modeling
– Mechanical and electrical systems
– Discrete and continuous time systems
– Discrete and continuous state systems
– Linear and nonlinear (continuous state) systems
2. Revision: ODE and linear algebra
– ODE = Ordinary Differential Equations
– Existence and uniqueness of solutions
– Range and null spaces of matrices
– Eigenvalues, eigenvectors, …
0.16
Course outline: Continuous time LTI
3. Time domain
– LTI = Linear Time Invariant
– State space equations
– Time domain solution of state space equations
4. Controllability, observability, energy
5. “Frequency domain”
– Revision of Laplace transforms
– Laplace solution of state space equations
– Stability
– Bode and Nyquist plots
0.17
Course outline: Discrete time LTI and
advanced topics
6. Discrete time LTI systems
– Sampled data systems
– Linear difference equations
– Controllability and observability
– z-transform
– Simulation, Euler method and its stability
7. Nonlinear systems
– Differences from linear systems
– Multiple equilibria, limit cycles, chaos
– Linearization
– Stability
– Examples
0.18
Notation
• Z denotes the integers, a discrete (countable) set:
Z = {…, −2, −1, 0, 1, 2, …}
• N denotes the natural numbers:
N = {0, 1, 2, …}
• C denotes the complex numbers
0.19
Notation
• R^n denotes Euclidean space of dimension n. It is a finite dimensional vector space (sometimes called linear space). Special cases:
– n = 1: real line, x ∈ R (drop the superscript)
– n = 2: real plane, x = (x1, x2) ∈ R²
• General n: write x = (x1, x2, …, xn) ∈ R^n as an ordered list of numbers, or vector
0.20
Notation
• R^{n×m} denotes matrices with n rows and m columns, whose elements are real:
A = [a_ij] ∈ R^{n×m}
• Also a vector space; can define "length", …
• Special cases: R^n = R^{n×1}, R = R^{1×1}
• Assume familiar with basic matrix operations (addition, multiplication, eigenvalues)
0.21
Notation
Exercise: What do the sets {x ∈ R² | x1 = 0 or x2 = 0}, {x ∈ R² | x1 ≥ x2}, and {y ∈ R | ∃x ∈ R, y = x²} look like?
0.22
Notation
• Continuous time → t ∈ R+
• Discrete time → k ∈ N
• Continuous state → x ∈ R^n (x(t): state at time t)
• Continuous input → u ∈ R^m
• Continuous output → y ∈ R^p
• Discrete state → q ∈ Q, e.g. thermostat: q ∈ Q = {ON, OFF}
0.24
Linear functions: Euclidean space
• Special case: Linear function f(·): R^n → R^m
• For any x1, x2 ∈ R^n, a1, a2 ∈ R:
f(a1 x1 + a2 x2) = a1 f(x1) + a2 f(x2)
• The (bilateral) Laplace transform of a signal u(·):
U(s) = ∫_{−∞}^{∞} u(t) e^{−st} dt
0.27
Subtle points
• In SSII interested in system response for positive times t ∈ R+
• Implicitly assume all signals = 0 for t < 0
• Hence Laplace transform simplifies to
U(s) = ∫_{−∞}^{∞} u(t) e^{−st} dt = ∫_0^{∞} u(t) e^{−st} dt
(since u(t) = 0 for t < 0)
• And convolution simplifies to
(u * h)(t) = ∫_{−∞}^{∞} u(τ) h(t − τ) dτ = ∫_0^t u(τ) h(t − τ) dτ
(since u(τ) = 0 for τ < 0 and h(t − τ) = 0 for τ > t)
0.28
Signal- und Systemtheorie II
D-ITET, Semester 4
Notes 1: Modeling
John Lygeros
1.2
Example 1: Pendulum
• A continuous time, continuous state, autonomous, nonlinear system
• Mass m hanging from weightless string of length l
• String makes angle θ with downward vertical
• Friction with dissipation constant d
• Determine how the pendulum is going to move
• i.e. assuming we know where the pendulum is at time t = 0 (θ0) and how fast it is moving (θ̇0), determine where it will be at time t (θ(t))
1.3
Pendulum: Equations of motion
• Evolution of θ governed by Newton's law
ml θ̈(t) = −dl θ̇(t) − mg sin θ(t)
(angular acceleration = friction force + gravity force)
• Want a function θ(·) of time such that
θ(0) = θ0, θ̇(0) = θ̇0
∀t ∈ R+: ml θ̈(t) = −dl θ̇(t) − mg sin[θ(t)]
Exercise: Derive the differential equation from Newton's laws of motion.
1.4
Pendulum: Existence and uniqueness
• Such a function is known as a “solution” or a
“trajectory” of the system
1. Does a trajectory exist for all θ0, θ̇0?
2. Is there a unique trajectory for each θ0, θ̇0?
3. Can we find this trajectory?
• Clearly important questions for differential
equations used to model physical systems
• In general answer to questions may be “no”
• In fact, answer to question 3 usually is “no”!
• However, we can usually approximate the
trajectory by computer simulation
1.5
Pendulum: MATLAB simulation
l = 1, m = 1, d = 1, g = 9.8, θ0 = 0.75, θ̇0 = 0
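The MATLAB simulation itself is not reproduced in these notes, but the same trajectory can be sketched in a few lines of Python using forward-Euler integration (a minimal sketch; the function names and step size are my choices, not from the course code):

```python
import math

# Pendulum parameters from the slide: l = m = d = 1, g = 9.8
l, m, d, g = 1.0, 1.0, 1.0, 9.8

def simulate_pendulum(theta0=0.75, omega0=0.0, dt=1e-3, T=10.0):
    """Forward-Euler integration of ml*theta'' = -dl*theta' - mg*sin(theta)."""
    theta, omega = theta0, omega0
    trajectory = [theta]
    for _ in range(int(T / dt)):
        # State space form: x1 = theta, x2 = theta_dot
        theta_dot = omega
        omega_dot = -(d / m) * omega - (g / l) * math.sin(theta)
        theta += dt * theta_dot
        omega += dt * omega_dot
        trajectory.append(theta)
    return trajectory

traj = simulate_pendulum()
```

With these values the friction term damps the oscillation, so the simulated angle decays toward the downward equilibrium θ = 0.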
1.6
Pendulum: State space description
• Convenient to write the ODE more compactly as a first order vector ODE
ẋ(t) = f(x(t)), x(t) ∈ R^n
with states x1(t) = θ(t), x2(t) = θ̇(t)
1.9
Example 2: RLC circuit
• Continuous time, continuous state, linear system
• Input voltage v1(t) (not autonomous)
• Determine evolution of voltages and currents
[Circuit: voltage source v1(t) in series with resistor R (voltage vR(t)), inductor L (voltage vL(t), current iL(t)) and capacitor C (voltage vC(t))]
1.10
RLC circuit: Equations of “motion”
• From Kirchhoff's laws + element equations, e.g.
C dvC(t)/dt = iL(t)
L diL(t)/dt = vL(t)
vR(t) = R iL(t)
vL(t) = v1(t) − vR(t) − vC(t)
• Combining these gives a second order ODE:
d²vC(t)/dt² + (R/L) dvC(t)/dt + (1/LC) vC(t) = (1/LC) v1(t)
• Solution to ODE gives vC(t)
• All other voltages and currents can be
computed from vC(t)
1.11
RLC circuit: MATLAB simulation
R = 10, L = 1, C = 0.01, x0 = 0
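The RLC step response with the slide's values can be sketched the same way (forward Euler, unit step input; a sketch for illustration, not the course's MATLAB code):

```python
# RLC circuit with the slide's values R = 10, L = 1, C = 0.01, zero initial state
R, L, C = 10.0, 1.0, 0.01

def simulate_rlc(u=1.0, dt=1e-4, T=2.0):
    """States x1 = vC, x2 = iL:  x1' = x2/C,  x2' = (u - R*x2 - x1)/L."""
    x1, x2 = 0.0, 0.0
    for _ in range(int(T / dt)):
        dx1 = x2 / C
        dx2 = (u - R * x2 - x1) / L
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
    return x1

vc_final = simulate_rlc()  # capacitor voltage after the transient
```

For a 1 V step the capacitor voltage settles at 1 V, matching the DC gain 1/LC divided by 1/LC in the second order ODE above.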
1.12
RLC circuit: State space description
• Try to write first order vector ODE
ẋ(t) = f(x(t), u(t)), x(t) ∈ R^n
with states x1(t) = vC(t), x2(t) = iL(t) and input u(t) = v1(t)
1.13
RLC circuit: State space description
• Relate state (x(t)) and input (u(t))
C dvC(t)/dt = iL(t) ⇒ ẋ1(t) = (1/C) x2(t)
L diL(t)/dt = v1(t) − vR(t) − vC(t) ⇒ ẋ2(t) = (1/L) u(t) − (R/L) x2(t) − (1/L) x1(t)
• In matrix form
ẋ(t) = [ 0      1/C  ] x(t) + [ 0   ] u(t)
       [ −1/L  −R/L ]         [ 1/L ]
1.14
RLC circuit: State space description
• Have written an ODE of the form
ẋ(t) = Ax(t) + Bu(t) = f(x(t), u(t))
Exercise: What are the matrices A and B?
• Similarities to pendulum
– 2nd order ODE → two states
– States related to energy stored in system
• Differences from pendulum
– External source of energy → input u(t) → system no longer "autonomous"
– Function f(x, u) linear in x and u → dynamics described by linear ODE
1.15
Example 3: Amplifier circuit
• Continuous time, continuous state, linear system
• Input voltage v1(t)
• Output voltage v0(t)
[Circuit: OpAmp stage; input v1(t) through C1 (voltage vC1(t)) and R1 (current i1(t)) into the inverting input (current iin, voltage vin); feedback path with R0 (current iR0(t)) in parallel with C0 (voltage vC0(t), current iC0(t)); output voltage v0(t), output current iout]
1.16
Reminder: Operational amplifier (OpAmp)
[Equivalent circuit: input voltage vin across the (large) input resistance Rin, drawing input current iin; output modeled as controlled source µ·vin (gain µ, large) in series with the (small) output resistance Rout, giving output current iout and output voltage vout]
• External voltage source (not shown) provides energy for gain
1.17
Reminder: Ideal OpAmp
• Assume
– Rin ≈ ∞ ⇒ iin ≈ 0
– Rout ≈ 0 ⇒ iout independent of vout
– µ ≈ ∞ ⇒ vin ≈ 0 if vout finite
• “Virtual earth assumption”
• Makes circuit analysis much easier
• Note that
– Input power iinvin=0
– Output power ioutvout is arbitrary
• Necessary energy comes from external voltage source
(not shown!)
1.18
Amplifier circuit: Equations of motion
• Assuming OpAmp is ideal
C1 dvC1(t)/dt = i1(t)
vin ≈ 0 ⇒ i1(t) = (v1(t) − vC1(t)) / R1
⇒ dvC1(t)/dt = −vC1(t)/(R1C1) + v1(t)/(R1C1)
C0 dvC0(t)/dt = iC0(t)
⇒ dvC0(t)/dt = −vC0(t)/(R0C0) − vC1(t)/(R1C0) + v1(t)/(R1C0)
1.19
Amplifier circuit: State space description
• First order ODE in vector variables
• From our experience so far we would expect
– Two state variables
– Voltages of capacitors, x1(t) = vC1(t), x2(t) = vC0(t)
– One input variable, u(t) = v1(t)
– One output variable, y(t) = v0(t)
• Write equations that relate input, states and output
dx(t)/dt = [ −1/(R1C1)       0      ] x(t) + [ 1/(R1C1) ] u(t)
           [ −1/(R1C0)  −1/(R0C0) ]         [ 1/(R1C0) ]
y(t) = [ 0  −1 ] x(t)
1.20
Amplifier circuit: State space description
• Have written the equations in the form
ẋ(t) = Ax(t) + Bu(t) = f(x(t), u(t))
y(t) = Cx(t) + Du(t) = h(x(t), u(t))
1.21
Amplifier circuit: Simulation
[Simulation plots]

Amplifier circuit: Energy and power
• Energy stored in the capacitors
E(t) = (1/2) C1 vC1(t)² + (1/2) C0 vC0(t)² = (1/2) x(t)^T [ C1 0 ; 0 C0 ] x(t)
• Quadratic function of the state
E(t) = (1/2) x(t)^T Q x(t)
• Differentiating along solutions gives the power
P(t) = dE(t)/dt = (1/2) x(t)^T (A^T Q + Q A) x(t) + (1/2) (u(t)^T B^T Q x(t) + x(t)^T Q B u(t))
• Quadratic in state and input
• If there is no input (u(t) = 0)
P(t) = (1/2) x(t)^T (A^T Q + Q A) x(t)
1.24
Amplifier circuit: Power (u(t)=0)
Substituting A from above and Q = diag(C1, C0):

P(t) = (1/2) x(t)^T ( A^T Q + Q A ) x(t)
     = (1/2) x(t)^T [ −2/R1  −1/R1 ] x(t)
                    [ −1/R1  −2/R0 ]

⇒ P(t) = − x1(t)²/R1 − x1(t)x2(t)/R1 − x2(t)²/R0

Exercise: Derive this equation directly by differentiating the energy of the circuit, E(t) = (1/2) C1 vC1(t)² + (1/2) C0 vC0(t)².
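The identity P(t) = ½xᵀ(AᵀQ + QA)x with dissipation matrix [[−2/R1, −1/R1], [−1/R1, −2/R0]] can be checked numerically; the component values below are made up purely for the check, not taken from the notes:

```python
# Check A^T Q + Q A = [[-2/R1, -1/R1], [-1/R1, -2/R0]] for the amplifier
# matrices (hypothetical component values, chosen only for this test).
R1, R0, C1, C0 = 2.0, 3.0, 0.5, 0.25

A = [[-1.0 / (R1 * C1), 0.0],
     [-1.0 / (R1 * C0), -1.0 / (R0 * C0)]]
Q = [[C1, 0.0], [0.0, C0]]  # energy weight: E = (1/2) x^T Q x

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

At = [[A[j][i] for j in range(2)] for i in range(2)]  # transpose of A
M = [[matmul(At, Q)[i][j] + matmul(Q, A)[i][j] for j in range(2)]
     for i in range(2)]

expected = [[-2.0 / R1, -1.0 / R1], [-1.0 / R1, -2.0 / R0]]
```

Because Q is diagonal, AᵀQ and QA are transposes of each other, which is why M comes out symmetric.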
1.25
Population dynamics
• A discrete time, continuous state system
• Experiment:
– Closed jar containing a number (N) of fruit flies
– Constant food supply
• Question: How does fly population evolve?
• Number of flies limited by available food, epidemics
– Few flies → abundance of space/food → more flies
– Many flies → competition for space/food → fewer flies
• Maximum number “ecosystem” can support NMAX
• State: Normalized population
x = N / NMAX ∈ [0, 1]
1.26
Population dynamics: State space model
• Track x from generation to generation: xk population at generation k
• How does population at generation k+1 depend on xk?
• Classic model: Logistic map
xk+1 = r xk (1 − xk)
1.27
Population dynamics: Solution
• r represents the "food" supply
– Large r means a lot of food is provided
– Small r means little food is provided
• Shape of f(x) reflects population trends
– Small population now → small population next generation (not enough individuals to breed)
– Large population now → small population next generation (food/living space shortage, epidemics, etc.)
• How does the population change in time? This depends a lot on r:
1. If r ∈ [0, 1) then xk decays to 0 (i.e. all flies die)
2. If r ∈ [1, 3) then xk tends to a steady state value (i.e. the fly population stabilizes)
3. If r ∈ [3, 1+√6) then xk tends to a 2-periodic solution (i.e. the population alternates between two values)
4. Eventually chaotic behavior!
1.28
Population dynamics: Simulation
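The regimes of the logistic map xk+1 = r·xk(1 − xk) are easy to reproduce by direct iteration (a sketch; the particular r values and x0 are arbitrary picks inside each interval):

```python
# Iterate the logistic map long enough to observe the limiting behavior
def logistic_limit(r, x0=0.2, n=2000):
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

x_extinct = logistic_limit(0.8)   # r in [0,1): population dies out
x_steady = logistic_limit(2.5)    # r in [1,3): settles at 1 - 1/r = 0.6
```

For r in [1, 3) the nonzero fixed point is x* = 1 − 1/r, which the iteration approaches geometrically since |f'(x*)| = |2 − r| < 1 there.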
1.29
Manufacturing system
• A discrete time, discrete state system
• Model of a machine in a manufacturing shop
• Machine can be in three states
– Idle (I)
– Working (W)
– Broken (D)
• State changes when certain “events” happen
– A part arrives and starts getting processed (p)
– The processing is completed and the part moves on to the
next machine (c)
– The machine fails (f)
– The machine is repaired (r)
• Finite number of states and inputs:
– Finite State machine or
– Finite Automaton
1.30
Manufacturing system: State space model
• Possible states of the machine
q ∈Q = {I,W , D}
• Possible inputs (events)
σ ∈Σ = { p,c, f ,r}
• Not all events are possible for all states, e.g.
– A part cannot start getting processed (σ=p) while the machine is broken (q=D)
– The machine can only be repaired (σ=r) when broken (q=D)
• Transition function
δ : Q × Σ → Q
• Write as discrete time system
qk+1 = δ(qk, σk)
Exercise: Is δ linear or nonlinear? Does the question even make sense?
1.31
Manufacturing system: Automaton
δ(I, p) = W
δ(W, c) = I
δ(W, f) = D
δ(D, r) = I
• All other combinations not allowed
• Assume we start at I (initial state)
• Easier to represent by a graph:
[Graph: arrow into initial state I; edge I → W labeled p; edge W → I labeled c; edge W → D labeled f; edge D → I labeled r]
Exercise: Q has n elements and Σ has m elements; how many lines are needed (at most) to define δ?
Exercise: If the graph has e arrows, how many lines are needed to define δ?
1.32
Manufacturing system: Solution
• Assume initially q0=I. What are the sequences of
events the machine can experience?
• Some sequences are possible while others are not
– pcp → possible
– ppc → impossible
• The set of all acceptable sequences is called the
language of the automaton
• The following are OK
– Arbitrary number of repetitions of pc, denoted by (pc)*
– Arbitrary number of pfr, denoted by (pfr)*
– Possibly followed by a p or pf
• Overall:
(p·c + p·f·r)* · (1 + p + p·f)
1.33
Thermostat
• A continuous time, hybrid system
• Thermostat is trying to keep the temperature of a
room at around 20 degrees
– Turn heater on if temperature ≤ 19 degrees.
– Turn heater off if temperature ≥ 21 degrees.
• Due to uncertainty in the radiator dynamics, the
temperature may fall further, to 18 degrees, or rise
further, to 22 degrees
• Both a continuous and a discrete state
– Discrete state: Heater
q(t) ∈Q = {ON ,OFF}
– Continuous state: Room temperature
x(t) ∈R
• Evolution in continuous time
• No external input (autonomous system)
1.34
Thermostat: State space model
• Different differential equations for x, depending on ON or OFF
– Heater OFF: Temperature decreases exponentially toward 0
ẋ(t) = −α x(t)
– Heater ON: Temperature increases exponentially towards 30
ẋ(t) = −α (x(t) − 30)
• Heater changes from ON to OFF and back depending on x(t)
• Natural to describe by mixture of differential equation and graph notation
Exercise: Solve the differential equations to verify exponential increase/decrease.
1.35
Thermostat: Hybrid automaton
[Hybrid automaton: initial state (OFF, 20), i.e. x(0) = 20.
Mode OFF: ẋ(t) = −α x(t); stay while x(t) ≥ 18; can go ON provided x(t) ≤ 19.
Mode ON: ẋ(t) = −α (x(t) − 30); stay while x(t) ≤ 22; can go OFF provided x(t) ≥ 21.]
1.36
Thermostat: Solutions
[Plots of a solution: discrete state q(t) switching between ON and OFF, continuous state x(t) oscillating between roughly 19 and 21 degrees]
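A solution of the hybrid automaton can be sketched by combining Euler integration with the switching logic. The thresholds 19 and 21 are from the slides; the value of α, the step size, and the policy of switching exactly at the thresholds are assumptions for illustration:

```python
# Thermostat hybrid automaton: two modes, each with its own ODE
alpha = 0.1  # assumed decay rate (not specified in the notes)

def simulate_thermostat(x0=20.0, q0="OFF", dt=0.01, T=200.0):
    x, q = x0, q0
    temperatures = []
    for _ in range(int(T / dt)):
        if q == "OFF":
            x += dt * (-alpha * x)             # cool toward 0
            if x <= 19.0:
                q = "ON"                       # guard: may switch on below 19
        else:
            x += dt * (-alpha * (x - 30.0))    # heat toward 30
            if x >= 21.0:
                q = "OFF"                      # guard: may switch off above 21
        temperatures.append(x)
    return temperatures

temps = simulate_thermostat()
```

With deterministic switching at the thresholds, the temperature settles into a limit cycle just inside [19, 21], well within the invariant bounds 18 and 22.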
1.37
Continuous modeling: Generic steps
1. Identify input variables
– Usually obvious
– Quantities that come from the outside
– Say m such input variables
– Stack them in vector form, denote by u(t) ∈ R^m
2. Identify output variables
– Usually obvious
– Quantities that can be measured
– Say p such quantities
– Stack them in vector form, denote by y(t) ∈ R^p
1.38
Continuous modeling: Generic steps
3. Select state variables
– Need to encapsulate the past
– Need (together with inputs) to determine future
– For physical systems often related to energy storage
– For mechanical systems can usually select positions
(q(t)) and velocities (v(t))
– For electrical circuits can usually select capacitor
voltages (vC(t))and inductor currents (iL(t))
– Other choices possible, may lead to simpler models
– Say n such variables
– Stack them in vector form, denote by x(t) ∈ R^n
1.39
Continuous modeling: Generic steps
4. Take derivatives of states
– Try to obtain n equations with derivative of one
state on the left hand side and a function of the
states and inputs on the right hand side
– For mechanical systems
• Position derivatives easy: q̇(t) = v(t)
• Velocity derivatives (= accelerations) from Newton's law
– For electrical circuits
• Current/voltage relations: C dvC(t)/dt = iC(t), L diL(t)/dt = vL(t)
• Relate to each other by Kirchhoff's laws
– Result: Vector differential equation ẋ(t) = f(x(t), u(t))
1.40
Continuous modeling: Generic steps
5. Write output variables as a function of state
and input variables
– Usually easy
– Result: Vector algebraic equation
y(t) = h(x(t),u(t))
6. Determine whether the system is linear, etc.
– Are the functions f and h linear or not?
Disclaimer:
– Generic steps seem easy, but require creativity!
– Mathematical models never the same as reality
– With any luck, close enough to be useful
1.41
Signal- und Systemtheorie II
D-ITET, Semester 4
Notes 2: Revision of ODE and
linear algebra
John Lygeros
2.2
Assumed to be known
• Matrix product, compatible dimensions
– Associative:
(AB)C = A(BC)
– Distributive with respect to addition:
A(B + C) = AB + AC
– Non-commutative: AB ≠ BA in general
• Transpose of a matrix
–
(AB)T = BT AT
• For square matrices
– Identity matrix
AI = IA = A
– In every dimension there exists a unique identity matrix
– Inverse matrix: A⁻¹A = AA⁻¹ = I
– May not always exist
– When it does it is unique
– Computation of the determinant and its basic properties
2.3
The 2-norm

Definition: The 2-norm is a function ‖·‖ : R^n → R that to each x ∈ R^n assigns the real number
‖x‖ = ( Σ_{i=1}^n x_i² )^{1/2}
The inner product of x, y ∈ R^n is ⟨x, y⟩ = Σ_{i=1}^n x_i y_i = x^T y.

Fact 2.1: For all x, y ∈ R^n, a ∈ R
1. ‖x‖ ≥ 0, and ‖x‖ = 0 if and only if x = 0
2. ‖ax‖ = |a|·‖x‖
3. ‖x + y‖ ≤ ‖x‖ + ‖y‖
4. ⟨x, x⟩ = ‖x‖²
Exercise: Prove these. For fixed y, is the function ⟨y, ·⟩ linear?

Definition: Two vectors x, y ∈ R^n are called orthogonal if ⟨x, y⟩ = 0.
A set of vectors x1, x2, …, xm ∈ R^n is called orthonormal if
⟨xi, xj⟩ = 0 if i ≠ j, and 1 if i = j
• Orthogonal: Meet at right angles
• Orthonormal: Pairwise orthogonal and unit length
Exercise: Are the vectors [0, 2]^T and [1, 0]^T orthogonal? Orthonormal?
2.5
Linear independence
Definition: A set of vectors {x1, x2, …, xm} ⊂ R^n is called linearly independent if for a1, a2, …, am ∈ R
a1 x1 + a2 x2 + … + am xm = 0 ⇔ a1 = a2 = … = am = 0
• Linearly dependent if and only if at least one is a linear combination of the rest. E.g. if a1 ≠ 0:
a1 x1 + a2 x2 + … + am xm = 0 ⇒ x1 = −(a2/a1) x2 − … − (am/a1) xm
Fact 2.3: There exists a set of n linearly independent vectors in R^n, but any set with more than n vectors is linearly dependent.
Exercise: What is a set of n linearly independent vectors of R^n?
2.6
Subspaces
Definition: A set of vectors S ⊆ R n is called a subspace of R n
if for all x, y ∈S and a,b ∈R, we have that ax + by ∈S.
2.7
Basis of a subspace
Definition: The span of {x1, x2, …, xm} ⊂ R^n is the set of all linear combinations of these vectors.
Exercise: Show that the span is a subspace.
• In fact, the span is the smallest subspace containing x1, x2, …, xm
Definition: A set of vectors {x1, x2, …, xm} ⊂ R^n is called a basis of a subspace S if it is linearly independent and its span is S.
2.11
Inverse of a square matrix
If A is invertible
A⁻¹ = adj(A) / det(A)
(adjoint matrix = matrix of subdeterminants, transposed)
Exercise: Show that
[ a b ]⁻¹ = [ d −b ] / (ad − bc)
[ c d ]     [ −c  a ]

Linear equations y = Ax, with A ∈ R^{n×m} and y ∈ R^n known, x ∈ R^m unknown:
• m = n: unique solution if and only if A invertible (Fact 2.11)
• If A singular the system will have either no solutions, or an infinite number of solutions
• n > m → more equations than unknowns → generally no solution
Fact 2.14: If A has rank m then x = (A^T A)⁻¹ A^T y is the unique minimizer of ‖Ax − y‖
• n < m → more unknowns than equations → generally infinitely many solutions
Fact 2.15: If A has rank n then the system has infinitely many solutions. The one with the minimum norm is x = A^T (A A^T)⁻¹ y
2.13
Orthogonal matrices
Definition: A matrix is called orthogonal if A A^T = A^T A = I.
Fact 2.16: A is orthogonal if and only if its columns are orthonormal. The product of orthogonal matrices is orthogonal. If A is orthogonal then ‖Ax‖ = ‖x‖.
2.14
Eigenvalues and eigenvectors
Definition: A (nonzero) vector w ∈ C^n is called an eigenvector of a matrix A ∈ R^{n×n} if there exists a number λ ∈ C such that Aw = λw; λ is then called an eigenvalue of A.
2.16
Diagonalizable matrices
If A wi = λi wi for i = 1, …, n, then
A [w1 w2 … wn] = [w1 w2 … wn] Λ
with W = [w1 w2 … wn] ∈ C^{n×n} and Λ = diag(λ1, λ2, …, λn) ∈ C^{n×n}
Definition: A is called diagonalizable if W is invertible; then A = W Λ W⁻¹
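Both relations can be verified numerically on a small example; the matrix below and its eigenpairs are my own hand-worked illustration, not from the notes:

```python
# Verify A*w_i = lambda_i*w_i and A = W*Lambda*W^{-1} for a hand-picked 2x2
A = [[2.0, 1.0], [0.0, 3.0]]          # upper triangular: eigenvalues 2, 3
eigs = [(2.0, [1.0, 0.0]), (3.0, [1.0, 1.0])]  # eigenpairs worked out by hand

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Eigenvector check: A w = lambda w for each pair
checks = [matvec(A, w) == [lam*w[0], lam*w[1]] for lam, w in eigs]

# Diagonalization check: A = W Lambda W^{-1} (W^{-1} computed by hand)
W = [[1.0, 1.0], [0.0, 1.0]]
W_inv = [[1.0, -1.0], [0.0, 1.0]]
Lam = [[2.0, 0.0], [0.0, 3.0]]
A_rebuilt = matmul(matmul(W, Lam), W_inv)
```

All the entries involved are small integers, so the checks hold exactly in floating point.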
2.17
Symmetric, positive definite and positive semi-definite matrices
Definition: A matrix is called symmetric if A = A^T.
Definition: A symmetric matrix A is called positive definite if x^T A x > 0 for all x ≠ 0, and positive semi-definite if x^T A x ≥ 0 for all x.
2.19
State Space: Inputs, outputs and states
• Mathematical model of physical system described by
– Input variables (denoted by u1, u2, …, um ∈ R)
– Output variables (denoted by y1, y2, …, yp ∈ R)
– State variables (denoted by x1, x2, …, xn ∈ R)
• All inputs, states and outputs are real numbers
• Usually write more compactly as vectors
u = (u1, u2, …, um) ∈ R^m,  y = (y1, y2, …, yp) ∈ R^p,  x = (x1, x2, …, xn) ∈ R^n
Exercise: Which of the examples in Notes 1 can be written in this form?
2.20
State space: Dynamics
• Dynamics of process imply relations between variables
– Differential equations giving evolution of states as a function of the states, inputs and possibly time, i.e. we have functions
fi(·) : R^n × R^m × R+ → R,  (d/dt) xi(t) = fi(x(t), u(t), t),  i = 1, …, n
– Algebraic equations giving the outputs as a function of the states, inputs and possibly time, i.e. we have functions
hi(·) : R^n × R^m × R+ → R,  yi(t) = hi(x(t), u(t), t),  i = 1, …, p
2.21
In vector form
• Usually write more compactly by defining
f(·) : R^n × R^m × R+ → R^n,  h(·) : R^n × R^m × R+ → R^p
f(x, u, t) = (f1(x, u, t), …, fn(x, u, t)),  h(x, u, t) = (h1(x, u, t), …, hp(x, u, t))
• Then state, input and output vectors are linked by
(d/dt) x(t) = f(x(t), u(t), t)
y(t) = h(x(t), u(t), t)
• State space form
• f called the vector field
Exercise: What are the functions f for the pendulum, RLC and opamp examples of Notes 1? What are the dimensions of these systems?
2.22
Linear and autonomous systems
Definition: A system in state space form is called time invariant if its dynamics do not depend explicitly on time:
ẋ(t) = f(x(t), u(t)),  y(t) = h(x(t), u(t))
Exercise: Which of the systems considered in Notes 1 are autonomous? Which are time invariant?
2.23
Higher order differential equations
• Often “laws of nature” expressed in terms of higher
order differential equations
– For example, Newton’s law à second order ODE
• These can be converted to state space form by
defining lower order derivatives (all except the
highest) as states
2.24
Time invariant systems
• The explicit time dependence can be eliminated by introducing time as an additional state with ṫ = 1
• The result is a time invariant system of dimension n+1
Exercise: Convert the following time varying system
ẋ(t) = f(x(t), u(t), t),  x ∈ R^n
y(t) = h(x(t), u(t), t)
of dimension n into a time invariant system of dimension n+1.
Exercise: Repeat for the linear time varying system
ẋ(t) = A(t)x(t) + B(t)u(t),  x ∈ R^n
y(t) = C(t)x(t) + D(t)u(t)
Is the resulting time invariant system linear?
2.25
Coordinate transformation
• What happens if we perform a change of coordinates for the state vector?
• Restrict attention to time invariant linear systems
ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
and linear changes of coordinates
x̂(t) = T x(t),  T ∈ R^{n×n},  det(T) ≠ 0
• In the new coordinates we get another linear system
(d/dt) x̂(t) = T A T⁻¹ x̂(t) + T B u(t)
y(t) = C T⁻¹ x̂(t) + D u(t)
• Could be useful, transformed system may be simpler
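A quick sanity check of the transformed dynamics: with Â = TAT⁻¹ we must have ÂT = TA. The matrices below are arbitrary illustrations (my own, not from the notes):

```python
# Check A_hat*T = T*A for A_hat = T*A*T^{-1} on a small example
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k]*Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0.0, 1.0], [-2.0, -3.0]]        # arbitrary 2x2 system matrix
T = [[1.0, 1.0], [0.0, 1.0]]          # det(T) = 1, so T is invertible
T_inv = [[1.0, -1.0], [0.0, 1.0]]     # inverse of T, computed by hand

A_hat = matmul(matmul(T, A), T_inv)   # dynamics in the new coordinates
lhs = matmul(A_hat, T)
rhs = matmul(T, A)
```

All the entries are small integers, so the identity holds exactly.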
2.26
Solution of state space equations
• Only autonomous systems for the time being
ẋ(t) = f(x(t)),  y(t) = h(x(t))
• Non-autonomous systems essentially the same, formal mathematics more complicated
• What is the "solution" of the system?
– Where do we start? Say x(t0) = x0 ∈ R^n, at time t0 ∈ R
– How long do we go? Say until some t1 ≥ t0
• Would like to find functions x(·) and y(·) satisfying the system equations
2.27
Solution of state space equations
Definition: A pair of functions x(·): [t0, t1] → R^n, y(·): [t0, t1] → R^p is a solution of the state space system over the interval [t0, t1] starting at x0 ∈ R^n if
1. x(t0) = x0
2. ẋ(t) = f(x(t)), ∀t ∈ [t0, t1]
3. y(t) = h(x(t)), ∀t ∈ [t0, t1]
• Note that x(·) implicitly defines y(·)
• Therefore the difficulty is finding the solution to the differential equation
• Because the system is autonomous, the starting time is also unimportant
Exercise: Show that x(t) is a solution over the interval [0, T] if and only if x(t − t0) is a solution over the interval [t0, t0+T] starting at the same initial state.
2.28
Existence and uniqueness of solutions
• For autonomous systems the "only" thing we need to do is, given f(·): R^n → R^n, x0 ∈ R^n, T ≥ 0, find a function x(·): [0, T] → R^n such that
x(0) = x0 and ẋ(t) = f(x(t)), ∀t ∈ [0, T]
Exercise: Does the function x(·) have to be continuous? Differentiable?
• Does such a function exist for some T?
• Is it unique, or can there be more than one?
• Do such functions exist for all T?
• Can we compute them even if they do?
• Clearly all these are important for physical models
• Unfortunately answer is sometimes “no”
2.29
Examples
• Illustrate potential problems on 1 dimensional systems
• No solutions: Consider the system starting at x0 = 0. The system has no solution for any T.
• Many solutions: Consider the system
ẋ(t) = 3 x(t)^{2/3},  x0 = 0
For any a ≥ 0 the following is a solution for the system
x(t) = (t − a)³ if t ≥ a,  0 if t < a
Exercise: Prove this is the case.
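The non-uniqueness claim is easy to spot-check numerically: both x ≡ 0 and the piecewise cubic satisfy ẋ = 3x^{2/3}. The sketch below compares the analytic derivative of the cubic solution with the right hand side at a few sample times (a = 1 and the sample times are my choices):

```python
# Check that x(t) = (t-a)^3 for t >= a, 0 otherwise, solves x' = 3*x^(2/3)
a = 1.0

def x_sol(t):
    return (t - a) ** 3 if t >= a else 0.0

def rhs(x):
    return 3.0 * x ** (2.0 / 3.0)

def max_residual():
    """Largest gap between d/dt x_sol and the ODE right hand side."""
    worst = 0.0
    for t in [0.0, 0.5, 1.5, 2.0, 3.0]:
        deriv = 3.0 * (t - a) ** 2 if t >= a else 0.0  # analytic derivative
        worst = max(worst, abs(deriv - rhs(x_sol(t))))
    return worst

residual = max_residual()
```

The residual vanishes (up to rounding), and the same check with x ≡ 0 is trivially zero: two different solutions from the same initial state.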
2.30
Examples
• No solutions for T large: Consider the system
ẋ(t) = 1 + x(t)²,  x0 = 0
The solution x(t) = tan(t) escapes to infinity as t → π/2, so no solution exists on [0, T] for T ≥ π/2.
2.31
Lipschitz functions

Definition: A function f(·): R^n → R^n is called Lipschitz if there exists λ ≥ 0 such that ‖f(x) − f(x̂)‖ ≤ λ‖x − x̂‖ for all x, x̂ ∈ R^n.
• λ is called the Lipschitz constant of f
• Lipschitz functions are continuous but not necessarily differentiable
• All differentiable functions with bounded derivatives are Lipschitz
• Linear functions are Lipschitz
Exercise: Show that any other λ' ≥ λ is also a Lipschitz constant.
Exercise: Show that f(x) = |x| (x a real number) is Lipschitz. What is the Lipschitz constant? Is the function differentiable?
Exercise: Show that f(x) = Ax is Lipschitz. What is a Lipschitz constant? Is the function differentiable?
Exercise: Show that the f(x) in the three pathological examples on pp. 2.30–2.31 are not Lipschitz.
2.32
Existence and uniqueness
2.33
Continuity
• Even if a unique solution exists, this does not mean we can find it
• Sometimes we can: See the pathological examples above and linear systems (Notes 3)
• Usually have to resort to simulation on computer
• Construct approximate numerical solution
• It helps if solutions that start close remain close
Theorem 2.3: If f is Lipschitz with constant λ then the solutions starting at x0, x̂0 ∈ R^n are such that for all t ≥ 0
‖x(t) − x̂(t)‖ ≤ e^{λt} ‖x0 − x̂0‖
• For systems with inputs we need x(t0) = x0 and ẋ(t) = f(x(t), u(t), t), ∀t ∈ [t0, t1]
• This is OK if f is continuous in u and t and u(·) is continuous in t
Exercise: What goes wrong in the case of discontinuity?
• Unfortunately discontinuous u(·) are quite common
• Fortunately there is a fix, but the math is harder
• Roughly speaking need
– f(x, u, t) Lipschitz in x, continuous in u and t
– u(t) continuous for "almost all" t
2.35
Signal- und Systemtheorie II
D-ITET, Semester 4
Notes 3: Continuous LTI
systems in time domain
John Lygeros
For LTI systems in state space form:
ẋ(t) = Ax(t) + Bu(t),  A ∈ R^{n×n},  B ∈ R^{n×m}
y(t) = Cx(t) + Du(t),  C ∈ R^{p×n},  D ∈ R^{p×m}
with input vector u(t) ∈ R^m, state vector x(t) ∈ R^n and output vector y(t) ∈ R^p
3.2
Block diagram representation
[Block diagram: input u(t) enters through B and D; an integrator maps ẋ(t) to x(t); A feeds x(t) back to the integrator input; C maps x(t) to the output sum, which D joins to give y(t)]
LTI systems in state space form
For LTI systems the state space equations are
– n coupled, first order, linear differential equations
– p linear algebraic equations
– with time invariant coefficients:

ẋ1(t) = a11 x1(t) + … + a1n xn(t) + b11 u1(t) + … + b1m um(t)
…
ẋn(t) = an1 x1(t) + … + ann xn(t) + bn1 u1(t) + … + bnm um(t)
y1(t) = c11 x1(t) + … + c1n xn(t) + d11 u1(t) + … + d1m um(t)
…
yp(t) = cp1 x1(t) + … + cpn xn(t) + dp1 u1(t) + … + dpm um(t)

with A = [aij] ∈ R^{n×n}, B = [bij] ∈ R^{n×m}, C = [cij] ∈ R^{p×n}, D = [dij] ∈ R^{p×m}
3.4
Examples

RLC circuit:
C dvC(t)/dt = iL(t),  L diL(t)/dt = v1(t) − vR(t) − vC(t)
ẋ(t) = [ 0      1/C  ] x(t) + [ 0   ] u(t)
       [ −1/L  −R/L ]         [ 1/L ]

Amplifier circuit:
dvC1(t)/dt = −vC1(t)/(R1C1) + v1(t)/(R1C1)
dvC0(t)/dt = −vC0(t)/(R0C0) − vC1(t)/(R1C0) + v1(t)/(R1C0)
ẋ(t) = [ −1/(R1C1)       0      ] x(t) + [ 1/(R1C1) ] u(t)
       [ −1/(R1C0)  −1/(R0C0) ]         [ 1/(R1C0) ]
y(t) = [ 0  −1 ] x(t)

cf. pendulum (nonlinear):
ẋ(t) = [ x2(t) ; −(d/m) x2(t) − (g/l) sin x1(t) ]
3.5
System solution
• Since the system is time invariant, assume we are given
– Initial condition x0 ∈ R^n at time t = 0
– Input trajectory u(·): R+ → R^m
3.6
State solution
The system solution is
x(t) = Φ(t) x0 + ∫_0^t Φ(t − τ) B u(τ) dτ
where
Φ(t) = e^{At} = I + At + A²t²/2! + … + A^k t^k / k! + … ∈ R^{n×n}
is the state transition matrix, and the integral is computed element by element.
(cf. Taylor series expansion: e^{at} = 1 + at + a²t²/2! + … if a ∈ R)
3.7
Output solution
Simply combine the state solution
x(t) = Φ(t)x0 + ∫_0^t Φ(t−τ) B u(τ) dτ
with the output map
3.8
Transition matrix properties
3.9
Proof of solution formula (sketch)
• Clearly
x(0) = Φ(0)x0 + ∫_0^0 Φ(0−τ) B u(τ) dτ = x0
3.10
Example: RC circuit
• Inputs: u(t) = vS(t) (source voltage)
• States: x(t) = vC(t)
• Initial condition: x0 = vC(0)
• State space equations
ẋ(t) = −(1/RC) x(t) + (1/RC) u(t)
• Response to step with amplitude 1V
x(t) = e^(−t/RC) x0 + (1 − e^(−t/RC))
Exercise: Derive the state space equations. What are the "matrices" A and B?
Exercise: Derive the step response.
[Circuit diagram: voltage source vS(t) in series with R, charging C with voltage vC(t), current iC(t)]
3.11
State solution structure
The solution consists of two parts:
t
x(t) = Φ(t)x0 + ∫ Φ(t − τ )Bu(τ ) d τ
0
Zero Zero
Total
transition
= Input + State
transition transition
3.12
Superposition principle
• ZST linear in input trajectory
• ZST under input u1(⋅): [0,t] → R^m:
x1(t) = ∫_0^t Φ(t−τ) B u1(τ) dτ
• ZST under input u2(⋅): [0,t] → R^m:
x2(t) = ∫_0^t Φ(t−τ) B u2(τ) dτ
• ZST under input u(τ) = a1u1(τ) + a2u2(τ) for τ ∈ [0,t], a1, a2 ∈ R:
x(t) = ∫_0^t Φ(t−τ) B u(τ) dτ = ∫_0^t Φ(t−τ) B (a1u1(τ) + a2u2(τ)) dτ = a1x1(t) + a2x2(t)
3.13
Output solution structure
The solution consists of two parts:
y(t) = CΦ(t)x0 + C ∫_0^t Φ(t−τ) B u(τ) dτ + Du(t)
Total response = Zero input response + Zero state response
3.14
Zero input transition
• If we know the state transition matrix we can (in
principle) compute all solutions of linear system
• Given matrix A we would like to compute
Φ(t) = e^(At) = I + At + … + A^k t^k/k! + …
• Many ways of doing this
– Summing infinite series (in some rare cases!)
– Using eigenvalues and eigenvectors (this set of notes)
– Using the Laplace transform (later)
– Numerically (later)
• Using eigenvalues at least two methods
– Using Cayley Hamilton Theorem (Theorem 2.1)
– Using eigenvalue decomposition (used here)
3.15
E-‐‑values and E-‐‑vectors: Rough idea
• Recall that Awi = λi wi (p. 2.15)
• ZIT: ẋ(t) = Ax(t) ⇒ x(t) = Φ(t)x(0)
• x(0) = wi ⇒ ẋ(0) = Awi = λi wi
• i.e. if we start on an e-vector we stay on the e-vector
• x(t) increases/decreases depending on the sign of λ
• E.g. n = 2, λ1 < 0, λ2 > 0
[Phase plane sketch: trajectories from initial states x(0) in the (x1, x2) plane, contracting along w1 and expanding along w2]
3.16
Transition matrix computation
• Change of coordinates using matrix of eigenvectors
• Assume matrix diagonalizable (p. 2.17)
AW = WΛ ⇒ A = WΛW⁻¹   (W the matrix of e-vectors)
Exercise: Prove by induction that A^k = WΛ^k W⁻¹
• Therefore (Fact 2.18)
Φ(t) = e^(At) = W e^(Λt) W⁻¹
3.19
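The eigendecomposition route can be sketched numerically. A minimal Python/NumPy example, using the RLC matrix of p. 5.7 (A = [[0, 2], [-1, -3]], eigenvalues −1 and −2, illustrative values) and checking against the closed-form Φ(t) derived on p. 5.8:

```python
import numpy as np

# Transition matrix via eigendecomposition: Phi(t) = W exp(Lambda t) W^{-1}
A = np.array([[0.0, 2.0], [-1.0, -3.0]])

def transition_matrix(A, t):
    lam, W = np.linalg.eig(A)          # A W = W diag(lam)
    return (W @ np.diag(np.exp(lam * t)) @ np.linalg.inv(W)).real

t = 1.0
Phi = transition_matrix(A, t)

# Closed form from the notes (p. 5.8)
e1, e2 = np.exp(-t), np.exp(-2 * t)
Phi_exact = np.array([[2*e1 - e2, 2*e1 - 2*e2],
                      [-e1 + e2, -e1 + 2*e2]])
assert np.allclose(Phi, Phi_exact)
```

The same helper gives Φ(0) = I, as the transition-matrix properties require.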
RLC circuit: ZIT
x(0) = w1 = [ 2 ; −1 ] ⇒ x(t) = Φ(t) [ 2 ; −1 ] = e^(−t) [ 2 ; −1 ]
x(t) = [ −2e^(−t) + e^(−2t) + 1 ; e^(−t) − e^(−2t) ] V → [ V ; 0 ] as t → ∞
3.21
Notes: Diagonalizable matrices
• For diagonalizable matrices, state transition matrix is a linear combination of terms of the form e^(λi t)
• Generally λi = σ ± jω ∈ C, σ, ω ∈ R
• So ZIT linear combination of terms of the form
– 1 if λi = 0 (σ = 0, ω = 0)
– e^(σt) if λi = σ (σ ≠ 0, ω = 0)
– sin(ωt) and cos(ωt) if λi = ±jω (σ = 0, ω ≠ 0)
– e^(σt) sin(ωt) and e^(σt) cos(ωt) if λi = σ ± jω (σ ≠ 0, ω ≠ 0)
• Part of ZIT corresponding to λi = σ ± jω is
– Constant if σ = 0, ω = 0
– Converges to 0 if σ < 0
– Periodic if σ = 0, ω ≠ 0
– Goes to infinity if σ > 0
3.22
Typical ZIT for diagonalizable matrices
[Figure: typical ZIT trajectories arranged over the complex λ-plane, axes Re(λ) and Im(λ)]
3.23
Stability: Zero input transition
• Consider the system
ẋ(t) = Ax(t) + Bu(t)
• Let x(t) be its ZIT
x(t) = Φ(t)x0 = e^(At) x0
Definition: The system is called stable if for all ε > 0 there exists δ > 0 such that ‖x0‖ ≤ δ ⇒ ‖x(t)‖ ≤ ε for all t ≥ 0. A system that is not stable is called unstable.
• i.e. if the state starts small it stays small (p. 2.4)
• or you can keep the state as close as you want to 0 by starting close enough
3.24
Asymptotic stability
Definition: The system is called asymptotically stable
if it is stable and in addition
x(t) → 0 as t → ∞
• i.e. not only do we stay close to 0 but also
converge to it
Exercise: Show that x(t) → 0 if and only if ‖x(t)‖ → 0
• Note that
– Definitions do not require diagonalizable matrices
– In fact we, will see that they also work for
nonlinear systems (Notes 7)
3.25
Diagonalizable matrices
Theorem 3.1: System with diagonalizable A matrix is:
• Stable if and only if Re[λi] ≤ 0 ∀i
• Asymptotically stable if and only if Re[λi] < 0 ∀i
• Unstable if and only if ∃i: Re[λi] > 0
[Phase plane plot: stable node, with eigenvector directions w1, w2]
3.27
Phase plane plots
[Phase plane plots: saddle point and unstable node, with eigenvector directions w1, w2]
3.28
Phase plane plots
[Phase plane plots: stable focus and center]
3.29
Phase plane plots
[Phase plane plot: unstable focus]
3.30
Non-‐‑diagonalizable matrices (examples)
A1 = [ 0 1 ; 0 0 ] ⇒ e^(A1 t) = [ 1 t ; 0 1 ]
A2 = [ −1 1 ; 0 −1 ] ⇒ e^(A2 t) = [ e^(−t)  t e^(−t) ; 0  e^(−t) ]
Exercise: What are the eigenvalues of A1 and A2? What are the eigenvectors? What goes wrong with their diagonalization?
Exercise: Prove the formulas for the transition matrices
3.31
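Both formulas can be verified straight from the series definition of the matrix exponential: A1 is nilpotent so the series terminates after two terms, and A2 = −I + N with N nilpotent (and I, N commuting). A small Python sketch:

```python
import numpy as np

def expm_series(A, t, terms=30):
    # Truncated series I + At + (At)^2/2! + ...
    E, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ (A * t) / k
        E = E + term
    return E

t = 0.7
# A1^2 = 0, so exp(A1 t) = I + A1 t = [[1, t], [0, 1]]
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
assert np.allclose(expm_series(A1, t), [[1.0, t], [0.0, 1.0]])

# A2 = -I + N with N nilpotent: exp(A2 t) = e^{-t}(I + N t)
A2 = np.array([[-1.0, 1.0], [0.0, -1.0]])
e = np.exp(-t)
assert np.allclose(expm_series(A2, t), [[e, t * e], [0.0, e]])
```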
Non-‐‑diagonalizable matrices (general)
• Distinct e-values → matrix diagonalizable (Fact 2.18)
• Assume some e-value repeated r times, λ1 = λ2 = … = λr = σ ± jω
• In general, ZIT linear combination of terms of the form
– 1, t, t², …, t^(r−1) if σ = 0, ω = 0
– e^(σt), t e^(σt), …, t^(r−1) e^(σt) if σ ≠ 0, ω = 0
– sin(ωt), cos(ωt), t sin(ωt), …, t^(r−1) cos(ωt) if σ = 0, ω ≠ 0
– e^(σt) sin(ωt), e^(σt) cos(ωt), …, t^(r−1) e^(σt) cos(ωt) if σ ≠ 0, ω ≠ 0
• Can be shown using generalized eigenvectors and Jordan canonical form
3.32
Non-‐‑diagonalizable matrices (general)
Note that:
• If σ < 0:
– t^k e^(σt), t^k e^(σt) cos ωt, t^k e^(σt) sin ωt → 0 as t → ∞
– Hence ZIT tends to zero (for some initial states)
• If σ > 0:
– t^k e^(σt), t^k e^(σt) cos ωt, t^k e^(σt) sin ωt → ∞ as t → ∞
– Hence ZIT tends to infinity (for some initial states)
• If σ = 0:
– 1, cos ωt, sin ωt remain bounded
– t^k, t^k cos ωt, t^k sin ωt → ∞ as t → ∞ for k ≥ 1
– ZIT may remain bounded or tend to infinity
– Cannot tell just by looking at e-values
3.33
Non-‐‑diagonalizable matrices: Stability
Theorem 3.2: The system is:
• Asymptotically stable if and only if Re[λi] < 0 ∀i
• Unstable if ∃i: Re[λi] > 0
• Subtle differences with diagonalizable case
– Asymptotic stability condition the same
– Instability condition sufficient but not necessary
• Reason is that if ∀i Re[λi] ≤ 0 but ∃i Re[λi] = 0, then stability is not determined by e-values alone:
– ZIT may remain bounded for all initial conditions
– ZIT may go to infinity for some initial conditions
– If matrix non-diagonalizable, but no e-value with Re[λi] = 0 is repeated, then Theorem 3.1 still applies
3.34
Zero state transition: Dirac function
• Can be thought of as a function of time which is
– Infinite at t = 0
– Zero everywhere else
δ(t) = { ∞ if t = 0 ; 0 if t ≠ 0 }
– Satisfies ∫_(−∞)^∞ δ(t) dt = 1
• Can be thought of as the limit as ε → 0 of (among others)
δε(t) = { 0 if t < 0 ; 1/ε if 0 ≤ t < ε ; 0 if t ≥ ε }
3.35
Impulse transition h(t) (n=m=1)
• n = m = 1 ⇒ x(t) ∈ R, u(t) ∈ R
ẋ(t) = a x(t) + b u(t),  a ∈ R, b ∈ R
• State impulse transition h(t) is ZST (x0 = 0) for u(t) = δ(t)
h(t) = ∫_0^t Φ(t−τ) B δ(τ) dτ = e^(at) ∫_0^t e^(−aτ) b δ(τ) dτ = e^(at) b
Exercise: Show that impulse transition is also ZIT for appropriate x0
• Impulse transition (RC circuit):
h(t) = Φ(t) b = (1/RC) e^(−t/RC)
• Unit step response: ZST with
u(t) = { 1, t ≥ 0 ; 0, t < 0 }  ⇒  x(t) = ∫_0^t h(t−τ) ⋅ 1 ⋅ dτ = 1 − e^(−t/RC)
3.37
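The step response as a convolution with h(t) can be checked numerically; a small Python sketch with illustrative R, C values:

```python
import numpy as np

# RC-circuit impulse transition h(t) = (1/RC) e^{-t/RC}; integrating it
# against a unit step should reproduce the step response 1 - e^{-t/RC}.
R, C = 2.0, 0.5            # illustrative values, RC = 1
RC = R * C
t_end, n = 5.0, 200001
tau = np.linspace(0.0, t_end, n)
dt = tau[1] - tau[0]
h = (1.0 / RC) * np.exp(-tau / RC)

# ZST for u = 1: x(t_end) = integral_0^{t_end} h(s) ds  (Riemann sum)
step_resp = np.sum(h) * dt
assert abs(step_resp - (1.0 - np.exp(-t_end / RC))) < 1e-3
```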
Impulse transition H(t) (general)
• For general n, m impulse transition given by matrix
H(t) = [ h11(t) … h1m(t) ; ⋮ ; hn1(t) … hnm(t) ] = Φ(t) B ∈ R^(n×m)
• hij(t) equal to xi(t) when
– x(0) = 0
– uj(t) = δ(t), uk(t) = 0 for k ≠ j
• Again, ZST is convolution of input with impulse transition
x(t) = (H ∗ u)(t)   (integral computed element by element)
3.38
Output impulse response K(t)
• Usually interested in input-output behavior
• Output impulse response: output solution to
– Input δ(t)
– Initial state x(0) = 0
• Combine impulse transition formula and output map; output impulse response given by K(t) = CΦ(t)B + Dδ(t)
4.3
Wien oscillator: Response
• For simplicity set R1 = R2 = R, C1 = C2 = C
• Autonomous system (ZIT)
• Stability determined by sign of the real part of eigenvalues
• Eigenvalues are the roots of the characteristic polynomial
det(λI − A) = 0 ⇒ λ² + ((3−k)/(RC)) λ + 1/(RC)² = 0
• For second order polynomials aλ² + bλ + c = 0 the sign of the real part of the roots is determined by the signs of a, b, c
• This is NOT true for higher order polynomials; we need the Hurwitz test
4.4
Wien oscillator: Stability
a, b, c same sign ⇔ ∀i, Re[λi] < 0 ⇔ Asymptotically stable   (Exercise: Prove this)
a, b, c not same sign ⇔ ∃i, Re[λi] > 0 ⇔ Unstable
Some of a, b, c = 0 → Degenerate case
k < 3 ⇔ ∀i, Re[λi] < 0 ⇒ Response goes to 0
k = 3 ⇔ λi = ±j/(RC) ⇔ Response oscillates with frequency ω = 1/RC
k > 3 ⇔ Re[λi] > 0 ⇒ Response goes to infinity (generally)
4.5
Wien oscillator: Eigenvalue locus
• Eigenvalues are
λ = (k − 3 ± √((k−1)(k−5))) / (2RC)   (Exercise: Show this)
• Real and negative: 0 < k ≤ 1
• Complex, negative real part: 1 < k < 3
• Imaginary: k = 3
• Complex, positive real part: 3 < k < 5
• Real and positive: k ≥ 5
• Roots real and equal (critical damping): k = 1 or 5
Exercise: Simulate the Wien oscillator for k = 0.5, 2, 3, 4, 6 and plot x1 vs. x2
Exercise: Plot locus of the e-values as k goes from 0 to infinity (in matlab)
4.6
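The closed-form locus can be cross-checked against the characteristic polynomial numerically; a Python sketch with RC = 1, covering the k values from the simulation exercise:

```python
import numpy as np

# Eigenvalues of the Wien oscillator: roots of
# lambda^2 + ((3-k)/RC) lambda + 1/(RC)^2 = 0, with closed form
# lambda = (k - 3 +/- sqrt((k-1)(k-5))) / (2 RC).
RC = 1.0
for k in [0.5, 2.0, 3.0, 4.0, 6.0]:
    roots = np.roots([1.0, (3.0 - k) / RC, 1.0 / RC**2])
    disc = complex((k - 1.0) * (k - 5.0))
    closed = [(k - 3.0 + np.sqrt(disc)) / (2 * RC),
              (k - 3.0 - np.sqrt(disc)) / (2 * RC)]
    assert np.allclose(sorted(roots, key=lambda z: (z.real, z.imag)),
                       sorted(closed, key=lambda z: (z.real, z.imag)))
```

For k = 3 this reproduces the purely imaginary pair ±j/RC; for k = 6 both roots are real and positive (unstable).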
System energy
• For the Wien oscillator
E(t) = ½ C1 vC1²(t) + ½ C2 vC2²(t) = ½ x(t)^T [ C1 0 ; 0 C2 ] x(t)
• Quadratic function of the state
E(t) = ½ x(t)^T Q x(t)
• Matrix Q
– Symmetric (Q = Q^T) (in this case diagonal)
– Positive definite: Q > 0, i.e. x^T Qx > 0 ∀x ≠ 0
• Differentiating along solutions:
Ė(t) = ½ (x(t)^T A^T + u(t)^T B^T) Q x(t) + ½ x(t)^T Q (Ax(t) + Bu(t))
     = ½ x(t)^T (A^T Q + QA) x(t) + ½ (u(t)^T B^T Q x(t) + x(t)^T Q B u(t))
• For autonomous systems or when u = 0 (ZIT)
P(t) = ½ x(t)^T (A^T Q + QA) x(t) = −½ x(t)^T R x(t),  for R = −(A^T Q + QA)
4.8
System power
• Power also a quadratic function of the state
• Matrix R is symmetric   (Exercise: Show R is symmetric)
• If it is positive definite, R > 0, i.e. x^T Rx > 0 ∀x ≠ 0, then energy decreases all the time
• Natural to assume that in this case system is stable
4.9
Lyapunov equation
• Lyapunov equation
A^T Q + QA = −R
• A and R known, linear equation in unknown Q
• Can be re-written as Âq = r, with Â built from the elements of A, q the elements of Q, r the elements of R
• Because Q and R symmetric: n(n+1)/2 equations in n(n+1)/2 unknowns
• Fact 2.11 → equation has:
– Unique solution if Â non-singular
– Multiple solutions or no solutions if Â singular
4.10
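The "Âq = r" form can be made concrete with Kronecker products, since vec(A^T Q + QA) = (I⊗A^T + A^T⊗I) vec(Q). A Python sketch for an illustrative stable A; this uses the full n² system rather than the reduced n(n+1)/2 one:

```python
import numpy as np

# Solve A^T Q + Q A = -R as a linear system A_hat q = -vec(R).
A = np.array([[0.0, 2.0], [-1.0, -3.0]])   # eigenvalues -1, -2 (stable)
R = np.eye(2)
n = A.shape[0]
I = np.eye(n)
A_hat = np.kron(I, A.T) + np.kron(A.T, I)
q = np.linalg.solve(A_hat, -R.flatten(order="F"))   # column-major vec()
Q = q.reshape((n, n), order="F")

assert np.allclose(A.T @ Q + Q @ A, -R)    # solves the Lyapunov equation
assert np.allclose(Q, Q.T)                 # symmetric ...
assert np.all(np.linalg.eigvalsh(Q) > 0)   # ... and positive definite
```

Here Â is non-singular because A is asymptotically stable, so the solution Q is unique, consistent with the Fact 2.11 case split above.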
Lyapunov functions
• Linear version of Lyapunov Theorem (Notes 7)
• Possible to solve efficiently (e.g. Matlab)
• For any R = R^T > 0, solving the Lyapunov equation A^T Q + QA = −R for the unknown Q allows us to determine stability of ẋ(t) = Ax(t):
– Unique positive definite solution → Asymptotically stable
– No solution, multiple solutions, or non-positive definite solution → Not asymptotically stable
• Resulting energy-like function V: R^n → R
V(x) = ½ x^T Qx
known as Lyapunov function
Exercise: Why is the Lyapunov equation linear? Is the Lyapunov function linear?
4.11
Input-‐‑State-‐‑Output relations
• Investigate the effect of
– Input on state
– State on output
• Two fundamental questions
1. Can I use inputs to “drive” state to desired value
2. Can I infer what the state is by looking at output
• Answer to 1. → Controllability
• Answer to 2. → Observability
• Answers hidden in structure of matrices A, B, and C
4.12
Controllability
• Consider a linear system
dx/dt (t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
x ∈ R^n, u ∈ R^m, y ∈ R^p
4.13
Observations
• In other words: for any x0, x1 we can find u(⋅): [0,t] → R^m such that x(t) = x1 starting from x(0) = x0
4.14
Observations
Fact 4.1: The system is controllable over [0, t] if and only if for all x1 ∈ R^n there exists an input u(⋅): [0,t] → R^m such that x(t) = x1 starting at x(0) = 0
Proof: (Exercise: Prove "only if" part)
If: To drive the system from x(0) = x0 to x(t) = x1, use the input that drives it from x̄(0) = 0 to x̄(t) = x1 − e^(At) x0
Fact 4.2: The system is controllable over [0, t] if and only if for all x0 ∈ R^n there exists an input u(⋅): [0,t] → R^m such that x(t) = 0 starting at x(0) = x0
Proof: (Exercise: Prove "only if" part)
If: To drive the system from x(0) = x0 to x(t) = x1, use the input that drives it from x̄(0) = x0 − e^(−At) x1 to x̄(t) = 0
4.15
Controllability gramian
Given time t, define controllability gramian
WC(t) = ∫_0^t e^(Aτ) B B^T e^(A^T τ) dτ ∈ R^(n×n)
Exercise: Show that WC(t) = WC(t)^T ≥ 0
Proof: If. Drive system from x0 ∈ R^n to x1 ∈ R^n in time t. By Fact 4.1, assume x0 = 0 and select
u(τ) = B^T e^(A^T (t−τ)) WC(t)⁻¹ x1,  τ ∈ [0,t]
Exercise: Complete the "if" part
4.16
Controllability gramian
Only if: If WC(t) is not invertible then (Fact 2.12, 2.17) there exists z ∈ R^n with z ≠ 0 such that
WC(t) z = 0 ⇔ z^T WC(t) z = 0 ⇔ ∫_0^t z^T e^(Aτ) B B^T e^(A^T τ) z dτ = 0
⇔ ∫_0^t ‖z^T e^(Aτ) B‖² dτ = 0 ⇔ z^T e^(Aτ) B = 0 for all τ ∈ [0,t]
Therefore
z^T x(t) = ∫_0^t z^T e^(A(t−τ)) B u(τ) dτ = 0
4.17
Controllability test
Define the controllability matrix
P = [ B  AB  A²B  …  A^(n−1)B ] ∈ R^(n×nm)
Theorem 4.2: The system is controllable over [0, t] if and only if the rank of P is n
The rank of P is at most n since it has n rows (Fact 2.5)
Proof: We know that the system is controllable over [0, t] if and only if WC(t) is invertible. WC(t) is invertible if and only if for z ∈ R^n
WC(t) z = 0 ⇔ z = 0
(else WC(t) has 0 as an eigenvalue, Fact 2.17). If we can show
WC(t) z = 0 ⇔ P^T z = 0
this would imply P^T has rank n if and only if WC(t) is invertible.
4.18
Controllability test: Proof
As in the proof of Fact 4.3 we can show that
WC(t) z = 0 ⇔ B^T e^(A^T τ) z = 0 for all τ ∈ [0,t]
By Taylor series, the last part holds if and only if B^T e^(A^T τ) z and all its derivatives at τ = 0 are equal to zero, in other words
B^T e^(A^T τ) z |_(τ=0) = B^T z = 0,  (d/dτ) B^T e^(A^T τ) z |_(τ=0) = B^T A^T z = 0
and so on, until
(d^(n−1)/dτ^(n−1)) B^T e^(A^T τ) z |_(τ=0) = B^T (A^T)^(n−1) z = 0
Higher derivatives (involving A^n, A^(n+1), etc.) are then automatically zero by the Cayley-Hamilton Theorem 2.1. Summarizing
WC(t) z = 0 ⇔ P^T z = 0
and the system is controllable if and only if P has rank n.
4.19
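Theorem 4.2 reduces the controllability check to matrix multiplications and a rank test; a Python sketch with illustrative matrices:

```python
import numpy as np

# Controllability matrix P = [B  AB  ...  A^{n-1}B]; controllable iff rank(P) = n
def ctrb(A, B):
    n = A.shape[0]
    blocks, M = [], B
    for _ in range(n):
        blocks.append(M)
        M = A @ M
    return np.hstack(blocks)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
assert np.linalg.matrix_rank(ctrb(A, B)) == 2      # controllable

# Input acting only along a decoupled mode -> uncontrollable
A2 = np.array([[-1.0, 0.0], [0.0, -2.0]])
B2 = np.array([[1.0], [0.0]])
assert np.linalg.matrix_rank(ctrb(A2, B2)) == 1
```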
Example: OpAmp circuit
Ideal amplifier:
i1(t) = i0(t) + iC(t) + iL(t)
⇒ vin(t)/R1 = vC(t)/R0 + C dvC(t)/dt + iL(t)
⇒ dvC(t)/dt = −(1/C) iL(t) − vC(t)/(R0C) + vin(t)/(R1C)
L diL(t)/dt = vC(t) ⇒ diL(t)/dt = vC(t)/L
[Circuit diagram: input vin through R1; R0, C (current iC, voltage vC) and L (current iL) in parallel at the amplifier node, currents i1, i0]
4.21
Observations
• Easy test for controllability
• Requires matrix multiplications and rank test, instead of integration of matrix exponential
• Proof of Theorem 4.2 implies the following
Corollary 4.1: The set of x1 ∈ R^n for which there exists u(⋅): [0,t] → R^m steering the system from x(0) = 0 to x(t) = x1 is equal to Range(P)
4.22
Minimum energy inputs
Consider as the "energy" of the input the quantity
∫_0^t u(τ)^T u(τ) dτ = ∫_0^t ‖u(τ)‖² dτ
In our example of p. 4.20
∫_0^t ‖u(τ)‖² dτ = R1 ∫_0^t vin(τ) i1(τ) dτ = R1 ⋅ (energy provided by vin)
4.23
Minimum energy inputs: Proof
Proof: In the proof of Fact 4.3 we saw that the proposed um(⋅) drives the system from x(0) = 0 to x(t) = x1. Its energy is
∫_0^t um(τ)^T um(τ) dτ = ∫_0^t x1^T WC(t)⁻¹ e^(A(t−τ)) B B^T e^(A^T (t−τ)) WC(t)⁻¹ x1 dτ
= x1^T WC(t)⁻¹ ( ∫_0^t e^(A(t−τ)) B B^T e^(A^T (t−τ)) dτ ) WC(t)⁻¹ x1 = x1^T WC(t)⁻¹ x1
To show that this energy is minimum, consider any other input u(⋅) that drives the state from x(0) = 0 to x(t) = x1. u(⋅) can be written as u(τ) = um(τ) + û(τ). So its energy will be
∫_0^t u(τ)^T u(τ) dτ = ∫_0^t (um(τ) + û(τ))^T (um(τ) + û(τ)) dτ
4.24
Minimum energy inputs: Proof
Since x(t) = x1 for both inputs, we have that
∫_0^t e^(A(t−τ)) B û(τ) dτ = 0
and
∫_0^t um(τ)^T û(τ) dτ = ∫_0^t û(τ)^T um(τ) dτ = x1^T WC(t)⁻¹ ∫_0^t e^(A(t−τ)) B û(τ) dτ = 0
Therefore
∫_0^t u(τ)^T u(τ) dτ = x1^T WC(t)⁻¹ x1 + ∫_0^t û(τ)^T û(τ) dτ ≥ ∫_0^t um(τ)^T um(τ) dτ
4.25
Observability
dx/dt (t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
x ∈ R^n, u ∈ R^m, y ∈ R^p
Definition: The system is called observable over [0, t] if given u(⋅): [0,t] → R^m and y(⋅): [0,t] → R^p we can uniquely determine the initial state x(0)
4.26
Initial state observability
• Recall that
x(t) = e^(At) x0 + ∫_0^t e^(A(t−τ)) B u(τ) dτ
• Therefore, to infer x(⋅): [0,t] → R^n given u(⋅): [0,t] → R^m it is enough to determine the initial condition x0
4.29
Observability
• Therefore a state x is unobservable if and only if Qx = 0, where
Q = [ C ; CA ; ⋮ ; CA^(n−1) ] ∈ R^(np×n)
4.30
Example: OpAmp circuit (p. 4.20)
• Q = [ 0  −1 ; 1/C  1/(R0C) ] ⇒ det(Q) = 1/C ≠ 0, therefore the system is observable
• We are only measuring one of the two states directly
• We can infer the value of the other state by its effect on the measured state through the dynamics encoded in A
• Roughly speaking, use measured state + all its derivatives to deduce the value of the unmeasured states
4.31
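The same rank test in code, for illustrative systems measuring only the first state (generic matrices, not the OpAmp ones):

```python
import numpy as np

# Observability matrix Q = [C; CA; ...; CA^{n-1}]; a state x is unobservable
# iff Qx = 0, and the system is observable iff rank(Q) = n.
def obsv(A, C):
    n = A.shape[0]
    rows, M = [], C
    for _ in range(n):
        rows.append(M)
        M = M @ A
    return np.vstack(rows)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
assert np.linalg.matrix_rank(obsv(A, C)) == 2  # second state seen via dynamics

# Decoupled unmeasured state -> unobservable
A2 = np.diag([-1.0, -2.0])
assert np.linalg.matrix_rank(obsv(A2, C)) == 1
```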
Observability Gramian
One can also construct an observability gramian
WO(t) = ∫_0^t e^(A^T τ) C^T C e^(Aτ) dτ ∈ R^(n×n)
Exercise: Show that WO(t) = WO(t)^T ≥ 0
Fact 4.5: The system is observable over [0, t] if and only
if WO(t) is invertible. If the system is observable over some
[0, t] then it is also observable over all [0, t’]
Notes
– Checking the rank of matrix Q is easier
– Rank of Q at most n (n columns)
– Time of observation is immaterial
Corollary 4.2: Set of unobservable states equal to Null(Q)
4.32
Output derivative interpretation
Consider differentiating y(t) along ẋ(t) = Ax(t) + Bu(t):
y(t) = Cx(t) + Du(t)
ẏ(t) = Cẋ(t) + Du̇(t) = CAx(t) + CBu(t) + Du̇(t)
ÿ(t) = CA²x(t) + CABu(t) + CBu̇(t) + Dü(t)
⋮
[ y(0) ; ẏ(0) ; ⋮ ; y^(n−1)(0) ] = [ C ; CA ; ⋮ ; CA^(n−1) ] x(0) + [ D 0 … 0 ; CB D … 0 ; CAB CB … 0 ; ⋮ ; CA^(n−2)B CA^(n−3)B … D ] [ u(0) ; u̇(0) ; ⋮ ; u^(n−1)(0) ]
Y = Qx(0) + KU,  Y ∈ R^(np), K ∈ R^(np×nm), U ∈ R^(nm)
4.33
Output derivative interpretation
Y = Qx(0) + KU   (Y measured, U known; solve for x(0) when Q full rank)
• System of linear equations to be solved for x(0)
• If p = 1: Q ∈ R^(n×n) has rank n ⇒ Q invertible,
x(0) = Q⁻¹(Y − KU)
• If p > 1: more equations than unknowns, least squares solution. If Q has rank n, pseudo-inverse (Fact 2.14)
x(0) = (Q^T Q)⁻¹ Q^T (Y − KU)
4.34
But …
• Differentiating measurements is a bad idea
• Noise gets amplified
• Intuition: sinusoidal signal corrupted by small amplitude, high frequency noise
y(t) = a sin(ωt) + b sin(ωn t),  b ≪ a, ωn ≫ ω
• Signal-to-noise ratio: SNR = a/b ≫ 1
ẏ(t) = ωa cos(ωt) + ωn b cos(ωn t) ⇒ SNR = aω/(bωn)
ÿ(t) = −ω²a sin(ωt) − ωn²b sin(ωn t) ⇒ SNR = aω²/(bωn²)
• Derivative of signal soon becomes useless
4.35
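The SNR formulas can be evaluated for illustrative numbers (a = 1, ω = 1 signal; b = 0.01, ωn = 1000 noise), showing each derivative degrading the ratio by ω/ωn:

```python
import numpy as np

a, omega = 1.0, 1.0        # signal amplitude and frequency
b, omega_n = 0.01, 1000.0  # small, fast noise

snr0 = a / b                               # measurement itself
snr1 = (a * omega) / (b * omega_n)         # first derivative
snr2 = (a * omega**2) / (b * omega_n**2)   # second derivative

assert abs(snr0 - 100.0) < 1e-9    # noise negligible
assert abs(snr1 - 0.1) < 1e-12     # derivative dominated by noise
assert abs(snr2 - 1e-4) < 1e-12    # second derivative useless
```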
Observers
• Instead of differentiating, build a "filter"
• Progressively construct estimate x̂(t) ∈ R^n of the state
• Start with some (arbitrary) initial guess x̂(0) ∈ R^n
• Measure y(t) and u(t)
• Update estimate according to
dx̂/dt (t) = Ax̂(t) + Bu(t) + L[y(t) − Cx̂(t) − Du(t)]
• Mimic evolution of true state, plus correction term
• Gain matrix L
• Error dynamics
e(t) = x(t) − x̂(t) ⇒ ė(t) = (A − LC) e(t)
4.36
Observers
Theorem 4.4: If the system is observable, then L can be chosen
such that eigenvalues of (A-LC) have negative real parts.
4.37
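A sketch of the error dynamics for an illustrative system; the gain L is hand-picked here (no placement algorithm) so that A − LC has eigenvalues −5 ± 4j:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[7.0], [18.0]])    # hand-picked observer gain

Acl = A - L @ C                  # error dynamics matrix
eigs = np.linalg.eigvals(Acl)
assert np.all(eigs.real < 0)     # estimation error converges to zero

# Propagate an initial error over 1 time unit via the eigendecomposition
lam, W = np.linalg.eig(Acl)
e0 = np.array([1.0, 1.0])
e1 = (W @ np.diag(np.exp(lam * 1.0)) @ np.linalg.inv(W) @ e0).real
assert np.linalg.norm(e1) < 0.1 * np.linalg.norm(e0)
```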
Kalman decomposition
There exists an invertible change of coordinates T ∈ R^(n×n) such that:
x̂(t) = Tx(t) = [ x̂1(t) ; x̂2(t) ; x̂3(t) ; x̂4(t) ]
– x̂1: controllable & observable
– x̂2: controllable & unobservable
– x̂3: uncontrollable & observable
– x̂4: uncontrollable & unobservable
Â = TAT⁻¹ = [ Â11 0 Â13 0 ; Â21 Â22 Â23 Â24 ; 0 0 Â33 0 ; 0 0 Â43 Â44 ],  B̂ = TB = [ B̂1 ; B̂2 ; 0 ; 0 ]
Ĉ = CT⁻¹ = [ Ĉ1 0 Ĉ3 0 ]
4.38
Kalman decomposition
( [ Â11 0 ; Â21 Â22 ], [ B̂1 ; B̂2 ] ) controllable,  ( [ Â11 Â13 ; 0 Â33 ], [ Ĉ1 Ĉ3 ] ) observable
[Diagram: input reaches the controllable blocks (x̂1, x̂2); output sees the observable blocks (x̂1, x̂3); x̂4(t) connects to neither]
4.39
Stabilizability and detectability
Definition: The system is detectable if all eigenvalues of Â22 and Â44 in the Kalman decomposition have negative real part.
Can design observer for the observable part ( [ Â11 Â13 ; 0 Â33 ], [ Ĉ1 Ĉ3 ] ) with overall observation error decaying to zero
Definition: The system is stabilizable if all eigenvalues of Â33 and Â44 in the Kalman decomposition have negative real part.
Can design controller for the controllable part ( [ Â11 0 ; Â21 Â22 ], [ B̂1 ; B̂2 ] ) which ensures the overall system is asymptotically stable
4.40
Signal-‐‑ und Systemtheorie II
D-‐‑ITET, Semester 4
Notes 5: Continuous LTI
systems, frequency domain
John Lygeros
• Recall that we assume f(t) = 0 for all t < 0 (p. 0.22)
• Can also be defined for matrix valued functions f: R → R^(n×m), F: C → C^(n×m)
5.2
Laplace Transform: Properties
Assumption: The function f(t) is such that the integral can be defined, i.e. f(t)e^(−st) → 0 as t → ∞ "quickly enough"
• Linearity: L{a1 f(t) + a2 g(t)} = a1 F(s) + a2 G(s)
• s shift: L{e^(−at) f(t)} = F(s + a)
• Time derivative: L{(d/dt) f(t)} = s F(s) − f(0)
• Convolution: L{(f ∗ g)(t)} = F(s) G(s)
Exercise: Prove these properties using the definition (recall discussion on p. 0.22)
5.3
Laplace Transform: Useful functions
A. Dirac function: L{δ(t)} = 1
5.4
Inverse Laplace Transform
• Defined as an integral
• Laplace transforms of interest here will be proper, rational functions
– Ratio of two polynomials in s
– Degree of numerator less than or equal to degree of denominator
• In this case use partial fractions
• Example:
L⁻¹{1/(s² + 3s + 2)} = L⁻¹{1/((s+1)(s+2))} = L⁻¹{1/(s+1) − 1/(s+2)} = e^(−t) − e^(−2t)
Exercise: Compute the Laplace transform of: f(t) = t, f(t) = t^n, f(t) = t e^(−at), f(t) = e^(−at) cos(ωt), g(t) = d²f(t)/dt², f(t) = sin(ωt + θ)
5.5
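The partial-fraction example can be sanity-checked by evaluating the Laplace integral of e^(−t) − e^(−2t) numerically at a real test point:

```python
import numpy as np

# Numerically verify L{e^{-t} - e^{-2t}}(s) = 1 / (s^2 + 3s + 2) at s = 1.5
s = 1.5
t = np.linspace(0.0, 40.0, 400001)   # truncated horizon; tail is negligible
dt = t[1] - t[0]
f = np.exp(-t) - np.exp(-2.0 * t)
F_num = np.sum(f * np.exp(-s * t)) * dt    # Riemann sum of the integral
F_exact = 1.0 / (s**2 + 3.0 * s + 2.0)
assert abs(F_num - F_exact) < 1e-4
```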
Back to LTI systems
Time domain:
dx(t)/dt = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
x(t) ∈ R^n, u(t) ∈ R^m, y(t) ∈ R^p, t ∈ R
Take Laplace Transform:
L{dx(t)/dt} = L{Ax(t) + Bu(t)} ⇒ sX(s) − x(0) = AX(s) + BU(s)
Laplace domain:
X(s) = (sI − A)⁻¹ x0 + (sI − A)⁻¹ B U(s)
Y(s) = CX(s) + DU(s)
X(s) ∈ C^n, U(s) ∈ C^m, Y(s) ∈ C^p, s ∈ C
5.6
Comparison with time domain
• Time domain solution
x(t) = e^(At) x0 + ∫_0^t e^(A(t−τ)) B u(τ) dτ
• In the Laplace domain
L{e^(At)} = (sI − A)⁻¹ ∈ C^(n×n)
5.7
Example (p. 3.18)
[RLC circuit: source vs(t), R and L in series with C]
d/dt [ vC(t) ; iL(t) ] = [ 0  1/C ; −1/L  −R/L ] [ vC(t) ; iL(t) ] + [ 0 ; 1/L ] vs(t)
(sI − A)⁻¹ = 1/(s² + 3s + 2) [ s+3  2 ; −1  s ]
5.8
Example: Transition Matrix
Φ(t) = L⁻¹{(sI − A)⁻¹} = L⁻¹{ 1/(s² + 3s + 2) [ s+3  2 ; −1  s ] }
= L⁻¹{ [ (s+3)/(s²+3s+2)  2/(s²+3s+2) ; −1/(s²+3s+2)  s/(s²+3s+2) ] }
= L⁻¹{ [ 2/(s+1) − 1/(s+2)  2/(s+1) − 2/(s+2) ; −1/(s+1) + 1/(s+2)  −1/(s+1) + 2/(s+2) ] }
Φ(t) = [ 2e^(−t) − e^(−2t)  2e^(−t) − 2e^(−2t) ; −e^(−t) + e^(−2t)  −e^(−t) + 2e^(−2t) ]   As before! (p. 3.19)
5.9
Example: Step transition (p. 3.21)
ZST with input vs(t) = V for t ≥ 0 (recall that vs(t) = 0 for t < 0)
Laplace transform: Vs(s) = V/s
X(s) = (sI − A)⁻¹ B Vs(s) = [ 2/(s(s+1)(s+2)) ; s/(s(s+1)(s+2)) ] V
= [ 1/s − 2/(s+1) + 1/(s+2) ; 1/(s+1) − 1/(s+2) ] V
(No need to compute entire (sI − A)⁻¹, just second column)
x(t) = L⁻¹{X(s)} = [ −2e^(−t) + e^(−2t) + 1 ; e^(−t) − e^(−2t) ] V
5.10
Example: Step transition
X(s) = [ 2/(s(s+1)(s+2)) ; s/(s(s+1)(s+2)) ] V
Initial value theorem: x(0) = lim_(t→0) x(t) = lim_(s→∞) s X(s)
x(0) = lim_(s→∞) [ 2/((s+1)(s+2)) ; s/((s+1)(s+2)) ] V = [ 0 ; 0 ]   (ZST)
Final value theorem: lim_(t→∞) x(t) = lim_(s→0) [ 2/((s+1)(s+2)) ; s/((s+1)(s+2)) ] V = [ V ; 0 ]
5.11
Example: Sinusoidal input
ZST with input vs(t) = V sin(t)
Laplace transform: Vs(s) = V/(s² + 1)
X(s) = (sI − A)⁻¹ B Vs(s) = [ 2/((s²+1)(s+1)(s+2)) ; s/((s²+1)(s+1)(s+2)) ] V
VC(s) = X1(s) = ( (−3s+1)/(5(s²+1)) + 1/(s+1) − 2/(5(s+2)) ) V
vC(t) = −(3V/5) cos(t) + (V/5) sin(t) + V e^(−t) − (2V/5) e^(−2t)
(external input response: first two terms; eigenvalue response: last two terms)
5.12
Example: Sinusoidal input
• The system is stable, so as t → ∞ the transient solution
V e^(−t) − (2V/5) e^(−2t) → 0
leaving the steady state solution
vC(t) → −(3V/5) cos(t) + (V/5) sin(t)
• In general, for stable systems with sinusoidal input the steady state solution is also sinusoidal with
– Same frequency as input
– Amplitude and phase determined by system matrices
5.13
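With y = vC this RLC example has transfer function G(s) = 2/((s+1)(s+2)) (p. 5.14), and the steady-state amplitude above equals |G(jω)| at ω = 1; a quick numerical check:

```python
import numpy as np

def G(s):
    # RLC transfer function for y = v_C (p. 5.14)
    return 2.0 / ((s + 1.0) * (s + 2.0))

w = 1.0
K = abs(G(1j * w))                 # steady-state amplitude gain
# Amplitude of -(3/5) cos t + (1/5) sin t (slide's steady state, V = 1)
amp = np.hypot(3.0 / 5.0, 1.0 / 5.0)
assert abs(K - amp) < 1e-12
assert abs(K - np.sqrt(10.0) / 5.0) < 1e-12
```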
Transfer function
• Consider ZSR
X(s) = (sI − A)⁻¹ B U(s)
Y(s) = CX(s) + DU(s)
⇒ Y(s) = ( C(sI − A)⁻¹ B + D ) U(s)
• Transfer function
G(s) = C(sI − A)⁻¹ B + D ∈ C^(p×m)
– If we measure y = vC: C = [ 1 0 ], D = 0 ⇒ G(s) = 2/((s+1)(s+2))
5.14
Transfer function structure
• System called
– Single input, single output (SISO) if m = p = 1
– Multi-input, multi-output (MIMO) if m or p > 1
• SISO → B, C^T vectors of dimension n, D a real number
L{K(t)} = L{C e^(At) B + D δ(t)} = C(sI − A)⁻¹ B + D
5.18
Transfer function and stability
• From our knowledge of time domain solutions
– If poles are distinct, system is
• Asymptotically stable if and only if Re[pi] < 0 ∀i
• Stable if and only if ∀i Re[pi] ≤ 0
• Unstable if and only if ∃i: Re[pi] > 0
– If poles are repeated, system is
• Asymptotically stable if and only if Re[pi] < 0 ∀i
• Unstable if ∃i: Re[pi] > 0
• If ∀i Re[pi] ≤ 0 and ∃i Re[pi] = 0, system may be stable or unstable, depending on partial fraction expansion (cf. "depending on eigenvectors" of matrix A, Notes 3)
• Provided there are no pole zero cancellations
5.19
Block diagrams
• Series: G1(s) followed by G2(s) ⇔ G2(s)G1(s)
• Parallel: G1(s) and G2(s) summed ⇔ G2(s) + G1(s)
• Negative feedback: G1(s) in the forward path, G2(s) in the feedback path ⇔ [1 + G1(s)G2(s)]⁻¹ G1(s)
Caution: MIMO transfer functions are in general matrices!
5.20
Block diagrams
[Block diagram: u → K1(s) → summing junction (minus K3(s) feedback) → K2(s) → G(s) → y]
Y(s) = [1 + G(s)K2(s)K3(s)]⁻¹ G(s)K2(s)K1(s) U(s)
• In the SISO case: composition of rational transfer functions is also a rational transfer function
• Properties of "closed loop" system studied using the same tools
Exercise: In the SISO case, show that if G(s) is strictly proper and K1(s), K2(s), K3(s) are proper, then the closed loop transfer function is strictly proper
5.21
Frequency response
• In RLC example, steady state response to sinusoidal input is sinusoidal
• More generally consider proper, stable SISO system with transfer function G(s)
• Apply u(t) = sin(ωt)
• Output settles to sinusoid y(t) = K sin(ωt + φ) with
– The same frequency, ω
– Amplitude K = |G(jω)| = √(Re[G(jω)]² + Im[G(jω)]²)
– Phase φ = ∠G(jω) = tan⁻¹(Im[G(jω)]/Re[G(jω)])
• Shown by partial fraction expansion of Y(s) = G(s) ω/(s² + ω²)
5.22
Frequency response
• Response of system to sinusoids at different
frequencies called the frequency response
• Frequency response important because
– Sinusoids are common inputs
– Directly related to any other input by Fourier transform
– Frequency response tells us a lot about system behavior
– E.g. Will it be stable under various interconnections?
• Frequency response usually summarized graphically
– Bode plots: log-log plot of |G(jω)| vs. ω, lin-log plot of ∠G(jω) vs. ω
– Nyquist plot: G(jω) in polar coordinates, parameterized by ω
– Nichols chart: log-lin plot of |G(jω)| vs. ∠G(jω), parameterized by ω
5.23
Bode plots (bode.m)
Pair of plots with the same x-axis, log(ω) (in rad/sec); y-axes are 20 log(|G(jω)|) (in dB) and ∠G(jω) (in degrees)
5.25
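The Bode data itself is easy to tabulate without bode.m; a Python sketch for the example transfer function G(s) = 2/((s+1)(s+2)) from p. 5.14:

```python
import numpy as np

# Bode data: 20 log10|G(jw)| in dB and the phase in degrees on a log grid
w = np.logspace(-2, 2, 400)
Gjw = 2.0 / ((1j * w + 1.0) * (1j * w + 2.0))
mag_db = 20.0 * np.log10(np.abs(Gjw))
phase_deg = np.degrees(np.angle(Gjw))

# DC gain G(0) = 1 -> 0 dB, ~0 degrees; two poles give -180 degrees at
# high frequency (and a -40 dB/decade magnitude slope).
assert abs(mag_db[0]) < 0.01
assert phase_deg[0] > -2.0
assert phase_deg[-1] < -170.0
```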
Resonance
• Appears in second order systems (two poles)
• Bode magnitude plot has maximum at some frequency
• Sinusoidal inputs around this frequency get amplified
• Important consequences for performance
• Second order systems very common in practice
• Example: Simplified suspension model
• For suspension example:
ωn = √(k/M) (natural frequency),  ζ = d/(2√(km)) (damping ratio),  K = −1/k (gain)
• Frequency response
G(jω) = Kωn² / ((ωn² − ω²) + j(2ζωnω))
|G(jω)| = Kωn² / √((ωn² − ω²)² + (2ζωnω)²)
∠G(jω) = −tan⁻¹( 2ζωnω / (ωn² − ω²) )
5.27
Resonance
1. For stability need ζ ≥ 0
2. For poles real (over-damped system): ζ ≥ 1
3. For poles real and equal (critical damping): ζ = 1
4. For poles complex (under-damped system): 0 < ζ < 1
5. For poles imaginary (undamped system): ζ = 0
6. For ζ ≥ 1/√2 the magnitude Bode plot is decreasing in ω
7. For 0 ≤ ζ < 1/√2 the magnitude Bode plot has a maximum
|G(jω)| = K/(2ζ√(1 − ζ²))  at  ω = ωn√(1 − 2ζ²)
Exercise: Verify 1-5
Exercise: Take the derivative of |G(jω)| to verify 6-7
5.28
Example: AFM Resonances
5.29
Transfer function realization
• Time domain description → unique transfer function
dx/dt (t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t)  →  G(s) = C(sI − A)⁻¹ B + D
• Transfer function → unique time domain description??
G(s) = ((s − z1)(s − z2)⋯(s − zk)) / ((s − p1)(s − p2)⋯(s − pn))  →?  { dx/dt (t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t) }
• Given G(s), a choice of A, B, C, D such that C(sI − A)⁻¹ B + D = G(s) is known as a realization of G(s)
• Clearly not unique, e.g. coordinate change x̂ = Tx, det(T) ≠ 0
5.30
Realization: SISO, strictly proper system
• SISO, strictly proper system
G(s) = (b1 s^(n−1) + b2 s^(n−2) + … + bn) / (s^n + a1 s^(n−1) + a2 s^(n−2) + … + an)
Two standard realizations:
ẋ(t) = [ 0 1 0 … 0 ; 0 0 1 … 0 ; ⋮ ; 0 0 0 … 1 ; −an −a(n−1) −a(n−2) … −a1 ] x(t) + [ 0 ; 0 ; ⋮ ; 0 ; 1 ] u(t),  y(t) = [ bn b(n−1) b(n−2) … b1 ] x(t)
ẋ(t) = [ 0 0 … 0 −an ; 1 0 … 0 −a(n−1) ; 0 1 … 0 −a(n−2) ; ⋮ ; 0 0 … 1 −a1 ] x(t) + [ bn ; b(n−1) ; b(n−2) ; ⋮ ; b1 ] u(t),  y(t) = [ 0 0 … 0 1 ] x(t)
5.31
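The first realization can be verified numerically for n = 2; a Python sketch using the coefficients of G(s) = 2/(s² + 3s + 2), checking C(sI − A)⁻¹B against the rational form at test points:

```python
import numpy as np

a1, a2 = 3.0, 2.0
b1, b2 = 0.0, 2.0          # G(s) = 2 / (s^2 + 3s + 2)

# First (companion-form) realization from the slide, n = 2
A = np.array([[0.0, 1.0], [-a2, -a1]])
B = np.array([[0.0], [1.0]])
C = np.array([[b2, b1]])

def G_state_space(s):
    return (C @ np.linalg.inv(s * np.eye(2) - A) @ B)[0, 0]

def G_rational(s):
    return (b1 * s + b2) / (s**2 + a1 * s + a2)

for s in [0.5, 2.0, 1.0 + 1.0j]:
    assert abs(G_state_space(s) - G_rational(s)) < 1e-12
```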
Uncontrollable and unobservable systems
ẋ(t) = [ −1 1 ; 0 1 ] x(t) + [ 1 ; 0 ] u(t), y(t) = [ 1 0 ] x(t)    and    ẋ(t) = [ −1 0 ; 1 1 ] x(t) + [ 1 ; 0 ] u(t), y(t) = [ 1 0 ] x(t)
1. In both cases transfer function G(s) = 1/(s + 1)
2. Same as ẋ(t) = −x(t) + u(t), y(t) = x(t) ∈ R
3. Original state space system unstable
4. Transfer function poles have negative real parts!
5. Pole-zero cancellation of term corresponding to uncontrollable/unobservable part
6. Can be shown in general using Kalman decomposition
Exercise: Show points 1-5
5.32
In summary
• Transfer function alternative system description to
state space
• Closely related, not equivalent
• Advantages
+ Coordinate independent
+ Easier to manipulate for system composition
+ Easier to compute response to “complicated” inputs
+ Immediate connection to steady state sinusoidal response
+ May also work for systems that do not have state space
description (e.g. delay elements)
• Disadvantages
– Less natural in terms of physical laws
– Used mostly for ZSR
– May contain less information than state space description
– Unobservable and uncontrollable parts lost
5.33
Signal-‐‑ und Systemtheorie II
D-‐‑ITET, Semester 4
Notes 6: Discrete time LTI
systems
John Lygeros
6.2
Sampled Data Systems
• In “embedded” computational systems digital
computer has to interact with analog environment.
– Measurements of physical quantities processed by
computers
– Decisions of computer applied to physical system
• Requires transformation of real valued signals of real
time to discrete valued signals of discrete time and
vice-‐‑versa
– Analog to digital conversion (A/D or ADC)
– Digital to analog conversion (D/A or DAC)
6.3
Sampled Data Systems
• Usually value quantization is quite accurate.
• Here we ignore value quantization, we concentrate
on time quantization.
• Assume:
– "ADC" → sample every T
– "DAC" → zero order hold
6.4
Sampled Data Systems
[Diagram: COMPUTER output through Z.O.H. drives the SYSTEM ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t); sampled output returns to the computer]
6.5
Sampled Data Linear Systems
How does a linear system with sampling and zero order hold look from the computer?
ẋ(t) = Ax(t) + Bu(t),  A ∈ R^(n×n), B ∈ R^(n×m)
y(t) = Cx(t) + Du(t),  C ∈ R^(p×n), D ∈ R^(p×m)
u(t) = uk for all t ∈ [kT, (k+1)T),  yk = y(kT)
For t ∈ [kT, (k+1)T):
x(t) = e^(A(t−kT)) x(kT) + ∫_(kT)^t e^(A(t−τ)) B u(τ) dτ
6.6
Sampled Data Linear Systems
x((k+1)T) = e^(AT) x(kT) + ( ∫_(kT)^((k+1)T) e^(A((k+1)T−τ)) B dτ ) uk = e^(AT) x(kT) + ( ∫_0^T e^(A(T−τ)) B dτ ) uk
y(kT) = C x(kT) + D u(kT)
Exercise: Show that ∫_(kT)^((k+1)T) e^(A((k+1)T−τ)) B dτ = ∫_0^T e^(A(T−τ)) B dτ
This gives a discrete time linear system (with new A, B matrices):
x0 = x̂0
x(k+1) = A xk + B uk,  k = 0, 1, …, N−1
yk = C xk + D uk,  k = 0, 1, …, N
6.8
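For a scalar system the new B matrix has a closed form that is easy to cross-check against the defining integral; a Python sketch with illustrative a, b, T (for ẋ = ax + bu with a ≠ 0, the integral evaluates to (e^(aT) − 1)b/a):

```python
import numpy as np

a, b, T = -2.0, 1.0, 0.1
Abar = np.exp(a * T)                      # discrete-time "A"
Bbar = (np.exp(a * T) - 1.0) / a * b      # discrete-time "B" (closed form)

# Cross-check Bbar against a Riemann sum of int_0^T e^{a(T-tau)} b dtau
taus = np.linspace(0.0, T, 100001)
dtau = taus[1] - taus[0]
Bbar_num = np.sum(np.exp(a * (T - taus[:-1])) * b) * dtau
assert abs(Bbar_num - Bbar) < 1e-4

# Continuous-time stability (a < 0) maps to |Abar| < 1
assert abs(Abar) < 1.0
```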
Solution of Discrete Time Linear Systems
xk = A^k x̂0 + Σ_(i=0)^(k−1) A^(k−i−1) B ui
(first term: ZIT; sum: ZST)
Exercise: Prove this by induction
6.9
Computation of solution
• Hard part is computation of A^k (cf. e^(At))
• If matrix is diagonalizable, A^k = W Λ^k W⁻¹ with
Λ^k = [ λ1^k … 0 ; ⋮ ; 0 … λn^k ]
Definition: The system is called stable if for all ε > 0 there exists δ > 0 such that ‖x0‖ ≤ δ ⇒ ‖xk‖ ≤ ε for all k = 0, 1, … It is called asymptotically stable if in addition lim_(k→∞) xk = 0. A system that is not stable is called unstable.
6.10
Stability, diagonalizable matrices
• If A is diagonalizable, A^k is a linear combination of the λ_i^k
• λ_i = σ_i ± jω_i,  |λ_i| = √(σ_i² + ω_i²)

Exercise: Show that if A is diagonalizable and ∀i, Re[λ_i] < 0, then e^{AT} is diagonalizable and all of its eigenvalues μ_i = e^{λ_i T} satisfy |μ_i| < 1.
6.12
Deadbeat response
• Assume all eigenvalues of A are zero
• Then x_k = A^k x_0 = 0 for all k ≥ n (in general proved using the Jordan form)
• The ZIT gets to 0 in finite time and stays there.
• This never happens with continuous time systems.
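The deadbeat behavior can be checked directly: for a 3×3 shift matrix (nilpotent, all eigenvalues zero) the zero input trajectory vanishes exactly after n = 3 steps. A small sketch:

```python
import numpy as np

# Shift matrix: all eigenvalues zero, A^3 = 0
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
x = np.array([1.0, 2.0, 3.0])
norms = []
for k in range(5):
    x = A @ x
    norms.append(np.linalg.norm(x))
# norms[k] = ||x_{k+1}||: nonzero for the first two steps, exactly zero from step 3 on
```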
6.13
Coordinate change
• Assume x̂_k = T x_k for some invertible T ∈ R^{n×n}
6.14
Energy and Power
• Consider the “energy like” function V(x_k) = (1/2) x_kᵀ Q x_k with Q = Qᵀ positive definite
• If x_{k+1} = A x_k (autonomous system), then

V(x_{k+1}) − V(x_k) = (1/2) x_kᵀ (AᵀQA − Q) x_k
6.15
Stability and Energy
• If AᵀQA − Q = −R with R positive definite, then the energy decreases all the time

Exercise: Show that R = Rᵀ

• Natural to conjecture that the system is then stable

Theorem 6.3: |λ_i| < 1 for all i = 1, 2, …, n if and only if for all R = Rᵀ positive definite the equation AᵀQA − Q = −R has a unique solution Q with Q = Qᵀ positive definite.
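The discrete Lyapunov equation AᵀQA − Q = −R is linear in Q, so it can be solved by vectorization. A minimal Python sketch (the test system is chosen arbitrarily; the Kronecker identity vec(AᵀQA) = (Aᵀ ⊗ Aᵀ) vec(Q) does the work):

```python
import numpy as np

def discrete_lyapunov(A, R):
    """Solve A^T Q A - Q = -R via (I - kron(A^T, A^T)) vec(Q) = vec(R)."""
    n = A.shape[0]
    M = np.eye(n * n) - np.kron(A.T, A.T)
    return np.linalg.solve(M, R.flatten()).reshape(n, n)

A = np.array([[0.5, 0.2], [0.0, 0.7]])   # eigenvalues 0.5 and 0.7: |lambda_i| < 1
R = np.eye(2)
Q = discrete_lyapunov(A, R)

residual = np.max(np.abs(A.T @ Q @ A - Q + R))   # should be ~0
Q_eigs = np.linalg.eigvalsh(Q)                   # should all be positive
```

Since |λ_i λ_j| < 1 for all pairs, I − Aᵀ ⊗ Aᵀ is invertible and the solution is unique and positive definite, as Theorem 6.3 asserts.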
6.16
Controllability
• The system is controllable if we can steer it from any initial condition x̂_0 ∈ R^n to any final condition x̂_N ∈ R^n
• Controllability matrix:

P = [B  AB  A²B  …  A^{n−1}B] ∈ R^{n×nm}
6.17
Observability
• The system is observable if we can infer the state evolution by observing the input and output sequences
• Assume N ≥ n − 1
• Define again the observability matrix

Q = [C ; CA ; … ; CA^{n−1}] ∈ R^{np×n}

Theorem 6.5: The system is observable if and only if Q has rank n.

Exercise: Prove this
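Both rank conditions are easy to check numerically. A small Python sketch (the two-delay example system is illustrative, not from the notes):

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix P = [B  AB ... A^{n-1}B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    """Observability matrix Q = [C; CA; ...; CA^{n-1}]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

rank_P = np.linalg.matrix_rank(ctrb(A, B))
rank_Q = np.linalg.matrix_rank(obsv(A, C))
# Both equal n = 2: this example is controllable and observable
```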
6.18
z Transform
• A time function f_k is converted to a complex variable function F(z)

f : N → R,  F : C → C

F(z) = Z{f_k} = Σ_{k=0}^{∞} f_k z^{−k},  f_k = Z^{−1}{F(z)}

• We implicitly assume that f_k = 0 for all k < 0 (cf. p.0.22)
• Can also be defined for matrix valued functions by taking the sum element by element
• z^{−1} can be thought of as a unit time delay (equivalently, a block with transfer function z maps f_k to f_{k+1})
6.19
z Transform: Properties
Assumption: The function f_k is such that the sum converges

• Linearity: Z{a_1 f_k + a_2 g_k} = a_1 F(z) + a_2 G(z)
• Time shift: Z{f_{k−k_0}} = z^{−k_0} F(z)
• Convolution: Z{(f ∗ g)_k} = Z{ Σ_{i=0}^{k} f_i g_{k−i} } = F(z) G(z)

Exercise: Prove these

• Some common functions:
– Impulse function: Z{δ_k} = 1  (δ_0 = 1, δ_k = 0 if k ≠ 0)
– Step function: Z{1_k} = z/(z−1)  (1_k = 1 for k ≥ 0, 1_k = 0 for k < 0)
– Geometric progression: Z{a^k} = z/(z−a)  (|a| < 1)
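The geometric progression entry can be verified numerically: for |z| > |a| the partial sums of Σ a^k z^{−k} approach z/(z − a). A quick sketch (the values of a and z are chosen arbitrarily):

```python
# Partial sums of the z transform of f_k = a^k, evaluated at a point with |z| > |a|
a = 0.5
z = 2.0
partial_sum = sum((a / z) ** k for k in range(200))
closed_form = z / (z - a)
err = abs(partial_sum - closed_form)  # tiny: the ratio a/z = 0.25 decays fast
```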
6.20
Transfer function
• Assume x_0 = 0
• Take the z transform of all signals

Y(z) = [ C(zI − A)^{−1} B + D ] U(z)

where C(zI − A)^{−1}B + D is the transfer function.

Exercise: Show that the transfer function is the z-transform of the “impulse response” (appropriately defined!)
6.21
Transfer function
G(z) = C (zI − A)^{−1} B + D
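The exercise above can be checked numerically: the impulse response of x_{k+1} = Ax_k + Bu_k, y_k = Cx_k + Du_k is h_0 = D, h_k = CA^{k−1}B for k ≥ 1, and Σ h_k z^{−k} should match C(zI − A)^{−1}B + D. A sketch with an arbitrarily chosen stable example:

```python
import numpy as np

A = np.array([[0.5, 0.1], [0.0, 0.6]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.2]])
z = 2.0   # evaluation point outside the spectrum of A

G = C @ np.linalg.inv(z * np.eye(2) - A) @ B + D   # transfer function

# z transform of the impulse response: h_0 = D, h_k = C A^{k-1} B for k >= 1
series = D.astype(float).copy()
for k in range(1, 200):
    series = series + C @ np.linalg.matrix_power(A, k - 1) @ B * z ** (-k)

err = abs(G[0, 0] - series[0, 0])  # tail decays like (0.6/2)^k, so this is tiny
```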
6.24
For example
[Figure: block diagram with internal signals w_1 and w_2]
6.25
For example, the Euler approximation with step δ:

x_{k+1} = (I + Aδ) x_k + δB u_k
6.27
Numerical approximation
Integration using the Euler method: a first order approximation of the equation, replacing ẋ(t) by (x(t+δ) − x(t))/δ
6.28
Zero input response
• Consider the autonomous system ẋ(t) = Ax(t)
• Solution is x(t) = e^{At} x_0 = W e^{Λt} W^{−1} x_0, where W is the eigenvector matrix (invertible)
6.30
Stability of numerical approximation
• The Euler approximation x_{k+1} = (I + δA) x_k has eigenvalues 1 + δλ_i
• Then the approximation is asymptotically stable if and only if |1 + δλ_i| < 1 for all i

Exercise: Prove this
6.31
RLC circuit with R = 3Ω, L = 1H, C = 0.5F
[Figure: numerical approximation for δ = 0.01 and δ = 0.05]
6.32
RLC circuit with R = 3Ω, L = 1H, C = 0.5F
[Figure: «exact» solution vs. numerical approximation for δ = 0.25 and δ = 1.25; instability for δ = 1.25!]
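The instability threshold can be reproduced in a few lines of Python. For this RLC circuit the eigenvalues of A are −1 and −2, so I + δA has eigenvalues 1 − δ and 1 − 2δ, and |1 − 2δ| < 1 requires δ < 1, consistent with δ = 1.25 blowing up. A sketch (the state space realization is an assumption, chosen to match the characteristic polynomial λ² + 3λ + 2):

```python
import numpy as np

R, L, C = 3.0, 1.0, 0.5
A = np.array([[0.0, 1.0],
              [-1.0 / (L * C), -R / L]])   # eigenvalues -1 and -2

def euler_norm(delta, steps=200):
    """Norm of x after `steps` Euler iterations x_{k+1} = (I + delta*A) x_k."""
    x = np.array([1.0, 0.0])
    M = np.eye(2) + delta * A
    for _ in range(steps):
        x = M @ x
    return np.linalg.norm(x)

small = euler_norm(0.25)   # delta < 1: iterate decays toward zero
large = euler_norm(1.25)   # delta > 1: |1 - 2*delta| = 1.5 > 1, iterate blows up
```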
6.34
Signal- und Systemtheorie II
D-‐‑ITET, Semester 4
Notes 7: Nonlinear systems
John Lygeros
• Assume the function f satisfies the Lipschitz condition

∃λ > 0, ∀x, x̂ ∈ R^n, ‖f(x) − f(x̂)‖ ≤ λ ‖x − x̂‖
• This implies existence and uniqueness of solutions
• In general solution cannot be computed analytically
• Simulation methods applicable however
• Look into the following issues
– Invariant sets
– Stability of invariant sets
7.3
Invariant sets
• Generalization of notion of equilibrium
7.4
Equilibria
• Linear systems have a linear subspace of equilibria
– Sometimes only x̂ = 0
– More generally, the null space of the matrix A

Exercise: Show that the equilibria of ẋ(t) = Ax(t) coincide with the null space of A

• Nonlinear systems can have many isolated equilibria
• Example: The pendulum from Notes 1 has 2 equilibria

ẋ(t) = [ x_2(t) ; −(d/m) x_2(t) − (g/l) sin x_1(t) ]  ⇒  x̂ = (0, 0),  x̂′ = (π, 0)

Exercise: Show this

(More precisely, the number of pendulum equilibria is infinite, but they all coincide physically with these two)
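The exercise can be checked by evaluating the vector field at the two candidate equilibria. A minimal sketch (the parameter values are placeholders, not from the notes):

```python
import math

g, l, m, d = 9.81, 1.0, 1.0, 0.5   # placeholder parameter values

def f(x1, x2):
    """Pendulum vector field."""
    return (x2, -(d / m) * x2 - (g / l) * math.sin(x1))

eq_down = f(0.0, 0.0)      # hanging equilibrium: exactly zero
eq_up = f(math.pi, 0.0)    # inverted equilibrium: zero up to floating point
```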
7.5
Pendulum for d=0
7.6
Shifting equilibria to the origin
• It is often convenient to “shift” an equilibrium to the
origin before analyzing the system behavior
• This involves the change of coordinates w(t) = x(t) − x̂ ∈ R^n
7.7
Limit cycles
• Observed only in systems of dimension 2 or more
Definition: A solution x(t) is called a periodic orbit if
∃T > 0,∀t ≥ 0, x(t + T ) = x(t)
7.9
Example: van der Pol oscillator, ε=1
[Figure: trajectories spiraling from an unstable equilibrium onto a stable limit cycle]

Exercise: Let x_{k+1} = f(x_k) be a nonlinear system in discrete time. How would you define periodic orbits and limit cycles for this system? (cf. p.1.28)
7.10
Strange attractors
• In 2D continuous time, equilibria & limit cycles are as bad as it gets (Poincaré-Bendixson Theorem)
• In higher dimensions stranger things may happen
– Invariant tori
– Chaotic attractors
• Example: Lorenz equations
– Developed by E.N. Lorenz
– To capture atmospheric phenomena
7.11
Chaotic attractor
• For some parameter values, there is a bounded subset of the state space such that if we start inside we stay there for ever and
– Most trajectories go around for ever,
– Without ever meeting themselves (not limit cycles)
• Given any two points in this set we can find a trajectory that starts arbitrarily close to one and ends up arbitrarily close to the other
• This set is called a chaotic or strange attractor
7.12
Lorenz attractor simulation
7.13
Stability
• Most commonly studied property of invariant sets
• Trajectories stay close or converge to the invariant set
• Restrict attention to equilibria
• Simple characterization for LTI systems and the equilibrium x̂ = 0
– System stable if the eigenvalues of A have negative real part
– Equivalently, the poles of the transfer function are in the left half of the complex plane
7.14
Asymptotic stability
• Stability says that if we start close we stay close
• Do we get closer and closer?

Definition: An equilibrium x̂ is called locally asymptotically stable if it is stable and there exists M > 0 such that ‖x_0 − x̂‖ < M ⇒ lim_{t→∞} x(t) = x̂. It is called globally asymptotically stable if this holds for any M > 0. The set of x_0 such that lim_{t→∞} x(t) = x̂ is called the domain of attraction of x̂.

Exercise: What is the domain of attraction of a globally asymptotically stable equilibrium?

Exercise: Is there a difference between local and global asymptotic stability for linear systems?
7.15
Example: Pendulum with d>0
7.16
Linearization
• A simple way to study stability of an equilibrium of a nonlinear system is to approximate it by a linear system

ẋ(t) = f(x(t)),  f(x̂) = 0  (x̂ equilibrium)

• Take a Taylor expansion about x̂

f(x) = f(x̂) + A(x − x̂) + higher order terms in (x − x̂)
     = A(x − x̂) + higher order terms in (x − x̂)

where

x = [x_1 ; … ; x_n],  f(x) = [f_1(x_1, …, x_n) ; … ; f_n(x_1, …, x_n)]

A = [ ∂f_1/∂x_1(x̂) … ∂f_1/∂x_n(x̂) ; ⋮ ; ∂f_n/∂x_1(x̂) … ∂f_n/∂x_n(x̂) ] ∈ R^{n×n}
7.17
Linearization
• Consider the distance of x to the equilibrium: δx(t) = x(t) − x̂ ∈ R^n
• When x is close to the equilibrium, δx is small and
dδ x(t)
≈ Aδ x(t)
dt
• So close to equilibrium nonlinear system expected to
behave like a linear system
• In particular, stability of the linearization should tell
us something about stability of nonlinear system
• Stability of linearization can be determined just by
looking at the eigenvalues of A
7.18
Stability by linearization

Theorem 7.1: If all eigenvalues of A have negative real part, then the equilibrium x̂ is locally asymptotically stable for the nonlinear system. If some eigenvalue of A has positive real part, then x̂ is unstable.
7.19
Pendulum example, d>0
• Linearization about x̂ = (0, 0):

dδx(t)/dt = [ 0  1 ; −g/l  −d/m ] δx(t)  ⇒  λ² + (d/m)λ + g/l = 0

• Eigenvalues have negative real part, hence the equilibrium is locally asymptotically stable
• Linearization about x̂ = (π, 0):

dδx(t)/dt = [ 0  1 ; g/l  −d/m ] δx(t)  ⇒  λ² + (d/m)λ − g/l = 0

• At least one eigenvalue has positive real part, hence the equilibrium is unstable
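The same conclusions can be reached numerically by forming the Jacobian at each equilibrium and inspecting its eigenvalues. A sketch (the parameter values are placeholders):

```python
import numpy as np

g, l, m, d = 9.81, 1.0, 1.0, 0.5   # placeholder parameter values

def pendulum_jacobian(x1_hat):
    """Jacobian of the pendulum vector field at the equilibrium (x1_hat, 0)."""
    return np.array([[0.0, 1.0],
                     [-(g / l) * np.cos(x1_hat), -d / m]])

eig_down = np.linalg.eigvals(pendulum_jacobian(0.0))     # about (0, 0)
eig_up = np.linalg.eigvals(pendulum_jacobian(np.pi))     # about (pi, 0)
# (0, 0): all real parts negative; (pi, 0): one positive real eigenvalue
```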
7.20
Linearization can be inconclusive
• Notice that if d = 0
– Linearization about x̂ = (π, 0) has a positive eigenvalue
– Hence x̂ = (π, 0) is unstable for the nonlinear system
– Linearization about x̂ = (0, 0) has imaginary eigenvalues
– Stability of x̂ = (0, 0) is not determined from Theorem 7.1
• It turns out that this equilibrium is stable (see fig. on p.7.6)
• This is not always the case
• For example, the linearization of both
7.21
Lyapunov functions
• In linear systems stability is characterized in two ways
– Eigenvalues of the matrix A (Theorems 3.1, 3.2), or poles of the transfer function (p.5.19)
– Existence of a decreasing energy-like function (Theorem 4.1)
• The first applies to nonlinear systems; how about the second?
• Properties of the energy-like function for linear systems
1. Quadratic function of the state: V(x) = (1/2) xᵀQx
2. Q positive definite ⇒ V(x) > 0 for all x ≠ 0, V(0) = 0
3. Power also quadratic in the state: (d/dt)V(x) = −(1/2) xᵀRx
4. R = −(AᵀQ + QA) positive definite ⇒ V(x) decreases for all x ≠ 0
7.22
Lyapunov functions: Stability
Theorem 7.2: Assume there exists an open set S ⊆ R^n with x̂ ∈ S and a differentiable function V(·) : R^n → R such that
1. V(x̂) = 0
2. V(x) > 0, ∀x ∈ S with x ≠ x̂
3. (d/dt)V(x(t)) ≤ 0, ∀x ∈ S
Then the equilibrium x̂ is stable.

• Called Lyapunov's second or direct method
• The function V(x) is known as a Lyapunov function
• Its derivative along trajectories is known as the Lie derivative

(d/dt)V(x(t)) = Σ_{i=1}^{n} (∂V/∂x_i)(x(t)) (d/dt)x_i(t) = Σ_{i=1}^{n} (∂V/∂x_i)(x(t)) f_i(x(t)) = ∇V(x(t)) f(x(t))
7.23
“Proof”: By picture!
S_c = { x ∈ S | V(x) ≤ c }

[Figure: sublevel sets S_c of V around x̂ inside S, with the ball of radius δ]
7.24
Example: Pendulum for d=0
• Recall that linearization could not determine the stability of x̂ = (0, 0) when d = 0
• Consider the energy

V(x) = (1/2) m (l θ̇)² + mgl (1 − cos θ) = (1/2) m l² x_2² + mgl (1 − cos x_1)

[Figure: pendulum of mass m and length l at angle θ from the vertical]
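For d = 0 the energy should be conserved along trajectories (its Lie derivative is zero), which a simulation confirms. A sketch using a fixed step Runge-Kutta integrator (step size, horizon and initial condition chosen arbitrarily; parameters are placeholders):

```python
import math

g, l, m = 9.81, 1.0, 1.0   # placeholder values, no damping (d = 0)

def f(x):
    return (x[1], -(g / l) * math.sin(x[0]))

def V(x):
    """Pendulum energy."""
    return 0.5 * m * l**2 * x[1]**2 + m * g * l * (1.0 - math.cos(x[0]))

def rk4_step(x, h):
    k1 = f(x)
    k2 = f((x[0] + h / 2 * k1[0], x[1] + h / 2 * k1[1]))
    k3 = f((x[0] + h / 2 * k2[0], x[1] + h / 2 * k2[1]))
    k4 = f((x[0] + h * k3[0], x[1] + h * k3[1]))
    return (x[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            x[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

x = (1.0, 0.0)
E0 = V(x)
for _ in range(5000):            # 10 seconds with h = 0.002
    x = rk4_step(x, 0.002)
energy_drift = abs(V(x) - E0)    # tiny: V is numerically conserved
```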
7.25
Example: Pendulum for d=0
[Figure: surface plot of V(x) over (x_1, x_2)]

• Take S = (−π, π) × R and check the theorem conditions
1. V(0) = 0
7.26
Lyapunov functions: Asymptotic stability
Theorem 7.3: Assume there exists an open set S ⊆ R^n with x̂ ∈ S and a differentiable function V(·) : R^n → R such that
1. V(x̂) = 0
2. V(x) > 0, ∀x ∈ S with x ≠ x̂
3. (d/dt)V(x(t)) < 0, ∀x ∈ S with x ≠ x̂
Then the equilibrium x̂ is locally asymptotically stable. If S = R^n then it is globally asymptotically stable.

• Lyapunov functions can help estimate the domain of attraction: if we can find c > 0 such that

{ x ∈ R^n | V(x) ≤ c } ⊆ S

then trajectories that start in this set stay in it and converge to x̂
7.27
Examples
• Consider first ẋ(t) = f(x(t)) = −x(t)³, where x̂ = 0
• Let S = R, V(x) = x²
• Clearly V(0) = 0, V(x) > 0 ∀x ≠ 0, and (∂V/∂x)(x) f(x) = −2x⁴ < 0 ∀x ≠ 0
• Therefore 0 is globally asymptotically stable
• How about the pendulum with d > 0?
• As before consider S = (−π, π) × R and V(x) the energy

(d/dt)V(x(t)) = m l² x_2(t) ẋ_2(t) + mgl sin(x_1(t)) ẋ_1(t) = −d l² x_2(t)² ≤ 0

• But (d/dt)V(x(t)) = 0 whenever x_2(t) = 0 (not only at x̂ = (0, 0)), therefore we cannot conclude local asymptotic stability
7.28
La Salle’s Theorem
Theorem 7.4: Assume there exists a compact invariant set S ⊆ R^n and a differentiable function V(·) : R^n → R such that

∇V(x) f(x) ≤ 0 ∀x ∈ S

Let M be the largest invariant set contained in the set

S̄ = { x ∈ S | ∇V(x) f(x) = 0 } ⊆ R^n

Then all trajectories starting in S tend to M as t → ∞
7.29
Pendulum with d > 0
• Take V(x) the energy and S = { x ∈ R² | V(x) ≤ 2mgl − ε } for any ε > 0 (2mgl is the energy when the pendulum is stopped upside down)

Exercise: Show that S is a compact invariant set

[Figure: the set S in the (x_1, x_2) plane]

• Recall that

∇V(x) f(x) = −d l² x_2²  with  ≤ 0 ∀x ∈ S,  = 0 when x_2 = 0
7.30
Pendulum with d > 0
• Therefore S̄ = { x ∈ S | x_2 = 0 }
• x̂ = (0, 0) is the only invariant set contained in S̄, since ẋ_2 ≠ 0 if x_2 = 0 but x_1 ≠ 0
• Therefore all trajectories that start in S tend to x̂ = (0, 0)
• By Theorem 7.2, x̂ = (0, 0) is stable
• Hence, by Theorem 7.4, it is locally asymptotically stable
• Moreover, since ε is arbitrary, the domain of attraction of (0, 0) contains everything except the other equilibrium (π, 0)
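The La Salle argument can be illustrated in simulation: with d > 0, a trajectory starting in S (energy below 2mgl) settles at (0, 0). A sketch (the integrator, parameter values and initial condition are illustrative):

```python
import math

g, l, m, d = 9.81, 1.0, 1.0, 0.5   # placeholder values, d > 0

def f(x):
    return (x[1], -(d / m) * x[1] - (g / l) * math.sin(x[0]))

def rk4_step(x, h):
    k1 = f(x)
    k2 = f((x[0] + h / 2 * k1[0], x[1] + h / 2 * k1[1]))
    k3 = f((x[0] + h / 2 * k2[0], x[1] + h / 2 * k2[1]))
    k4 = f((x[0] + h * k3[0], x[1] + h * k3[1]))
    return (x[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            x[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# Start far from (0, 0) but with energy below 2*m*g*l, i.e. inside S
x = (2.5, 0.0)
for _ in range(20000):             # 100 seconds with h = 0.005
    x = rk4_step(x, 0.005)
distance = math.hypot(x[0], x[1])  # trajectory has settled at (0, 0)
```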
7.31
General comments
• Theorem 7.4 applies to more general invariant sets (e.g. limit cycles)
• Theorems 7.2 and 7.3 also generalize easily
• Theorem 7.1 is slightly harder to generalize (linearization about trajectories, Poincaré maps)
• Conditions of Theorems 7.2-7.4 are sufficient but not necessary
• Finding Lyapunov functions for nonlinear systems is an art, not a science. Common choices:
– Energy for mechanical and electrical systems
– Quadratics (always work for linear systems)
– Intuition!
7.32