
Signal- und Systemtheorie II
D-ITET, Semester 4

Notes 0: Introduction

John Lygeros

Automatic Control Laboratory, ETH Zürich

www.control.ethz.ch
Where, when & what?
•  Lectures
  –  Thursdays, 8:00-10:00, ETF E1 (and today)
•  Examples classes:
  –  Mondays, 13:00-15:00, ETF C1 and ETF E1
•  Assessment
  –  Written examination
  –  Optional mid-terms: April 4 and May 9
  –  Mid-term grades count 15%+15%, only together, only positive
  –  Usual exemptions (military, illness, with certificate) apply
  –  Possibly unconventional format

0.2
Where,  when  &  what?
•  Exercises
–  Examples  papers,  ~  4  exercises  on  the  topic  of  the  week
–  Discussed  in  examples  classes
–  In  addition,  exercises  in  lecture  notes
–  Neither  part  of  class  credit
–  Integral  part  of  the  learning  experience  nonetheless
–  Example  paper  exercises  in  style  of  final  exam  questions
–  Please  try  to  do  them  and  discuss  with  instructor  and  
assistants  if  you  have  questions
–  Please attend the examples classes
–  Feel  free  to  submit  your  solutions  for  grading

0.3
Reading  material
•  Lecture  notes
–  Slides  handout,  available  on  class  webpage  &  Moodle
–  Blackboard  notes
•  Recommended  book
–  K.J. Åström and R.M. Murray: “Feedback Systems: An Introduction for Scientists and Engineers”, Princeton U.P., 2008
  –  http://www.cds.caltech.edu/~murray/amwiki/index.php/Main_Page
•  Other excellent books
  –  G.F. Franklin, J.D. Powell, and A. Emami-Naeini, “Feedback Control of Dynamic Systems”, Prentice-Hall, 2006 (also used in Regelsysteme I/II)
  –  J. Hespanha, “Linear Systems Theory”, Princeton U.P., 2009
  –  T. Kailath, “Linear Systems”, Prentice-Hall, 1980
  –  E.A. Lee and P. Varaiya, “Structure and Interpretation of Signals and Systems”, Addison-Wesley, 2002

0.4
The  flipped  classroom  concept
•  ETH  TORQUE  pilot  course  in  2014
–  Tiny, Open-with-Restrictions courses focused on QUality and Effectiveness
  –  “Flipped classroom” concept
  –  Use of web and mobile technology before, during, and after lecture

•  Available  sources  of  online  content,  different  purposes


–  ETH  learning  management  tool  (Moodle)
•  Our weekly off-line preparation tool for the class
–  ETH  mobile  application  (EduApp)
•  Real  time  Q&A  in  the  classroom
–  Experimental  adaptive  learning  platform  (Albie)
•  Exam  preparation  and  interactive  learning  (trust  Albie)
–  Class  webpage
•  Repository  with  links  to  all  of  the  above

0.5
Moodle:  Learning  management
•  Official website for the Signals and Systems II course
  –  https://moodle-app2.let.ethz.ch

•  Log in using your ETH account and register for the Signals and Systems II course
•  What you will find:
  –  Short video tutorials on course material
  –  Quizzes designed to test your understanding of course material
  –  Exercises from the examples papers
  –  Forum to interact and ask questions about the course material
•  How it will be used:
  –  Videos and quizzes will be assigned before the lectures
  –  The lecture will build on top of these assignments by adding more in-depth material in a (hopefully) flipped classroom atmosphere

0.6
EduApp:  Interactive  lectures
•  EduApp  can  be  found  at
–  http://www.eduapp.ethz.ch

•  Install  the  app  on  your  iPhone  or  Android  mobile  phone
•  Log  in  using  your  ETH  account  –  you  should  automatically  
see  the  SSII  course  if  you  are  registered.
•  What  you  will  find:
–  An  interactive  platform  that  can  be  used  during  the  lecture
•  How  it  will  be  used:
–  Questions  will  be  posed  during  some  lectures  and  example  sessions  
and  students  will  be  asked  to  contribute  answers
–  Back  channel  available  where  students  can  ask  questions  
anonymously

0.7
Albie  (Optional,  self  paced  study)
•  The  experimental  platform  Albie  can  be  found  at
–  http://www.albie.co     (Yes, that is .co NOT .com)

•  Register  using  an  ETH  Zurich  email  address  (must  end  in  
ethz.ch).  After  logging  in  for  the  first  time,  go  to  search  and  
join  the  Signals  and  Systems  course
•  What  you  will  find:
–  An  experimental  adaptive  learning  platform
•  How  it  will  be  used:
–  Optional,  not  used  in  assignments  during  the  semester
–  Personalized,  “non-­‐‑linear”  content  sequence
–  Last year many students used it during their exam preparation
–  Search for content or “trust Albie” to tell you what to look at next
–  More: learning statistics and comments attached to content

0.8
It’s  all  for  a  good  cause!
•  Please  try  to  make  the  most  of  it
–  Watch  the  videos,  do  the  quizzes,  come  prepared
–  Actively  participate  in  the  class,  work  on  exercises,  
answer  questions
–  Attend the examples classes where exam style questions will be answered
–  Provide  feedback:  What  works,  what  does  not

•  If it all gets too much, play the SigSys game!


–  http://www.sigsystext.com/

0.9
Class content: Dynamical systems
•  Describe evolution of variables over time
  –  Input variables (SS1)
  –  Output variables
  –  State variables (SS2)

•  Control (RS1):
  –  Steer systems using inputs
  –  Feedback

0.10
From signals to systems
[Block diagram: input signals → System → output signals; a state space realization with matrices A, B, C, D and state x(t); two interconnected systems, System 1 and System 2]
•  SS1: A system maps input signals to output signals
•  SS2: Where does the input-output map come from?
•  RS1: What happens when we connect system inputs and outputs?
0.11
Dynamical  systems
•  Describe  evolution  of  variables  over  time
–  Input  variables
–  Output  variables
–  State  variables
•  What  is  a  “state”?
•  What  values  can  it  take?
•  What  is  “time”?
•  What  values  can  it  take?
•  What  is  “evolution”?
•  How  can  evolution  be  described?

0.12
Discrete vs continuous
•  Discrete → finite (or countable) values
  –  {0, 1, 2, 3, …}
  –  {a, b, c, d}
  –  {ON, OFF}, {hot, warm, cool, cold}, …
•  Continuous → real values
  –  x ∈ R, x ∈ R^n
  –  x ∈ [−1,1] ⇒ x ∈ R, −1 ≤ x ≤ 1
  –  {(x1, x2) ∈ R^2 | x1² + x2² ≤ 1}
•  Hybrid → part discrete and part continuous
  –  Airplane + flight management system
  –  Thermostat + room temperature

0.13
System classification (examples)
(rows: State, columns: Time)
•  Discrete state
  –  Discrete time: finite state machines, Turing machines
  –  Continuous time: queuing systems
•  Continuous state
  –  Discrete time: z-transform, x_{k+1} = A x_k + B u_k, y_k = C x_k + D u_k
  –  Continuous time: Laplace transform, ẋ(t) = A x(t) + B u(t), y(t) = C x(t) + D u(t)
  –  Hybrid: impulse differential inclusions
•  Hybrid state
  –  Discrete time: Mixed Logic-Dynamical systems
  –  Continuous time: switching diffusions
  –  Hybrid: hybrid automata

0.14
In  this  course
•  We  will  concentrate  mostly  on
–  Continuous  state
–  Continuous  time
–  Linear  systems
•  We  will  also  establish  a  connection  to
–  Continuous  state
–  Discrete  time
–  Linear  systems
and  to
–  Continuous  state
–  Continuous  time
–  Nonlinear  systems
•  Start  with  examples  from  many  classes  of  systems

0.15
Course  outline:  Introductory  material
1.  Modeling
–  Mechanical  and  electrical  systems
–  Discrete  and  continuous  time  systems
–  Discrete  and  continuous  state  systems
–  Linear  and  nonlinear  (continuous  state)  systems
2.  Revision:  ODE  and  linear  algebra
–  ODE  =  Ordinary  Differential  equations
–  Existence  and  uniqueness  of  solutions
–  Range  and  null  spaces  of  matrices
–  Eigenvalues,  eigenvectors,  …  

0.16
Course  outline:  Continuous  time  LTI
3.  Time  domain
–  LTI  =  Linear  Time  Invariant
–  State  space  equations
–  Time  domain  solution  of  state  space  equations
4.  Controllability,  observability,  energy
5.  “Frequency  domain”
–  Revision  of  Laplace  transforms
–  Laplace  solution  of  state  space  equations
–  Stability
–  Bode  and  Nyquist  plots

0.17
Course  outline:  Discrete  time  LTI  and  
advanced  topics
6.  Discrete  time  LTI  systems
–  Sampled  data  systems
–  Linear  difference  equations
–  Controllability  and  observability
–  z-­‐‑transform
–  Simulation,  Euler  method  and  its  stability
7.  Nonlinear  systems
–  Differences  from  linear  systems
–  Multiple  equilibria,  limit  cycles,  chaos
–  Linearization
–  Stability
–  Examples
0.18
Notation
•  Z denotes the integers. This is a discrete set: Z = {…, −2, −1, 0, 1, 2, …}
•  N denotes the natural numbers: N = {0, 1, 2, …}
•  C denotes the complex numbers: s = s1 + j s2 ∈ C  (∈ means “belongs to”)
   |s| = √(s1² + s2²),  Re[s] = s1,  Im[s] = s2,  ∠s = tan⁻¹(s2/s1)
   s = |s| e^{j∠s},  e^{jθ} = cos(θ) + j sin(θ)
[Figure: complex plane showing s, |s|, ∠s, Re[s] = s1 and Im[s] = s2]

0.19
Notation
•  R^n denotes Euclidean space of dimension n. It is a finite dimensional vector space (sometimes called a linear space). Special cases:
  –  n = 1, real line, x ∈ R (drop the superscript)
  –  n = 2, real plane, x = (x1, x2) = [x1; x2] ∈ R^2
•  For general n, write x as an ordered list of numbers, or vector:
   x = (x1, x2, …, xn) = [x1; x2; …; xn] ∈ R^n
0.20
Notation
•  R^{n×m} denotes the matrices with n rows and m columns, whose elements are real:
   A = [a11 a12 … a1m; a21 a22 … a2m; …; an1 an2 … anm] = [aij] ∈ R^{n×m}
•  Also a vector space; can define “length”, …
•  Special cases: R^n = R^{n×1}, R = R^{1×1}
•  Assume familiarity with basic matrix operations (addition, multiplication, eigenvalues)

0.21
Notation
•  Definition of sets: {x ∈ R^2 | x1² + x2² ≤ 1}
   [Figure: the unit disc in the (x1, x2) plane]
•  Special case: intervals, for a, b ∈ R, a < b
   [a,b] = {x ∈ R | a ≤ x ≤ b}      (a,b) = {x ∈ R | a < x < b}
   [a,b) = {x ∈ R | a ≤ x < b}      (a,b] = {x ∈ R | a < x ≤ b}
   (a,∞) = {x ∈ R | a < x}          (−∞,b] = {x ∈ R | x ≤ b}
   R+ = [0,∞)
•  ∀ means “for all”
•  ∃ means “there exists”

Exercise: What do the sets {x ∈ R^2 | x1 = 0 or x2 = 0}, {x ∈ R^2 | x1 ≥ x2} and {y ∈ R | ∃x ∈ R, y = x²} look like?
0.22
Notation
•  Continuous time  →  t ∈ R+
•  Discrete time  →  k ∈ N
•  Continuous state  →  x ∈ R^n
•  Continuous input  →  u ∈ R^m
•  Continuous output  →  y ∈ R^p

•  Discrete state  →  q ∈ Q
   e.g. thermostat  →  q ∈ Q = {ON, OFF}

•  Lower case letters: vectors/numbers x, u, t, k
•  Upper case letters: matrices A, B, C, …
0.23
Notation
•  f(·): X → Y denotes a function returning for each x ∈ X an f(x) ∈ Y, written x ↦ f(x)
•  Example: discrete time input signal
   u(·): N → R^m,  k ↦ u(k) = u_k
   (discrete time; input at time k; shorthand notation u_k)
•  Example: continuous time state signal
   x(·): R+ → R^n,  t ↦ x(t)
   (continuous time; state at time t)
0.24
Linear functions: Euclidean space
•  Special case: linear function f(·): R^n → R^m
•  For any x1, x2 ∈ R^n, a1, a2 ∈ R:
   f(a1 x1 + a2 x2) = a1 f(x1) + a2 f(x2)
•  Multiplication by a matrix A ∈ R^{m×n}, f(x) = Ax:
   f(a1 x1 + a2 x2) = A(a1 x1 + a2 x2) = a1 A x1 + a2 A x2 = a1 f(x1) + a2 f(x2)
•  All linear functions on finite dimensional vector spaces can be written in this way

Exercise: Show that if f(·): R^n → R^m, g(·): R^m → R^p are linear functions, then their composition g(f(·)): R^n → R^p is also linear. If f and g are defined in terms of matrices A ∈ R^{m×n}, B ∈ R^{p×m}, what does this composition correspond to?
0.25
Linear functions: Function spaces
•  Linear functions are defined more generally for vector (linear) spaces
•  For example, for u1(·), u2(·): R → R, a1, a2 ∈ R, define (a1 u1 + a2 u2)(·): R → R by
   (a1 u1 + a2 u2)(t) = a1 u1(t) + a2 u2(t)  ∀t ∈ R
•  Likewise, for U1(·), U2(·): C → C, a1, a2 ∈ C, define (a1 U1 + a2 U2)(·): C → C by
   (a1 U1 + a2 U2)(s) = a1 U1(s) + a2 U2(s)  ∀s ∈ C

0.26
Laplace transform and convolution
•  Given u(·): R → R
•  Laplace transform U(·): C → C
   U(s) = ∫_{−∞}^{∞} u(t) e^{−st} dt
•  Convolution of u with a fixed function h(·): R → R
   (u ∗ h)(·): R → R,   (u ∗ h)(t) = ∫_{−∞}^{∞} u(τ) h(t − τ) dτ

Exercise: Show that the Laplace transform and the convolution are linear functions of u(·)

0.27
Subtle points
•  In SSII we are interested in the system response for positive times t ∈ R+
•  Implicitly assume all signals = 0 for t < 0
•  Hence the Laplace transform simplifies to
   U(s) = ∫_{−∞}^{∞} u(t) e^{−st} dt = ∫_{0}^{∞} u(t) e^{−st} dt
   (since u(t) = 0 for t < 0)
•  And the convolution simplifies to
   (u ∗ h)(t) = ∫_{−∞}^{∞} u(τ) h(t − τ) dτ = ∫_{0}^{t} u(τ) h(t − τ) dτ
   (since u(τ) = 0 for τ < 0 and h(t − τ) = 0 for τ > t)
0.28
Signal- und Systemtheorie II
D-ITET, Semester 4

Notes 1: Modeling

John Lygeros

Automatic Control Laboratory, ETH Zürich

www.control.ethz.ch
Series  of  examples

1.  Pendulum:  Continuous  time,  continuous  state,  
nonlinear  autonomous  system

2.  RLC  circuit:  Continuous  time,  continuous  state  


linear  system  with  inputs
3.  Amplifier  circuit:  Continuous  time,  continuous  state  
linear  system  with  inputs  and  outputs

4.  Population  dynamics:  Discrete  time,  continuous  


state  nonlinear  system
5.  Manufacturing  machine:  Discrete  time,  discrete  
state  system
6.  Thermostat:  Continuous  time,  hybrid  state  system

1.2
Example 1: Pendulum
•  A continuous time, continuous state, autonomous, nonlinear system
•  Mass m hanging from a weightless string of length l
•  String makes angle θ with the downward vertical
•  Friction with dissipation constant d
[Figure: pendulum — string of length l at angle θ from the vertical, mass m, gravity force mg]
•  Determine how the pendulum is going to move
•  i.e. assuming we know where the pendulum is at “time” t = 0 (θ0) and how fast it is moving (θ̇0), determine where it will be at time t (θ(t))
1.3
Pendulum: Equations of motion
•  Evolution of θ governed by Newton’s law:
   m l θ̈(t) = −d l θ̇(t) − m g sin θ(t)
   (angular acceleration, friction force, gravity force)
•  Need to solve Newton’s differential equation
•  i.e. find a function θ(·): R+ → R (time ↦ angle) such that
   θ(0) = θ0,  θ̇(0) = θ̇0
   ∀t ∈ R+,  m l θ̈(t) = −d l θ̇(t) − m g sin[θ(t)]

Exercise: Derive the differential equation from Newton’s laws of motion
1.4
Pendulum:  Existence  and  uniqueness
•  Such  a  function  is  known  as  a  “solution”  or  a  
“trajectory”  of  the  system
1. Does a trajectory exist for all θ0, θ̇0?
2. Is there a unique trajectory for each θ0, θ̇0?
3. Can  we  find  this  trajectory?
•  Clearly  important  questions  for  differential  
equations  used  to  model  physical  systems
•  In  general  answer  to  questions  may  be  “no”
•  In  fact,  answer  to  question  3  usually  is  “no”!
•  However,  we  can  usually  approximate  the  
trajectory  by  computer  simulation

1.5
Pendulum: MATLAB simulation
l = 1, m = 1, d = 1, g = 9.8, θ0 = 0.75, θ̇0 = 0
[Figure: simulated trajectories θ(t) and θ̇(t)]

function [xprime] = pendulum(t,x)
% pendulum vector field: x(1) = theta, x(2) = theta_dot
xprime = [0; 0];
l = 1;
m = 1;
d = 1;
g = 9.8;
xprime(1) = x(2);
xprime(2) = -sin(x(1))*g/l - x(2)*d/m;

>> x = [0.75 0];
>> [T,X] = ode45('pendulum', [0 10], x');
>> plot(T,X);
>> grid;

1.6
Pendulum: State space description
•  Convenient to write the ODE more compactly as
   ẋ(t) = f(x(t)),  x(t) ∈ R^n
•  For the pendulum, let
   x(t) = [x1(t); x2(t)] ∈ R^2  with  x1(t) = θ(t), x2(t) = θ̇(t)
•  Then
   ẋ(t) = [ x2(t) ;  −(d/m) x2(t) − (g/l) sin x1(t) ] = f(x(t))

Exercise: A different f(x(t)) would be obtained if x1(t) and x2(t) are selected differently. Derive f(x(t)) for x1(t) = θ³(t) + θ̇(t), x2(t) = θ̇(t)
1.7
Pendulum: State space description
•  This first order ODE for x(t) ∈ R^2 describes exactly the same “dynamics” as the second order ODE for θ(t) ∈ R
•  The vector x(t) ∈ R^2 is called the state of the system
•  The function f(·): R^2 → R^2 is called the vector field
•  For the pendulum f is a nonlinear function of x
•  Solving Newton’s equation is equivalent to finding a function x(·): R+ → R^2 (time ↦ angle & angular velocity) such that
   x(0) = [θ0; θ̇0]
   ∀t ∈ R+,  ẋ(t) = f(x(t))
1.8
Pendulum: Vector field & phase plane
[Figure: left — vector field f(x) in the (x1, x2) plane; right — trajectory x2(t) vs. x1(t), starting at t = 0]
1.9
Example 2: RLC circuit
•  Continuous time, continuous state, linear system
•  Input voltage v1(t) (not autonomous)
•  Determine evolution of voltages and currents
[Figure: series RLC circuit — source v1(t), resistor voltage vR(t), inductor voltage vL(t), inductor current iL(t), capacitor voltage vC(t)]

1.10
RLC circuit: Equations of “motion”
•  From Kirchhoff’s laws + element equations, e.g.
   C dvC(t)/dt = iL(t)
   L diL(t)/dt = vL(t)
   vR(t) = R iL(t)
   vL(t) = v1(t) − vR(t) − vC(t)
•  Combining these gives the second order ODE
   d²vC(t)/dt² + (R/L) dvC(t)/dt + (1/LC) vC(t) = (1/LC) v1(t)
•  The solution of the ODE gives vC(t)
•  All other voltages and currents can be computed from vC(t)
1.11
RLC circuit: MATLAB simulation
R = 10, L = 1, C = 0.01, x0 = 0
[Figure: simulated step response — the circuit acts as a low pass filter]

Exercise: Modify the MATLAB code for the pendulum to simulate the RLC circuit and generate such plots.

1.12
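A minimal MATLAB sketch of such a simulation, assuming the state choice x1 = vC, x2 = iL introduced on the following slide and a 1 V step input (the function name rlc and the simulation horizon are arbitrary choices):

function xprime = rlc(t,x)
% RLC circuit state: x(1) = vC, x(2) = iL; 1 V step input
R = 10; L = 1; C = 0.01;
u = 1;                               % step input v1(t)
xprime = [ x(2)/C;
           (u - R*x(2) - x(1))/L ];

>> [T,X] = ode45(@rlc, [0 1], [0; 0]);
>> plot(T,X); grid;                   % vC(t) and iL(t)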
RLC circuit: State space description
•  Try to write a first order vector ODE ẋ(t) = f(x(t)), x(t) ∈ R^n
•  Based on our experience with the pendulum
  –  The second order ODE for vC(t) suggests x(t) ∈ R^2
  –  x(t) has something to do with energy
     •  Potential (θ) and kinetic (θ̇) in the pendulum
     •  Stored in the capacitor (vC) and the inductor (iL) in the circuit
•  Try x(t) = [x1(t); x2(t)] ∈ R^2,  x1(t) = vC(t), x2(t) = iL(t)
   (in circuits this usually works: voltages across capacitors and currents through inductors can be selected as the states)
•  Input voltage: external source of energy, u(t) = v1(t)

1.13
RLC circuit: State space description
•  Relate state x(t) and input u(t):
   C dvC(t)/dt = iL(t)   ⇒   ẋ1(t) = (1/C) x2(t)
   L diL(t)/dt = v1(t) − vR(t) − vC(t)   ⇒   ẋ2(t) = (1/L) u(t) − (R/L) x2(t) − (1/L) x1(t)
•  In matrix form
   ẋ(t) = [ 0   1/C ;  −1/L   −R/L ] x(t) + [ 0 ;  1/L ] u(t)

1.14
RLC circuit: State space description
•  We have written an ODE of the form
   ẋ(t) = A x(t) + B u(t) = f(x(t), u(t))
•  Similarities to the pendulum
  –  2nd order ODE → two states
  –  States related to the energy stored in the system
•  Differences from the pendulum
  –  External source of energy → input u(t) → system no longer “autonomous”
  –  Function f(x,u) linear in x and u → dynamics described by a linear ODE

Exercise: What are the matrices A and B?

1.15
Example 3: Amplifier circuit
•  Continuous time, continuous state linear system
•  Input voltage v1(t)
•  Output voltage v0(t)
[Figure: OpAmp amplifier circuit — input branch with resistor R1 and capacitor C1 (voltage vC1(t), current i1(t)); feedback branch with resistor R0 (current iR0(t)) and capacitor C0 (voltage vC0(t), current iC0(t)); OpAmp input vin, iin; input voltage v1(t), output voltage v0(t)]

1.16
Reminder: Operational amplifier (OpAmp)
[Figure: OpAmp model — input current iin, large input resistance Rin, small output resistance Rout, output current iout, large gain µ; input voltage vin, output voltage vout = µ vin]
External voltage source (not shown) provides energy for the gain
1.17
Reminder:  Ideal  OpAmp
•  Assume
  –  Rin ≈ ∞  ⇒  iin ≈ 0
  –  Rout ≈ 0  ⇒  iout independent of vout
  –  µ ≈ ∞  ⇒  vin ≈ 0 if vout finite
•  “Virtual  earth  assumption”
•  Makes  circuit  analysis  much  easier
•  Note  that  
–  Input  power  iinvin=0
–  Output  power  ioutvout  is  arbitrary
•  Necessary  energy  comes  from  external  voltage  source  
(not  shown!)

1.18
Amplifier circuit: Equations of motion
•  Assuming the OpAmp is ideal:
   vin ≈ 0  ⇒  i1(t) = (v1(t) − vC1(t)) / R1
   C1 dvC1(t)/dt = i1(t)   ⇒   dvC1(t)/dt = −vC1(t)/(R1C1) + v1(t)/(R1C1)
   iin ≈ 0  ⇒  i1(t) = iC0(t) + iR0(t),   iR0(t) = vC0(t)/R0
   C0 dvC0(t)/dt = iC0(t)   ⇒   dvC0(t)/dt = −vC0(t)/(R0C0) − vC1(t)/(R1C0) + v1(t)/(R1C0)
   vin ≈ 0  ⇒  v0(t) = −vC0(t)

1.19
Amplifier circuit: State space description
•  First order ODE in vector variables
•  From our experience so far we would expect
  –  Two state variables
  –  Voltages of capacitors, x1(t) = vC1(t), x2(t) = vC0(t)
  –  One input variable, u(t) = v1(t)
  –  One output variable, y(t) = v0(t)
•  Write equations that relate input, states and output:
   dx(t)/dt = [ −1/(R1C1)   0 ;  −1/(R1C0)   −1/(R0C0) ] x(t) + [ 1/(R1C1) ;  1/(R1C0) ] u(t)
   y(t) = [ 0  −1 ] x(t)
1.20
Amplifier circuit: State space description
•  We have written the system in the form
   ẋ(t) = A x(t) + B u(t) = f(x(t), u(t))
   y(t) = C x(t) + D u(t) = h(x(t), u(t))
•  f and h are linear functions of the state and inputs

Exercise: What are the matrices A, B, C and D? What are their dimensions?
Exercise: Modify the MATLAB code given for the pendulum to simulate the amplifier circuit

1.21
Amplifier circuit: Simulation
[Figure: simulated input and output voltages]

Exercise: Why does the output settle to zero even though the input is non-zero?
1.22
Amplifier circuit: Energy
•  Energy stored in the system
   E(t) = ½ C1 vC1²(t) + ½ C0 vC0²(t) = ½ x(t)ᵀ [ C1  0 ;  0  C0 ] x(t)
•  Quadratic function of the state
   E(t) = ½ x(t)ᵀ Q x(t)

Exercise: What is the matrix Q in this case?
Exercise: Derive the energy equations for the pendulum and the RLC circuit. Can you find a matrix Q in both cases?
1.23
Amplifier circuit: Power
•  Power: instantaneous energy change
   P(t) = d/dt E(t) = ½ ẋ(t)ᵀ Q x(t) + ½ x(t)ᵀ Q ẋ(t)
        = ½ (x(t)ᵀAᵀ + u(t)ᵀBᵀ) Q x(t) + ½ x(t)ᵀ Q (A x(t) + B u(t))
        = ½ x(t)ᵀ (AᵀQ + QA) x(t) + ½ (u(t)ᵀBᵀQ x(t) + x(t)ᵀQB u(t))
•  Quadratic in state and input
•  If there is no input (u(t) = 0)
   P(t) = ½ x(t)ᵀ (AᵀQ + QA) x(t)
1.24
Amplifier circuit: Power (u(t)=0)
   P(t) = ½ x(t)ᵀ ( [ −1/(R1C1)  −1/(R1C0) ;  0  −1/(R0C0) ] [ C1  0 ;  0  C0 ] + [ C1  0 ;  0  C0 ] [ −1/(R1C1)  0 ;  −1/(R1C0)  −1/(R0C0) ] ) x(t)
        = ½ x(t)ᵀ [ −2/R1  −1/R1 ;  −1/R1  −2/R0 ] x(t)
   ⇒ P(t) = −x1(t)²/R1 − x1(t)x2(t)/R1 − x2(t)²/R0

Exercise: Derive this equation directly by differentiating the energy of the circuit, E(t) = ½ C1 vC1²(t) + ½ C0 vC0²(t)
Exercise: Repeat for the RLC circuit and pendulum

1.25
Population dynamics
•  A discrete time, continuous state system
•  Experiment:
  –  Closed jar containing a number (N) of fruit flies
  –  Constant food supply
•  Question: How does the fly population evolve?
•  Number of flies limited by available food, epidemics
  –  Few flies → abundance of space/food → more flies
  –  Many flies → competition for space/food → fewer flies
•  Maximum number the “ecosystem” can support: NMAX
•  State: normalized population
   x = N / NMAX ∈ [0,1]
1.26
Population dynamics: State space model
•  Track x from generation to generation: xk is the population at generation k
•  How does the population at generation k+1 depend on xk?
•  Classic model: the logistic map
   x_{k+1} = r x_k (1 − x_k) = f(x_k)
[Figure: plot of f(x) = r x (1 − x) on [0,1]]

Exercise: Is the function f(x) linear or nonlinear?
Exercise: Show that if r ∈ [0,4] and x0 ∈ [0,1] then xk ∈ [0,1] for all k = 0, 1, 2, …

1.27
Population dynamics: Solution
•  r represents the “food” supply
  –  Large r means a lot of food is provided
  –  Small r means little food is provided
•  Shape of f(x) reflects population trends
  –  Small population now → small population next generation (not enough individuals to breed)
  –  Large population now → small population next generation (food/living space shortage, epidemics, etc.)
•  How does the population change in time?
•  This depends a lot on r
  1.  If r ∈ [0,1) then xk decays to 0 (i.e. all flies die)
  2.  If r ∈ [1,3) then xk tends to a steady state value (i.e. the fly population stabilizes)
  3.  If r ∈ [3, 1+√6) then xk tends to a 2-periodic solution (i.e. the population alternates between two values)
  4.  Eventually chaotic behavior!
1.28
Population dynamics: Simulation
[Figure: simulated population trajectories xk over the generations, for different values of r]
1.29
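A minimal MATLAB sketch of such a simulation; the values r = 3.2 and x0 = 0.5 are arbitrary illustrative choices:

r = 3.2;                      % try r in [0,1), [1,3), [3,1+sqrt(6)) and near 4
K = 50;                       % number of generations
x = zeros(1,K); x(1) = 0.5;   % normalized initial population
for k = 1:K-1
    x(k+1) = r*x(k)*(1 - x(k));   % logistic map
end
stairs(0:K-1, x); grid; xlabel('generation k'); ylabel('x_k');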
Manufacturing  system
•  A  discrete  time,  discrete  state  system
•  Model  of  a  machine  in  a  manufacturing  shop
•  Machine  can  be  in  three  states
–  Idle  (I)
–  Working  (W)
–  Broken  (D)
•  State  changes  when  certain  “events”  happen
–  A part arrives and starts getting processed (p)
–  The  processing  is  completed  and  the  part  moves  on  to  the  
next  machine  (c)
–  The  machine  fails  (f)
–  The  machine  is  repaired  (r)
•  Finite  number  of  states  and  inputs:  
–  Finite  State  machine  or  
–  Finite  Automaton

1.30
Manufacturing system: State space model
•  Possible states of the machine: q ∈ Q = {I, W, D}
•  Possible inputs (events): σ ∈ Σ = {p, c, f, r}
•  Not all events are possible in all states, e.g.
  –  A part cannot start getting processed (σ=p) while the machine is broken (q=D)
  –  The machine can only be repaired (σ=r) when broken (q=D)
•  Transition function δ: Q × Σ → Q
•  Write as a discrete time system: q_{k+1} = δ(qk, σk)

Exercise: Is δ linear or nonlinear? Does the question even make sense?

1.31
Manufacturing system: Automaton
   δ(I, p) = W
   δ(W, c) = I
   δ(W, f) = D
   δ(D, r) = I
•  All other combinations not allowed
•  Assume we start at I (initial state)
•  Easier to represent by a graph (a code sketch of δ follows below)
[Figure: automaton graph with states I, W, D; transitions I→W on p, W→I on c, W→D on f, D→I on r]

Exercise: Q has n elements and Σ has m elements; how many lines are needed (at most) to define δ?
Exercise: If the graph has e arrows, how many lines are needed to define δ?
1.32
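A minimal MATLAB sketch of the transition function δ; representing states and events as characters is an implementation choice, and rejecting undefined transitions with an error is an assumption about how disallowed events should be handled:

function qnext = delta(q, sigma)
% Machine automaton: states 'I', 'W', 'D'; events 'p', 'c', 'f', 'r'
switch [q sigma]
    case 'Ip', qnext = 'W';
    case 'Wc', qnext = 'I';
    case 'Wf', qnext = 'D';
    case 'Dr', qnext = 'I';
    otherwise, error('event %s not allowed in state %s', sigma, q);
end

>> q = 'I'; for sigma = 'pcpfr', q = delta(q, sigma); end   % the sequence pcpfr is accepted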
Manufacturing system: Solution
•  Assume initially q0 = I. What are the sequences of events the machine can experience?
•  Some sequences are possible while others are not
  –  pcp → possible
  –  ppc → impossible
•  The set of all acceptable sequences is called the language of the automaton
•  The following are OK
  –  An arbitrary number of pc, denoted by (pc)*
  –  An arbitrary number of pfr, denoted by (pfr)*
  –  Possibly followed by a p or pf
In regular-expression form: (p·c + p·f·r)* · (1 + p + p·f), where * denotes an arbitrary number of repetitions, + denotes OR, and 1 denotes “nothing”.
1.33
Thermostat
•  A  continuous  time,  hybrid  system
•  Thermostat  is  trying  to  keep  the  temperature  of  a  
room  at  around  20  degrees
–  Turn  heater  on  if  temperature  ≤  19  degrees.  
–  Turn  heater  off  if  temperature  ≥  21  degrees.  
•  Due  to  uncertainty  in  the  radiator  dynamics,  the  
temperature  may  fall  further,  to  18  degrees,  or  rise  
further,  to  22  degrees
•  Both  a  continuous  and  a  discrete  state
–  Discrete  state:  Heater q(t) ∈Q = {ON ,OFF}
–  Continuous  state:  Room  temperature x(t) ∈R
•  Evolution  in  continuous  time
•  No  external  input  (autonomous  system)
1.34
Thermostat: State space model
•  Different differential equations for x, depending on ON or OFF
  –  Heater OFF: temperature decreases exponentially toward 0
     ẋ(t) = −α x(t)
  –  Heater ON: temperature increases exponentially towards 30
     ẋ(t) = −α (x(t) − 30)
•  The heater changes from ON to OFF and back depending on x(t)
•  Natural to describe by a mixture of differential equation and graph notation

Exercise: Solve the differential equations to verify the exponential increase/decrease.

1.35
Thermostat: Hybrid automaton
[Figure: hybrid automaton with two discrete states, OFF and ON]
   Initial state: (OFF, 20), i.e. x(0) = 20
   OFF:  ẋ(t) = −a x(t),  can stay OFF as long as x(t) ≥ 18
   ON:   ẋ(t) = −a (x(t) − 30),  can stay ON as long as x(t) ≤ 22
   The transition OFF → ON can be taken provided that x(t) ≤ 19
   The transition ON → OFF can be taken provided that x(t) ≥ 21
   While OFF, x(t) changes according to ẋ(t) = −a x(t)
1.36
Thermostat: Solutions
[Figure: sample solution — discrete state q(t) switching between ON and OFF, and continuous state x(t) (the room temperature) over time]

1.37
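A minimal MATLAB sketch of a simulation producing such plots, using forward Euler integration; the value a = 0.1, the time step and the convention that the heater toggles exactly at 19 and 21 degrees are illustrative assumptions:

a = 0.1; dt = 0.01; T = 0:dt:100;
x = zeros(size(T)); q = zeros(size(T));     % q: 0 = OFF, 1 = ON
x(1) = 20; q(1) = 0;                        % initial hybrid state (OFF, 20)
for k = 1:length(T)-1
    if q(k) == 0
        xdot = -a*x(k);                     % heater OFF
    else
        xdot = -a*(x(k) - 30);              % heater ON
    end
    x(k+1) = x(k) + dt*xdot;                % Euler step for the temperature
    q(k+1) = q(k);
    if q(k) == 0 && x(k+1) <= 19, q(k+1) = 1; end
    if q(k) == 1 && x(k+1) >= 21, q(k+1) = 0; end
end
subplot(2,1,1); stairs(T,q); ylabel('q(t)');
subplot(2,1,2); plot(T,x);   ylabel('x(t)'); xlabel('t');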
Continuous modeling: Generic steps
1.  Identify input variables
  –  Usually obvious
  –  Quantities that come from the outside
  –  Say m such input variables
  –  Stack them in vector form, denote by u(t) ∈ R^m
2.  Identify output variables
  –  Usually obvious
  –  Quantities that can be measured
  –  Say p such quantities
  –  Stack them in vector form, denote by y(t) ∈ R^p
1.38
Continuous  modeling:  Generic  steps
3.  Select  state  variables
–  Need  to  encapsulate  the  past
–  Need  (together  with  inputs)  to  determine  future
–  For  physical  systems  often  related  to  energy  storage
–  For  mechanical  systems  can  usually  select  positions  
(q(t))  and  velocities  (v(t))
–  For electrical circuits can usually select capacitor voltages (vC(t)) and inductor currents (iL(t))
–  Other  choices  possible,  may  lead  to  simpler  models
–  Say  n  such  variables
–  Stack  them  in  vector  form,  denote  by   x(t) ∈R n

1.39
Continuous modeling: Generic steps
4.  Take derivatives of states
  –  Try to obtain n equations with the derivative of one state on the left hand side and a function of the states and inputs on the right hand side
  –  For mechanical systems
     •  Position derivatives are easy: q̇(t) = v(t)
     •  Velocity derivatives (= accelerations) from Newton’s law
  –  For electrical circuits
     •  Current/voltage relations: C dvC(t)/dt = iC(t),  L diL(t)/dt = vL(t)
     •  Relate to each other by Kirchhoff’s laws
  –  Result: vector differential equation ẋ(t) = f(x(t), u(t))

1.40
Continuous  modeling:  Generic  steps
5.  Write  output  variables  as  a  function  of  state  
and  input  variables
–  Usually  easy
–  Result:  Vector  algebraic  equation y(t) = h(x(t),u(t))
6.  Determine  whether  the  system  is  linear,  etc.
–  Are  the  functions  f  and  h  linear  or  not?

Disclaimer:
–  Generic  steps  seem  easy,  but  require  creativity!
–  Mathematical  models  never  the  same  as  reality
–  With  any  luck,  close  enough  to  be  useful
1.41
Signal- und Systemtheorie II
D-ITET, Semester 4

Notes 2: Revision of ODE and linear algebra

John Lygeros

Automatic Control Laboratory, ETH Zürich

www.control.ethz.ch
State  space  models
•  For  the  time  being  continuous  time,  continuous  state  
models
–  Nonlinear
–  Linear
•  State  equations  are  ordinary  differential  equations
–  Reminder  of  some  tools  to  deal  with  these
•  Continuous  state  models  have  extra  structure
–  States,  inputs,  and  outputs  take  values  in  vector  spaces
•  To  exploit  structure  need  tools  from  linear  algebra
•  Discrete  time  continuous  state  models  very  similar
–  State  equations  are  difference  equations,  else  the  same
•  Discrete  state  and  hybrid  models  somewhat  different

2.2
Assumed  to  be  known
•  Matrix  product,  compatible  dimensions
–  Associative: (AB)C = A(BC)
–  Distributive  with  respect  to  addition: A(B + C) = AB + AC
–  Non-­‐‑commutative: AB ≠ BA in general
•  Transpose  of  a  matrix
–  (AB)ᵀ = BᵀAᵀ
•  For  square  matrices
–  Identity  matrix AI = IA = A
–  In  every  dimension  there  exists  a  unique  identity  matrix
–  Inverse matrix: A⁻¹A = AA⁻¹ = I
–  May  not  always  exist
–  When  it  does  it  is  unique
–  Computation  of  the  determinant  and  its  basic  properties

2.3
The 2-norm
Definition: The 2-norm is a function ‖·‖: R^n → R that to each x ∈ R^n assigns the real number
   ‖x‖ = √( Σ_{i=1}^{n} xi² )

Fact 2.1: For all x, y ∈ R^n, a ∈ R
1.  ‖x‖ ≥ 0, and ‖x‖ = 0 if & only if x = 0
2.  ‖ax‖ = |a| · ‖x‖
3.  ‖x + y‖ ≤ ‖x‖ + ‖y‖

Exercise: Show that ‖x‖² = xᵀx
Exercise: Prove 1 and 2. Is the 2-norm a linear function?

•  The 2-norm is a measure of “size” or “length”
•  The distance between x, y ∈ R^n is ‖x − y‖
•  The set of points that are closer than r > 0 to x ∈ R^n is {y ∈ R^n | ‖x − y‖ < r}
2.4
Inner product
Definition: The inner product is a function ⟨·,·⟩: R^n × R^n → R that takes two vectors x, y ∈ R^n and returns the real number
   ⟨x, y⟩ = Σ_{i=1}^{n} xi yi = xᵀy

Fact 2.2: For all x, y, z ∈ R^n, a, b ∈ R
1.  ⟨x, y⟩ = ⟨y, x⟩
2.  a⟨x, y⟩ + b⟨z, y⟩ = ⟨ax + bz, y⟩
3.  ⟨x, x⟩ = ‖x‖²

Exercise: Prove these. For fixed y, is the function ⟨y,·⟩ linear?

Definition: Two vectors x, y ∈ R^n are called orthogonal if ⟨x, y⟩ = 0. A set of vectors x1, x2, …, xm ∈ R^n is called orthonormal if
   ⟨xi, xj⟩ = 0 if i ≠ j, and 1 if i = j
•  Orthogonal: meet at right angles
•  Orthonormal: pairwise orthogonal and of unit length

Exercise: Are the vectors [0; 2], [1; 0] orthogonal? Orthonormal?
2.5
Linear independence
Definition: A set of vectors {x1, x2, …, xm} ⊂ R^n is called linearly independent if for a1, a2, …, am ∈ R
   a1 x1 + a2 x2 + … + am xm = 0 ⇔ a1 = a2 = … = am = 0
Otherwise they are called linearly dependent.

•  Linearly dependent if and only if at least one is a linear combination of the rest. E.g. if a1 ≠ 0,
   a1 x1 + a2 x2 + … + am xm = 0 ⇒ x1 = −(a2/a1) x2 − … − (am/a1) xm

Fact 2.3: There exists a set with n linearly independent vectors in R^n, but any set with more than n vectors is linearly dependent.
Exercise: What is a set of n linearly independent vectors of R^n?
2.6
Subspaces
Definition: A set of vectors S ⊆ R^n is called a subspace of R^n if for all x, y ∈ S and a, b ∈ R, we have that ax + by ∈ S.

•  Note that the set S is generally an infinite set
•  Examples of subspaces of R^n
  –  S = R^n and S = {0}
  –  {x ∈ R^n | x1 = 2 x2}
  –  {x ∈ R^n | x1 = 0}
•  Not subspaces
  –  {x ∈ R^n | x1 = 1}
  –  {x ∈ R^n | x1 = 0 or x2 = 0}

Exercise: Draw these sets for n = 2. Prove that they are/are not subspaces.

2.7
Basis of a subspace
Definition: The span of {x1, x2, …, xm} ⊂ R^n is the set of all linear combinations of these vectors.
Exercise: Show that the span is a subspace
•  In fact, the span is the smallest subspace containing {x1, x2, …, xm}

Definition: A set of vectors {x1, x2, …, xm} ⊂ R^n is called a basis for a subspace S ⊆ R^n if
1.  {x1, x2, …, xm} are linearly independent
2.  S = span{x1, x2, …, xm}
In this case, m is called the dimension of S.

•  All subspaces of R^n have bases. The number of vectors, m, in all bases of a given subspace is the same. By Fact 2.3, n ≥ m
•  The basis of a subspace is not unique
•  Different bases are related through a “change of coordinates”
2.8
Range space of a matrix
Definition: The range space of a matrix A ∈ R^{n×m} is the set
   range(A) = {y ∈ R^n | ∃ x ∈ R^m, y = Ax}

Fact 2.4: range(A) is a subspace of R^n.     Exercise: Prove Fact 2.4

Definition: The rank of a matrix A ∈ R^{n×m} is the dimension of range(A).

Writing A = [a1 a2 … am] in terms of its columns ai = [a1i; …; ani]:
Fact 2.5: range(A) = span{a1, a2, …, am}; rank(A) = number of linearly independent columns a1, …, am of A.     Exercise: Prove Fact 2.5
2.9
Null space of a matrix
Definition: The null space of a matrix A ∈ R^{n×m} is the set
   null(A) = {x ∈ R^m | Ax = 0}

Fact 2.6: null(A) is a subspace of R^m.     Exercise: Prove Fact 2.6

Writing A = [â1ᵀ; …; ânᵀ] in terms of its rows âiᵀ = [ai1 ai2 … aim]:
Fact 2.7: null(A) = set of vectors orthogonal to the rows of A.     Exercise: Prove Fact 2.7
Fact 2.8: rank(A) = number of linearly independent rows â1, …, ân of A
2.10
Inverse of a square matrix
Definition: The inverse of a matrix A ∈ R^{n×n} is a matrix A⁻¹ ∈ R^{n×n} such that
   A⁻¹A = AA⁻¹ = I

Fact 2.9: If an inverse of A exists then it is unique.     Exercise: Prove Fact 2.9

Definition: A matrix is called singular if it does not have an inverse. Otherwise it is called non-singular or invertible.

Fact 2.10: A is invertible if and only if det(A) ≠ 0

2.11
Inverse of a square matrix
If A is invertible,
   A⁻¹ = adj(A) / det(A)    (adjoint matrix = transposed matrix of subdeterminants)

Exercise: Show that [a b; c d]⁻¹ = [d −b; −c a] / (ad − bc)

Fact 2.11: A is invertible if and only if the system of linear equations Ax = y has a unique solution x ∈ R^n for all y ∈ R^n
Fact 2.12: A is invertible if and only if null(A) = {0}
Fact 2.13: A is invertible if and only if range(A) = R^n

Exercise: Prove Facts 2.11-2.12


2.12
Systems of linear equations
   Ax = y,   A ∈ R^{n×m}, y ∈ R^n given,   x ∈ R^m unknown

•  m = n: unique solutions if and only if A is invertible (Fact 2.11)
•  If A is singular the system will have either no solutions or an infinite number of solutions
•  n > m → more equations than unknowns → generally no solution
   Fact 2.14: If A has rank m then x = (AᵀA)⁻¹Aᵀy is the unique minimizer of ‖Ax − y‖
•  n < m → more unknowns than equations → generally infinitely many solutions
   Fact 2.15: If A has rank n then the system has infinitely many solutions. The one with the minimum norm is x = Aᵀ(AAᵀ)⁻¹y
2.13
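A minimal MATLAB sketch illustrating Facts 2.14 and 2.15 on randomly generated systems (the dimensions are arbitrary):

% n > m: least squares solution of Fact 2.14
A = randn(5,2); y = randn(5,1);
x_ls = (A'*A) \ (A'*y);            % x = (A'A)^{-1} A'y minimizes norm(A*x - y)

% n < m: minimum norm solution of Fact 2.15
A = randn(2,5); y = randn(2,1);
x_mn = A' * ((A*A') \ y);          % x = A'(AA')^{-1} y satisfies A*x = y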
Orthogonal matrices
Definition: A matrix is called orthogonal if AAᵀ = AᵀA = I

Fact 2.16: A is orthogonal if and only if its columns are orthonormal. The product of orthogonal matrices is orthogonal. If A is orthogonal then ‖Ax‖ = ‖x‖

Exercise: Prove Fact 2.16

2.14
Eigenvalues and eigenvectors
Definition: A (nonzero) vector w ∈ C^n is called an eigenvector of a matrix A ∈ R^{n×n} if there exists a number λ ∈ C such that Aw = λw. The number λ is then called an eigenvalue of A.

•  Eigenvalues and eigenvectors are in general complex even if A is a real matrix
•  An n×n matrix has n eigenvalues (some may be repeated). They are the solutions of the characteristic polynomial
   det(λI − A) = λⁿ + a1 λ^{n−1} + a2 λ^{n−2} + … + an = 0
•  The n eigenvalues of A are called the spectrum of A

Exercise: Show that if w is an eigenvector then so is aw for any a ∈ C
2.15
Eigenvalues and eigenvectors
Theorem 2.1: (Cayley-Hamilton) Every matrix A is a solution of its characteristic polynomial:
   Aⁿ + a1 A^{n−1} + a2 A^{n−2} + … + an I = 0

Exercise: Show that this implies that all powers A^m for m = 0, 1, … can be written as linear combinations of I, A, A², …, A^{n−1}

Fact 2.17: A is invertible if and only if all its eigenvalues are non-zero
Exercise: Show Fact 2.17 using Fact 2.12

2.16
Diagonalizable matrices
   A wi = λi wi  ⇒  A [w1 w2 … wn] = [w1 w2 … wn] diag(λ1, λ2, …, λn)
i.e. AW = WΛ, with W ∈ C^{n×n} the matrix of eigenvectors and Λ ∈ C^{n×n} the diagonal matrix of eigenvalues

Definition: A is called diagonalizable if W is invertible; in that case A = WΛW⁻¹

Fact 2.18: If the eigenvalues of A are distinct (λi ≠ λj if i ≠ j) then its eigenvectors are linearly independent
Exercise: Show that Fact 2.18 implies that W is invertible

2.17
Symmetric, positive definite and positive semi-definite matrices
Definition: A matrix is called symmetric if A = Aᵀ.
Definition: A symmetric matrix is called positive definite if xᵀAx > 0 for all x ≠ 0. It is called positive semi-definite if xᵀAx ≥ 0.

Fact 2.19: Symmetric matrices have real eigenvalues and orthogonal eigenvectors.
Fact 2.20: A matrix is positive definite if and only if it has (real) positive e-values. It is positive semi-definite if and only if it has (real) non-negative e-values.

Exercise: Show that if A is symmetric then there exist U ∈ R^{n×n} orthogonal and Λ ∈ R^{n×n} diagonal such that A = UΛUᵀ

We use A > 0 (resp. A ≥ 0) as a shorthand for A symmetric, positive (semi-)definite.
  2.18
Singular value decomposition
Fact 2.21: For any A ∈ R^{n×m},
   A = UΣVᵀ
where U ∈ R^{n×n} and V ∈ R^{m×m} are orthogonal and Σ ∈ R^{n×m} is “diagonal” with non-negative elements.

The elements of Σ are called the singular values of A.

2.19
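A minimal MATLAB sketch for a small arbitrary matrix:

A = [1 2 0; 0 1 1];            % arbitrary 2x3 example
[U,S,V] = svd(A);              % A = U*S*V', U and V orthogonal
norm(U*S*V' - A)               % numerically zero
diag(S)'                       % the singular values of A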
State Space: Inputs, outputs and states
•  Mathematical model of a physical system described by
  –  Input variables (denoted by u1, u2, …, um ∈ R)
  –  Output variables (denoted by y1, y2, …, yp ∈ R)
  –  State variables (denoted by x1, x2, …, xn ∈ R)
•  All inputs, states and outputs are real numbers
•  Usually write more compactly as vectors
   u = [u1; u2; …; um] ∈ R^m (input vector),   y = [y1; y2; …; yp] ∈ R^p (output vector),   x = [x1; x2; …; xn] ∈ R^n (state vector)
•  The number of states, n, is called the dimension (or order) of the system

Exercise: Which of the examples in Notes 1 can be described by real vectors? What are these vectors?

2.20
State space: Dynamics
•  Dynamics of the process imply relations between the variables
  –  Differential equations giving the evolution of the states as a function of the states, inputs and possibly time, i.e. we have functions
     fi(·): R^n × R^m × R+ → R,   d/dt xi(t) = fi(x(t), u(t), t),   i = 1, …, n
  –  Algebraic equations giving the outputs as a function of the states, inputs and possibly time, i.e. we have functions
     hi(·): R^n × R^m × R+ → R,   yi(t) = hi(x(t), u(t), t),   i = 1, …, p
•  Equations usually come from “laws of nature”
  –  Newton’s laws for mechanical systems
  –  Electrical laws for circuits
  –  Energy and mass balance for chemical reactions

2.21
In vector form
•  Usually write more compactly by defining
   f(·): R^n × R^m × R+ → R^n,   f(x,u,t) = [f1(x,u,t); …; fn(x,u,t)]
   h(·): R^n × R^m × R+ → R^p,   h(x,u,t) = [h1(x,u,t); …; hp(x,u,t)]
•  Then the state, input and output vectors are linked by
   d/dt x(t) = f(x(t), u(t), t)
   y(t) = h(x(t), u(t), t)
•  State space form
•  f is called the vector field

Exercise: What are the functions f for the pendulum, RLC and opamp examples of Notes 1? What are the dimensions of these systems?
2.22
Linear and autonomous systems
Definition: A system in state space form is called time invariant if its dynamics do not depend explicitly on time:
   ẋ(t) = f(x(t), u(t)),   y(t) = h(x(t), u(t))

Definition: A system in state space form is called autonomous if it is time invariant and has no input variables:
   ẋ(t) = f(x(t)),   y(t) = h(x(t))

Definition: A system in state space form is called linear if the functions f and h are linear, i.e.
   ẋ(t) = A(t) x(t) + B(t) u(t)
   y(t) = C(t) x(t) + D(t) u(t)

Exercise: Which of the systems considered in Notes 1 are autonomous? Which are time invariant?
Exercise: What are the general equations for a linear time invariant system?
Exercise: Which of the systems considered in Notes 1 are linear?

2.23
Higher order differential equations
•  Often “laws of nature” are expressed in terms of higher order differential equations
  –  For example, Newton’s law → second order ODE
•  These can be converted to state space form by defining the lower order derivatives (all except the highest) as states (a sketch of the construction follows below)

Exercise: Convert the following differential equation of order r into state space form:
   d^r y(t)/dt^r + g( y(t), dy(t)/dt, …, d^{r−1}y(t)/dt^{r−1} ) = 0
What is the dimension of the resulting system? Is it autonomous? Under what conditions is it linear?

2.24
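A sketch of the standard construction, choosing the output and its first r−1 derivatives as the states:
   x1(t) = y(t),  x2(t) = dy(t)/dt,  …,  xr(t) = d^{r−1}y(t)/dt^{r−1}
   ⇒  ẋ1(t) = x2(t),  ẋ2(t) = x3(t),  …,  ẋ_{r−1}(t) = xr(t),  ẋr(t) = −g(x1(t), x2(t), …, xr(t))
i.e. ẋ(t) = f(x(t)) with x(t) = (x1(t), …, xr(t)) ∈ R^r.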
Time invariant systems
•  The explicit time dependence can be eliminated by introducing time as an additional state with ṫ = 1
•  The result is a time invariant system of dimension n+1

Exercise: Convert the following time varying system of dimension n into a time invariant system of dimension n+1:
   ẋ(t) = f(x(t), u(t), t),   x ∈ R^n
   y(t) = h(x(t), u(t), t)
Exercise: Repeat for the linear time varying system
   ẋ(t) = A(t) x(t) + B(t) u(t),   x ∈ R^n
   y(t) = C(t) x(t) + D(t) u(t)
Is the resulting time invariant system linear?
2.25
Coordinate transformation
•  What happens if we perform a change of coordinates for the state vector?
•  Restrict attention to time invariant linear systems
   ẋ(t) = A x(t) + B u(t)
   y(t) = C x(t) + D u(t)
and linear changes of coordinates
   x̂(t) = T x(t),   T ∈ R^{n×n},  det(T) ≠ 0
•  In the new coordinates we get another linear system
   dx̂(t)/dt = TAT⁻¹ x̂(t) + TB u(t)
   y(t) = CT⁻¹ x̂(t) + D u(t)
•  This could be useful: the transformed system may be simpler
2.26
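A minimal MATLAB sketch of such a transformation, using the RLC matrices of Notes 1 (R = 10, L = 1, C = 0.01), taking vC as the output for illustration, and an arbitrary invertible T:

A = [0 1/0.01; -1 -10];  B = [0; 1];  C = [1 0];  D = 0;
T = [1 1; 0 1];                 % any invertible change of coordinates
Ahat = T*A/T;                   % T*A*inv(T)
Bhat = T*B;
Chat = C/T;                     % C*inv(T)
Dhat = D;
eig(A), eig(Ahat)               % similarity transformation: eigenvalues unchanged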
Solution of state space equations
•  Only autonomous systems for the time being:
   ẋ(t) = f(x(t)),   y(t) = h(x(t))
•  Non-autonomous systems are essentially the same; the formal mathematics is more complicated
•  What is the “solution” of the system?
  –  Where do we start? Say x(t0) = x0 ∈ R^n, at time t0 ∈ R
  –  How long do we go? Say until some t1 ≥ t0
•  Would like to find functions
   x(·): [t0, t1] → R^n,   y(·): [t0, t1] → R^p
“satisfying” the system equations for the given conditions

2.27
Solution of state space equations
Definition: A pair of functions x(·): [t0, t1] → R^n, y(·): [t0, t1] → R^p is a solution of the state space system over the interval [t0, t1] starting at x0 ∈ R^n if
1.  x(t0) = x0
2.  ẋ(t) = f(x(t)),  ∀t ∈ [t0, t1]
3.  y(t) = h(x(t)),  ∀t ∈ [t0, t1]

•  Note that x(·) implicitly defines y(·)
•  Therefore the difficulty is finding the solution to the differential equation
•  Because the system is autonomous, the starting time is also unimportant

Exercise: Show that x(t) is a solution over the interval [0, T] if and only if x(t−t0) is a solution over the interval [t0, t0+T] starting at the same initial state.

2.28
Existence and uniqueness of solutions
•  For autonomous systems the “only” thing we need to do is, given f(·): R^n → R^n, x0 ∈ R^n, T ≥ 0, find a function x(·): [0, T] → R^n such that
   x(0) = x0 and ẋ(t) = f(x(t)),  ∀t ∈ [0, T]
•  Does such a function exist for some T?
•  Is it unique, or can there be more than one?
•  Do such functions exist for all T?
•  Can we compute them even if they do?
•  Clearly all these are important for physical models
•  Unfortunately the answer is sometimes “no”

Exercise: Does the function x(·) have to be continuous? Differentiable?

2.29
Examples
•  Illustrate potential problems on 1-dimensional systems
•  No solutions: consider the system
   ẋ(t) = −sgn(x(t)) = { −1 if x(t) ≥ 0;   1 if x(t) < 0 }
starting at x0 = 0. The system has no solution for any T.
Exercise: Compute the solutions for x0 = 1 and x0 = −1. Are they defined for all T?
•  Many solutions: consider the system ẋ(t) = 3 x(t)^{2/3}, x0 = 0.
For any a ≥ 0 the following is a solution for the system:
   x(t) = { (t − a)³ if t ≥ a;   0 if t < a }
Exercise: Prove this is the case.
2.30
Examples
•  No solutions for T large: consider the system
   ẋ(t) = 1 + x(t)²,   x0 = 0.
•  The solution is x(t) = tan(t)
Exercise: Prove this. What happens at t = π/2?

•  So many things can go wrong!
•  Fortunately many important systems are “well-behaved”

Definition: A function f: R^n → R^n is called Lipschitz if there exists λ > 0 such that for all x, x̂ ∈ R^n
   ‖f(x) − f(x̂)‖ ≤ λ ‖x − x̂‖
2.31
Lipschitz functions
•  λ is called the Lipschitz constant of f
•  Lipschitz functions are continuous but not necessarily differentiable
•  All differentiable functions with bounded derivatives are Lipschitz
•  Linear functions are Lipschitz

Exercise: Show that any other λ' ≥ λ is also a Lipschitz constant
Exercise: Show that f(x) = |x| (x a real number) is Lipschitz. What is the Lipschitz constant? Is the function differentiable?
Exercise: Show that f(x) = Ax is Lipschitz. What is a Lipschitz constant? Is the function differentiable?
Exercise: Show that the f(x) in the three pathological examples on pp. 2.30-2.31 are not Lipschitz.

2.32
Existence and uniqueness
Theorem 2.2: If f is Lipschitz then the differential equation
   ẋ(t) = f(x(t)), with initial condition x0 ∈ R^n
has a unique solution x(·): [0, T] → R^n for all T ≥ 0 and all x0 ∈ R^n.

•  So state space systems defined by Lipschitz vector fields are well behaved:
  •  They have unique solutions
  •  Over arbitrarily long horizons
  •  Wherever they start

2.33
Continuity
•  Even if a unique solution exists, this does not mean we can find it
•  Sometimes we can: see the pathological examples above and linear systems (Notes 3)
•  Usually we have to resort to simulation on a computer
•  Construct an approximate numerical solution
•  It helps if solutions that start close remain close

Theorem 2.3: If f is Lipschitz then the solutions starting at x0, x̂0 ∈ R^n are such that for all t ≥ 0
   ‖x(t) − x̂(t)‖ ≤ e^{λt} ‖x0 − x̂0‖

•  Continuous dependence on the initial condition


2.34
Non-autonomous systems
•  Formally, we would expect that given
   f(·): R^n × R^m × R+ → R^n,   x0 ∈ R^n,   t1 ≥ t0 ≥ 0,   u(·): [t0, t1] → R^m
the solution would be a function x(·): [t0, t1] → R^n with
   x(t0) = x0 and ẋ(t) = f(x(t), u(t), t),  ∀t ∈ [t0, t1]
•  This is OK if f is continuous in u and t and u(·) is continuous in t
•  Unfortunately discontinuous u(·) are quite common
•  Fortunately there is a fix, but the math is harder
•  Roughly speaking we need
  –  f(x,u,t) Lipschitz in x, continuous in u and t
  –  u(t) continuous for “almost all” t

Exercise: What goes wrong in the case of discontinuity?

2.35
Signal- und Systemtheorie II
D-ITET, Semester 4

Notes 3: Continuous LTI systems in time domain

John Lygeros

Automatic Control Laboratory, ETH Zürich

www.control.ethz.ch
LTI systems in state space form
•  General state space systems:
   d/dt x(t) = f(x(t), u(t), t)
   y(t) = h(x(t), u(t), t)
with u = [u1; …; um] ∈ R^m (input vector), y = [y1; …; yp] ∈ R^p (output vector), x = [x1; …; xn] ∈ R^n (state vector)

•  LTI systems are linear and time invariant, i.e.
   ẋ(t) = A x(t) + B u(t),   A ∈ R^{n×n},  B ∈ R^{n×m}
   y(t) = C x(t) + D u(t),   C ∈ R^{p×n},  D ∈ R^{p×m}

3.2
Block diagram representation
[Block diagram: u(t) enters through B; B u(t) and A x(t) are summed to form ẋ(t), which is integrated to give x(t); the output is formed as y(t) = C x(t) + D u(t)]

3.3
LTI systems in state space form
For LTI systems the state space equations are
  –  n coupled, first order, linear differential equations
  –  p linear algebraic equations
  –  with time invariant coefficients

   ẋ1(t) = a11 x1(t) + … + a1n xn(t) + b11 u1(t) + … + b1m um(t)
   ẋ2(t) = a21 x1(t) + … + a2n xn(t) + b21 u1(t) + … + b2m um(t)
   ⋮
   ẋn(t) = an1 x1(t) + … + ann xn(t) + bn1 u1(t) + … + bnm um(t)
   y1(t) = c11 x1(t) + … + c1n xn(t) + d11 u1(t) + … + d1m um(t)
   y2(t) = c21 x1(t) + … + c2n xn(t) + d21 u1(t) + … + d2m um(t)
   ⋮
   yp(t) = cp1 x1(t) + … + cpn xn(t) + dp1 u1(t) + … + dpm um(t)

with A = [aij] ∈ R^{n×n}, B = [bij] ∈ R^{n×m}, C = [cij] ∈ R^{p×n}, D = [dij] ∈ R^{p×m}

3.4
Examples
RLC circuit:
   C dvC(t)/dt = iC(t) = iL(t)
   L diL(t)/dt = vL(t) = v1(t) − vR(t) − vC(t)
   ẋ(t) = [ 0   1/C ;  −1/L   −R/L ] x(t) + [ 0 ;  1/L ] u(t)

Amplifier circuit:
   dvC1(t)/dt = −vC1(t)/(R1C1) + v1(t)/(R1C1)
   dvC0(t)/dt = −vC0(t)/(R0C0) − vC1(t)/(R1C0) + v1(t)/(R1C0)
   ẋ(t) = [ −1/(R1C1)   0 ;  −1/(R1C0)   −1/(R0C0) ] x(t) + [ 1/(R1C1) ;  1/(R1C0) ] u(t)
   y(t) = [ 0  −1 ] x(t)

cf. pendulum (nonlinear):
   ẋ(t) = [ x2(t) ;  −(d/m) x2(t) − (g/l) sin x1(t) ]
3.5
System solution
•  Since the system is time invariant, assume we are given
  –  Initial condition x0 ∈ R^n
  –  The input values u(·): [0, T] → R^m
•  Compute the system solution x(·): [0, T] → R^n with
   x(0) = x0 and ẋ(t) = A x(t) + B u(t),  ∀t ∈ [0, T]
•  If we can do this, then the output y(·): [0, T] → R^p is
   y(t) = C x(t) + D u(t),  ∀t ∈ [0, T]
•  For simplicity assume the input is a continuous function of time

3.6
State solution
The system solution is
   x(t) = Φ(t) x0 + ∫₀ᵗ Φ(t − τ) B u(τ) dτ
where
   Φ(t) = e^{At} = I + At + A²t²/2! + … + Aᵏtᵏ/k! + … ∈ R^{n×n}
is the state transition matrix, and the integral is computed element by element
(cf. the Taylor series expansion e^{at} = 1 + at + a²t²/2! + … if a ∈ R)
3.7
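A minimal MATLAB sketch of this formula for a constant input, using expm for Φ(t) and a numerical approximation of the convolution integral; A, B, x0, u and t are arbitrary illustrative choices:

A = [0 2; -1 -3]; B = [0; 1]; x0 = [1; 0];
u = 1; t = 2;                              % constant input, evaluation time
Phi = @(s) expm(A*s);                      % state transition matrix e^{As}
taus = linspace(0, t, 1000);
F = zeros(2, numel(taus));
for k = 1:numel(taus)
    F(:,k) = Phi(t - taus(k))*B*u;         % integrand Phi(t - tau) B u(tau)
end
x_t = Phi(t)*x0 + trapz(taus, F, 2)        % x(t) = Phi(t) x0 + convolution term

>> [~,X] = ode45(@(s,x) A*x + B*u, [0 t], x0); X(end,:)'   % cross-check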
Output solution
Simply combine the state solution
   x(t) = Φ(t) x0 + ∫₀ᵗ Φ(t − τ) B u(τ) dτ
with the output map
   y(t) = C x(t) + D u(t)
to obtain
   y(t) = C Φ(t) x0 + ∫₀ᵗ C Φ(t − τ) B u(τ) dτ + D u(t)

3.8
Transition matrix properties
Fact 3.1: The state transition matrix is such that
1.  Φ(0) = I
2.  d/dt Φ(t) = A Φ(t)
3.  Φ(−t) = [Φ(t)]⁻¹
4.  Φ(t1 + t2) = Φ(t1) Φ(t2)

Exercise: Prove properties 1, 2 and (harder) that Φ(t)Φ(−t) = Φ(−t)Φ(t) = I
Exercise: By invoking the existence discussion from Notes 2, show property 4 (harder).

3.9
Proof of solution formula (sketch)
•  Clearly
   x(0) = Φ(0) x0 + ∫₀⁰ Φ(0 − τ) B u(τ) dτ = x0
•  To show that ẋ(t) = A x(t) + B u(t), use the Leibniz formula for differentiating integrals:
   d/dt ∫_{f(t)}^{g(t)} l(t, τ) dτ = l(t, g(t)) dg(t)/dt − l(t, f(t)) df(t)/dt + ∫_{f(t)}^{g(t)} ∂l(t, τ)/∂t dτ

3.10
Example: RC circuit
•  Inputs: u(t) = vS(t) (source voltage)
•  States: x(t) = vC(t)
•  Initial condition: x0 = vC(0)
[Figure: RC circuit — source vS(t) in series with resistor R and capacitor C; capacitor voltage vC(t), current iC(t)]
•  State space equations
   ẋ(t) = −(1/RC) x(t) + (1/RC) u(t)
•  Response to a step with amplitude 1 V
   x(t) = e^{−t/RC} x0 + (1 − e^{−t/RC})

Exercise: Derive the state space equations. What are the “matrices” A and B?
Exercise: Derive the step response
3.11
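A minimal MATLAB sketch comparing the step response formula above with a numerical solution; R, C and x0 are illustrative values:

R = 1e3; C = 1e-3; x0 = 0;                 % RC = 1 s, capacitor initially discharged
t = linspace(0, 5, 200);
x_formula = exp(-t/(R*C))*x0 + (1 - exp(-t/(R*C)));        % 1 V step response
[T,X] = ode45(@(t,x) -x/(R*C) + 1/(R*C), t, x0);           % same system with u(t) = 1
plot(t, x_formula, T, X, '--'); grid; xlabel('t [s]'); ylabel('v_C(t) [V]');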
State solution structure
The solution consists of two parts:
   x(t) = Φ(t) x0 + ∫₀ᵗ Φ(t − τ) B u(τ) dτ
   Total transition = Zero Input Transition (ZIT) + Zero State Transition (ZST)

•  u(t) = 0 ∀t ⇒ x(t) = ZIT: a linear function of the initial state
•  x0 = 0 ⇒ x(t) = ZST:
  •  Linear function of the input
  •  Convolution integral

3.12
Superposition principle
•  The ZST is linear in the input trajectory
•  ZST under input u1(·): [0, t] → R^m:
   x1(t) = ∫₀ᵗ Φ(t − τ) B u1(τ) dτ
•  ZST under input u2(·): [0, t] → R^m:
   x2(t) = ∫₀ᵗ Φ(t − τ) B u2(τ) dτ
•  ZST under input u(τ) = a1 u1(τ) + a2 u2(τ) for τ ∈ [0, t], a1, a2 ∈ R:
   x(t) = ∫₀ᵗ Φ(t − τ) B u(τ) dτ = ∫₀ᵗ Φ(t − τ) B (a1 u1(τ) + a2 u2(τ)) dτ = a1 x1(t) + a2 x2(t)
3.13
Output solution structure
The solution consists of two parts:
   y(t) = C Φ(t) x0 + C ∫₀ᵗ Φ(t − τ) B u(τ) dτ + D u(t)
   Total response = Zero Input Response (ZIR) + Zero State Response (ZSR)

•  u(t) = 0 ∀t ⇒ y(t) = ZIR: a linear function of the initial state
•  x0 = 0 ⇒ y(t) = ZSR:
  •  Linear function of the input (cf. the linear system definition in SS1)
  •  Convolution integral

3.14
Zero input transition
•  If we know the state transition matrix we can (in principle) compute all solutions of the linear system
•  Given the matrix A we would like to compute
   Φ(t) = e^{At} = I + At + … + Aᵏtᵏ/k! + …
•  Many ways of doing this
  –  Summing the infinite series (in some rare cases!)
  –  Using eigenvalues and eigenvectors (this set of notes)
  –  Using the Laplace transform (later)
  –  Numerically (later)
•  Using eigenvalues, at least two methods
  –  Using the Cayley-Hamilton Theorem (Theorem 2.1)
  –  Using the eigenvalue decomposition (used here)
3.15
E-values and E-vectors: Rough idea
•  Recall that (p. 2.15) A wi = λi wi
•  ZIT:
   ẋ(t) = A x(t) ⇒ x(t) = Φ(t) x(0)
   x(0) = wi ⇒ ẋ(0) = A wi = λi wi
•  i.e. if we start on an eigenvector we stay on the eigenvector
•  ‖x(t)‖ increases/decreases depending on the sign of λ
•  E.g. n = 2, λ1 < 0, λ2 > 0
[Figure: phase plane with eigenvector directions w1 and w2; trajectories contract along w1 (λ1 < 0) and expand along w2 (λ2 > 0)]

3.16
Transition  matrix  computation
•  Change  of  coordinates  using  matrix  of  eigenvectors
•  Assume  matrix  diagonalizable  (p.2.17)

AW = WΛ ⇒ A = WΛW^{−1}     (W: matrix of e-vectors)
•  Therefore  (Fact  2.18):  Φ(t) = e^{At} = W e^{Λt} W^{−1}
where  e^{Λt} = diag(e^{λ1 t}, …, e^{λn t})
Exercise:  Prove  by  induction  that  A^k = WΛ^k W^{−1}
Exercise:  Prove  that  Φ(t) = W e^{Λt} W^{−1}
3.17
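A sketch of this computation in Matlab, comparing the eigendecomposition formula against expm (the matrix is the RLC example of the next page):

A = [0 2; -1 -3];
[W, Lam] = eig(A);              % eigenvectors W, diagonal eigenvalue matrix Lam
t = 0.5;
Phi1 = W*expm(Lam*t)/W;         % W e^(Lambda t) W^(-1)
Phi2 = expm(A*t);               % direct computation
norm(Phi1 - Phi2)               % ~0 for diagonalizable A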
Example:  RLC  circuit
•  Recall  that  (p.  1.14)

d/dt [v_C(t); i_L(t)] = [0  1/C; −1/L  −R/L] [v_C(t); i_L(t)] + [0; 1/L] v_s(t)
•  Set  R = 3,  C = 0.5,  L = 1:   A = [0  2; −1  −3]
•  Eigenvalues:  λ1 = −1,  λ2 = −2
•  Eigenvectors:  w1 = [2; −1],  w2 = [1; −1]
[Circuit sketch: series RLC circuit driven by v_s(t), with v_R(t), v_L(t), v_C(t) and inductor current i_L(t)]
3.18
RLC  circuit:  Transition  matrix
e^{At} = W e^{Λt} W^{−1} = [2  1; −1  −1] [e^{−t}  0; 0  e^{−2t}] [1  1; −1  −2]
Φ(t) = [2e^{−t} − e^{−2t}    2e^{−t} − 2e^{−2t};   −e^{−t} + e^{−2t}    −e^{−t} + 2e^{−2t}]
Using  Matlab:
>>  A=[0  2;-1  -3];
>>  [W,L]=eig(A)
W  =
     0.8944      -0.7071
    -0.4472        0.7071
L  =
    -1          0
      0        -2

3.19
RLC  circuit:  ZIT
x(0) = w1 ⇒ x(t) = Φ(t) ⎡ 2 ⎤ = e −t ⎡ 2 ⎤
⎣ −1 ⎦ ⎣ −1 ⎦

x(0) = w2 ⇒ x(t) = Φ(t) ⎡ 1 ⎤ = e −2t ⎡ 1 ⎤


⎣ −1 ⎦ ⎣ −1 ⎦
x(0) = a1w1 + a2 w2 ⇒ x(t) = Φ(t)(a1w1 + a2 w2 )
⇒ x(t) = a1e −t w1 + a2e −2t w2 ⎯t→∞
⎯⎯ →⎡ 0 ⎤
⎣0⎦
•  Because  w1  and  w2  are  linearly  independent  they  form  a  basis  for  R^2  (p.  2.8)
•  Therefore  all  initial  conditions  can  be  written  as  x(0) = a1 w1 + a2 w2
•  Therefore  ZIT → 0  for  all  initial  conditions

3.20
RLC  circuit:  Step  input
ZST with u(t) = V , for t ≥ 0
t

x(t) = ∫ Φ(t − τ )Bu(τ ) d τ


0
t

= Φ(t) ∫ Φ(−τ )BV d τ


0

= [−2e^{−t} + e^{−2t} + 1;   e^{−t} − e^{−2t}] V   →   [V; 0]   as  t → ∞

3.21
Notes:  Diagonalizable  matrices
•  For  diagonalizable  matrices,  state  transition  matrix  linear  
combination  of  terms  of  the  form eλit
•  Generally λi = σ ± jω ∈C,σ ,ω ∈
•  So  ZIT  is  a  linear  combination  of  terms  of  the  form
–  1  if  λi = 0  (σ = 0, ω = 0)
–  e^{σt}  if  λi = σ  (σ ≠ 0, ω = 0)
–  sin(ωt)  and  cos(ωt)  if  λi = ±jω  (σ = 0, ω ≠ 0)
–  e^{σt} sin(ωt)  and  e^{σt} cos(ωt)  if  λi = σ ± jω  (σ ≠ 0, ω ≠ 0)
•  Part  of  the  ZIT  corresponding  to  λi = σ ± jω  is
–  Constant  if  σ = 0, ω = 0
–  Converges  to  0  if  σ < 0
–  Periodic  if  σ = 0, ω ≠ 0
–  Goes  to  infinity  if  σ > 0
3.22
Typical  ZIT  for  diagonalizable  matrices

Im(λ)

Re(λ)

3.23
Stability:  Zero  input  transition
•  Consider  the  system x(t)  = Ax(t) + Bu(t)
•  Let  x(t)  be  its  ZIT x(t) = Φ(t)x0 = e At x0

Definition: The system is called stable if for all ε > 0


there exists δ > 0 such that
if x0 ≤ δ then x(t) ≤ ε for all t ≥ 0.
Otherwise the system is called unstable.

•  i.e.  if  the  state  starts  small  it  stays  small  (p.  2.4)
•  or  you  can  keep  the  state  as  close  as  you  want  to  
0  by  starting  close  enough
3.24
Asymptotic  stability
Definition: The system is called asymptotically stable
if it is stable and in addition
x(t) → 0 as t → ∞

•  i.e.  not  only  do  we  stay  close  to  0  but  also  
converge  to  it
Exercise:  Show  that x(t) → 0
if  and  only  if x(t) → 0
•  Note  that
–  Definitions  do  not  require  diagonalizable  matrices
–  In  fact  we,  will  see  that  they  also  work  for  
nonlinear  systems  (Notes  7)

3.25
Diagonalizable  matrices
Theorem 3.1: System with diagonalizable A matrix is:
•  Stable if and only if Re[λi ] ≤ 0,∀i
•  Asymptotically stable if and only if Re[λi ] < 0 ∀i
•  Unstable if and only if ∃i : Re[ λi ] > 0

•  Proof:  By  inspection!


λi t
•  Transition  matrix  linear  combination  of e
–  Re[
                                                       for  all  initial  conditions  ZIT  tends  
λi ] < 0,∀i ⇒
to  zero
–                                                         ZIT  remains  bounded  and  for  
Re[λi ] ≤ 0 ∀i ⇒
some  initial  conditions  is  periodic  (or  constant)
–                                                         for  some  initial  conditions  ZIT  
∃i : Re[λi ] > 0 ⇒
tends  to  infinity
3.26
Phase  plane  plots
•  For two state variables (n=2)
•  x2(t) vs x1(t) parameterized by t for different initial states
Trajectory plot Phase plane plot
x2(t)/x1(t) vs t x2(t) vs x1(t)

w1
Stable
node

w2

3.27
Phase  plane  plots
w1
Saddle
point

w2

w1
Unstable
node

w2
3.28
Phase  plane  plots

Stable
focus

Center

3.29
Phase  plane  plots

Unstable
focus


3.30
Non-­‐‑diagonalizable  matrices  (examples)
A1 = [0  1; 0  0]  ⇒  e^{A1 t} = [1  t; 0  1]
A2 = [−1  1; 0  −1]  ⇒  e^{A2 t} = [e^{−t}  t e^{−t}; 0  e^{−t}]
Exercise:  What  are  the  eigenvalues  of  A1  and  A2?  What  are  the  eigenvectors?  What  goes  wrong  with  their  diagonalization?
Exercise:  Prove  the  formulas  for  the  transition  matrices
Exercise:  Repeat  the  computations  for  the  matrices
A3 = [−1  1  0; 0  −1  1; 0  0  −1],   A4 = [−1  1  0; 0  −1  0; 0  0  −1]

3.31
Non-­‐‑diagonalizable  matrices  (general)
•  Distinct  e-values  →  matrix  diagonalizable  (Fact  2.18)
•  Assume  some  e-value  is  repeated  r  times,  λ1 = λ2 = … = λr = σ ± jω
•  In  general,  the  ZIT  is  a  linear  combination  of  terms  of  the  form
–  1, t, t^2, …, t^{r−1}  if  σ = 0, ω = 0
–  e^{σt}, t e^{σt}, …, t^{r−1} e^{σt}  if  σ ≠ 0, ω = 0
–  sin(ωt), cos(ωt), t sin(ωt), …, t^{r−1} cos(ωt)  if  σ = 0, ω ≠ 0
–  e^{σt} sin(ωt), e^{σt} cos(ωt), …, t^{r−1} e^{σt} cos(ωt)  if  σ ≠ 0, ω ≠ 0
•  Can  be  shown  using  generalized  eigenvectors  and  Jordan  
canonical  form

3.32
Non-­‐‑diagonalizable  matrices  (general)

Note  that:
•  If σ < 0,
–   
t k eσ t , t k eσ t cos ω t, t k eσ t sin ω t ⎯t→∞
⎯⎯ →0
–  Hence  ZIT  tends  to  zero  (for  some  initial  states)
•  If σ > 0,
–   
t k eσ t , t k eσ t cos ω t, t k eσ t sin ω t ⎯t→∞
⎯⎯ →∞
–  Hence  ZIT  tends  to  infinity  (for  some  initial  states)

•  If  
σ = 0,
1,cos ω t, sin ω t
–                                                       remain  bounded
–   
t k , t k cos ω t, t k sin ω t ⎯t→∞ ⎯⎯ → ∞ for k ≥ 1
–  ZIT  may  remain  bounded  or  tend  to  infinity
–  Cannot  tell  just  by  looking  at  e-­‐‑values  
3.33
Non-­‐‑diagonalizable  matrices:  Stability
Theorem 3.2: The system is:
•  Asymptotically stable if and only if Re[λi ] < 0 ∀i
•  Unstable if ∃i : Re[ λi ] > 0
•  Subtle  differences  with  diagonalizable  case
–  Asymptotic  stability  condition  the  same
–  Instability  condition  sufficient  but  not  necessary
•  Reason  is  that  if                                                                                          then  
∀i Re[λi ] ≤ 0 but ∃i Re[λi ] = 0
stability  not  determined    by  e-­‐‑values  alone.
–  ZIT  may  remain  bounded  for  all  initial  conditions
–  ZIT  may  go  to  infinity  for  some  initial  conditions
–  If  matrix  non-­‐‑diagonalizable,  but  no  e-­‐‑value  with                              
                                   is  repeated  then  Theorem  3.1  still  applies
Re[λi ] = 0
3.34
Zero  state  transition:  Dirac  function
•  Can  be  thought  of  as  a  function  of  time  which  is
–  Infinite  at  t = 0,  zero  everywhere  else:  δ(t) = ∞  if  t = 0,  δ(t) = 0  if  t ≠ 0
–  Satisfies  ∫_{−∞}^{∞} δ(t) dt = 1
•  Can  be  thought  of  as  the  limit  as  ε → 0  of  (among  others)
δ_ε(t) = 0  if  t < 0,   1/ε  if  0 ≤ t < ε,   0  if  t ≥ ε
[Sketch: rectangular pulse of height 1/ε over the interval [0, ε)]

3.35
Impulse  transition  h(t)  (n=m=1)
•  n = m = 1  ⇒  x(t) ∈ R,  u(t) ∈ R,   ẋ(t) = a x(t) + b u(t),  a ∈ R,  b ∈ R
•  State  impulse  transition  h(t)  is  the  ZST  (x0 = 0)  for  u(t) = δ(t):
h(t) = ∫_0^t Φ(t − τ) B δ(τ) dτ = e^{at} ∫_0^t e^{−aτ} b δ(τ) dτ = e^{at} b
Exercise:  Show  that  the  impulse  transition  is  also  the  ZIT  for  an  appropriate  x0

•  General  ZST  convolution  of  impulse  transition  with  input


t t
x(t) = ∫ Φ(t − τ )Bu(τ )dτ = ∫ e a(t−τ )bu(τ ) d τ
0 0
t
= ∫ h(t − τ )u(τ ) d τ = (h ∗ u)(t)
0
3.36
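A numerical illustration of the convolution formula in the scalar case (a sketch; the values a = −1, b = 1 and the test input are arbitrary choices, with lsim used as reference):

a = -1; b = 1;                         % scalar system dx/dt = a x + b u
dt = 1e-3; t = 0:dt:5;
u = sin(2*t);                          % arbitrary test input
h = b*exp(a*t);                        % impulse transition h(t) = e^(a t) b
x_conv = conv(h, u)*dt;                % discretised convolution (h*u)(t)
x_conv = x_conv(1:numel(t));
x_ref  = lsim(ss(a, b, 1, 0), u, t);   % reference ZST
max(abs(x_conv(:) - x_ref(:)))         % small discretisation error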
Example:  RC  circuit  (p.  3.11)
•  For  the  RC  circuit:  A = −1/(RC) ∈ R,   B = 1/(RC) ∈ R,   Φ(t) = e^{−t/RC}
•  Impulse  transition:  h(t) = Φ(t) b = (1/(RC)) e^{−t/RC}
•  Unit  step  response:  ZST  with
u(t) = 1  for  t ≥ 0,  0  for  t < 0   ⇒   x(t) = ∫_0^t h(t − τ)·1· dτ = 1 − e^{−t/RC}
3.37
Impulse  transition  H(t)  (general)
•  For  general  n,  m  impulse  transition  given  by  matrix
⎡ h (t) . . h1m (t) ⎤
⎢ 11 ⎥
⎢ ⎥
H (t) = ⎢ . . . .
⎥ = Φ(t)B ∈ n×m

. . . .
⎢ ⎥
⎢⎣ hn1 (t) . . hnm (t) ⎥

•  hij(t)  equal  to  xi(t)  when
x(0) = 0
–    H (t) = Φ(t)B
u j (t) = δ (t), uk (t) = 0 k ≠ j
–   
•  Again,  ZST  convolution  of  input  with  impulse  transition  
x(t) = (H ∗ u)(t) Integral computed
element by element
3.38
Output  impulse  response  K(t)
•  Usually  interested  in  input-­‐‑output  behavior
•  Output  impulse  response:  output  solution  to
–  Input  δ(t)
–  Initial  state  x(0)=0
•  Combine  impulse  transition  formula  and  output  
map,  output  impulse  response  given  by

K (t) = CΦ(t)B + Dδ (t) ∈R p×m

and  output  ZSR  to  input  u(t)  is Exercise:  Verify    


this  using  the  
y(t) = (K ∗ u)(t) formula  on  p.  3.8
3.39
Stability  with  inputs:  Zero  state  transition
 = Ax(t) + Bu(t)
x(t)
•  Consider  the  system  
t
•  Zero  state  transition ∫
x(t) = Φ(t − τ )Bu(τ ) d τ
0

Theorem 3.3: Assume that Re[λi ] < 0 ∀i . Then there


exists α ≥ 0 such that ZST, x(t), satisfies
u(t) ≤ M ∀t ≥ 0 ⇒ x(t) ≤ α M ∀t ≥ 0
If in addition u(t) ⎯⎯⎯
→ 0 then x(t) ⎯⎯⎯
t→∞
→ 0. t→∞

•  So  small  inputs  lead  to  small  states.  If  in  addition  


the  input  goes  to  zero,  so  does  the  state
•  Asymptotic  stability  needed,                                    not  enough
Re[λi ] ≤ 0
3.40
Stability  with  inputs:  Full  response
 = Ax(t) + Bu(t)
x(t)

y(t) = Cx(t) + Du(t)

•  Complete  response  =  ZIR  +  ZSR


•  If                                              
Re[λi ] < 0 ∀i
•  ZIT/ZIR  and  ZST/ZSR  small  if  input  and  x(0)  small
•  Bounded  input,  bounded  state  (BIBS)  property
•  Bounded  input,  bounded  output  (BIBO)  property
•  ZIT/ZIR  and  ZST/ZSR  tend  to  0  if  input  tends  to  0
•  Hence  output  tends  to  0  if  input  tends  to  0
3.41
Signal-­‐‑  und  Systemtheorie  II  
D-­‐‑ITET,  Semester  4  
 
Notes  4:  Energy,  Controllability,  
Observability
John Lygeros

Automatic  Control  Laboratory,  ETH  Zürich


www.control.ethz.ch
Wien  oscillator:  Circuit
i0 (t)
(k −1)R
i1 (t) R iin
- iout
- + +v
vR - in + +
C1 R1
+ - v0 (t)
+ vC (t)
R2 C2
- vC2 (t)
1

Ideal amplifier iin = 0 ⇒ i1 (t) = i0 (t) ⎫



⎧ vC (t) ⎪
⎪ i1 (t) = − 2

⎪ R ⎬ ⇒ v0 (t) = kvC2 (t)
vin = 0 ⇒ vR (t) = vC (t) ⇒ ⎨
2
⎪ vC (t) − v0 (t) ⎪
i0 (t) = 2 ⎪
⎪ (k −1)R ⎪
⎩ ⎭
4.2
Wien  oscillator:  State  equations
•  Linear circuit
•  State Variables: vC (t),vC (t)
1 2
vC (t) dvC (t) dvC 2 (t)
⎡ v (t) ⎤ •  KCL:
2
+ C1 1
+ C2 =0
x(t) = ⎢ C1
⎥ ∈R 2 R2 dt dt
⎢ v (t) ⎥ dvC (t)
⎢⎣ C2 ⎥⎦ •  KVL: v (t) − v (t) − R C
C2 C1 1 1
1
− kvC (t) = 0
dt 2

•  Input Variable: none


(autonomous)
⎡ 1 1− k ⎤
⎡ v (t) ⎤ ⎡ v (t) ⎤ ⎢ − ⎥
d ⎢ 1 C
⎥ = A ⎢ C1 ⎥ dx(t) ⎢ R1C1 R1C1 ⎥
=⎢ ⎥ x(t)
dt ⎢ vC (t) ⎥ ⎢ v (t) ⎥ dt 1 1 k −1 ⎥
⎢⎣ 2 ⎥⎦ ⎢⎣ C2 ⎥⎦ ⎢ − +
⎢ R1C2 R2C2 R1C2 ⎥
⎣ ⎦

4.3
Wien  oscillator:  Response
•  For  simplicity  set  R1 = R2 = R,  C1 = C2 = C
•  Autonomous  system  (ZIT)
•  Stability  determined  by  the  sign  of  the  real  part  of  the  eigenvalues
•  Eigenvalues  are  the  roots  of  the  characteristic  polynomial
det(λI − A) = 0  ⇒  λ^2 + ((3 − k)/(RC)) λ + 1/(RC)^2 = 0
•  For  second  order  polynomials  aλ^2 + bλ + c = 0  the  sign  of  the  real  part  of  the  roots  is  determined  by  the  signs  of  a,  b,  c
•  This  is  NOT  true  for  higher  order  polynomials;  there  we  need  the  Hurwitz  test

4.4
Wien  oscillator:  Stability
a, b, c  same  sign  ⇔  ∀i, Re[λi] < 0  ⇔  Asymptotically  stable
a, b, c  not  same  sign  ⇔  ∃i, Re[λi] > 0  ⇔  Unstable
Some  of  a, b, c = 0  →  Degenerate  case
Exercise:  Prove  this
k < 3  ⇔  ∀i, Re[λi] < 0  ⇒  Response  goes  to  0
k = 3  ⇔  λi = ± j/(RC)  ⇔  Response  oscillates  with  frequency  ω = 1/RC
k > 3  ⇔  Re[λi] > 0  ⇒  Response  goes  to  infinity  (generally)

4.5
Wien  oscillator:  Eigenvalue  locus
•  Eigenvalues  are  λ = (k − 3 ± √((k − 1)(k − 5))) / (2RC)
•  Real  and  negative  for  0 < k ≤ 1
•  Complex  with  negative  real  part  for  1 < k < 3
•  Imaginary  for  k = 3
•  Complex  with  positive  real  part  for  3 < k < 5
•  Real  and  positive  for  k ≥ 5
•  Roots  real  and  equal  for  k = 1  or  5  (critical  damping)
Exercise:  Show  this
Exercise:  Simulate  the  Wien  oscillator  for  k  =  0.5,  2,  3,  4,  6  and  plot  x1  vs.  x2
Exercise:  Plot  the  locus  of  the  e-values  as  k  goes  from  0  to  infinity  (in  Matlab)

4.6
System  energy
•  For  the  Wien  oscillator
1 1 1 ⎡ C 0 ⎤
E(t) = C1vC2 (t) + C2 vC2 (t) = x(t)T ⎢ 1 ⎥ x(t)
2 1
2 2
2 ⎢ 0 C2 ⎥
⎣ ⎦
1
•  Quadratic  function  of  the  state E(t) = x(t)T Qx(t)
2
•  Matrix  Q  
–  Symmetric  (Q=QT)  (in  this  case  diagonal)
–  Positive  definite Q > 0, i.e. x Qx > 0 ∀x ≠0
T

•  Any  quadratic  with  Q  that  satisfies  these  properties  


serves  as  an  “energy  like”  function Exercise:  Find  Q̂
•  Example:  Coordinate  change such  that
x̂(t) = Tx(t)
                                           for  T  invertible   1
E(t) = x̂(t)T Q̂x̂(t)
  2
4.7
System  power
•  Instantaneous  change  in  energy
P(t) = dE(t)/dt = ẋ(t)^T Q x(t)/2 + x(t)^T Q ẋ(t)/2
= (x(t)^T A^T + u(t)^T B^T) Q x(t)/2 + x(t)^T Q (Ax(t) + Bu(t))/2
= x(t)^T (A^T Q + QA) x(t)/2 + (u(t)^T B^T Q x(t) + x(t)^T Q B u(t))/2
•  For  autonomous  systems  or  when  u = 0  (ZIT)
P(t) = x(t)^T (A^T Q + QA) x(t)/2 = −x(t)^T R x(t)/2,   for  R = −(A^T Q + QA)

4.8
System  power
•  Power  also  a  quadratic  of  the  state Exercise:  Show  
•  Matrix  R  is  symmetric R  is  symmetric
•  If  it  is  positive  definite R > 0, i.e. x T Rx > 0 ∀x ≠ 0
then  energy  decreases  all  the  time
•  Natural  to  assume  that  in  this  case  system  is  stable

Exercise:  Compute  power  for  the  Wien  oscillator.  


For  which  values  of  k  is  R  positive  definite?

Theorem 4.1: The eigenvalues of A have negative real part if and


only if for all R = RT > 0 there exists a unique Q = Q T > 0
such that AT Q + QA = −R

4.9
Lyapunov  equation
•  Lyapunov  equation

AT Q + QA = −R
•  A  and  R  known,  linear  equation  in  unknown  Q
•  Can  be  re-written  as  Âq = r,  where  q  collects  the  elements  of  Q,  r  the  elements  of  R,  and  Â  is  built  from  the  elements  of  A
•  Because  Q  and  R  are  symmetric:  n(n+1)/2  equations  in  n(n+1)/2  unknowns
•  Fact  2.11  →  the  equation  has:
–  A  unique  solution  if  Â  is  non-singular
–  Multiple  solutions  or  no  solutions  if  Â  is  singular

4.10
Lyapunov  functions
•  Linear  version  of  the  Lyapunov  Theorem  (Notes  7)
•  Possible  to  solve  A^T Q + QA = −R  efficiently  (e.g.  Matlab)
•  For  any  R = R^T > 0,  solving  the  Lyapunov  equation  for  the  unknown  Q  allows  us  to  determine  the  stability  of  ẋ(t) = Ax(t)
–  Unique  positive  definite  solution  →  Asymptotically  stable
–  No  solution,  multiple  solutions,  or  a  non-positive  definite  solution  →  Not  asymptotically  stable
•  The  resulting  energy-like  function  V: R^n → R,  V(x) = (1/2) x^T Q x,  is  known  as  a  Lyapunov  function
Exercise:  Why  is  the  Lyapunov  equation  linear?  Is  the  Lyapunov  function  linear?
4.11
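A sketch of the Matlab computation (Control System Toolbox): lyap(A,Q) solves AX + XA' + Q = 0, so the equation A^T Q + QA = −R used here corresponds to the call below; the matrix A is the RLC example, chosen purely for illustration:

A = [0 2; -1 -3];            % example A with eigenvalues -1, -2
R = eye(2);                  % any R = R' > 0
Q = lyap(A', R);             % solves A'*Q + Q*A + R = 0, i.e. A'Q + QA = -R
eig(Q)                       % positive => asymptotically stable
norm(A'*Q + Q*A + R)         % residual, ~0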
Input-­‐‑State-­‐‑Output  relations
•  Investigate  the  effect  of
–  Input  on  state
–  State  on  output
•  Two  fundamental  questions
1.  Can  I  use  inputs  to  “drive”  state  to  desired  value
2.  Can  I  infer  what  the  state  is  by  looking  at  output
•  Answer  to  1.  à  Controllability
•  Answer  to  2.  à  Observability
•  Answers  hidden  in  structure  of  matrices  A,  B,  and  C

4.12
Controllability
•  Consider  a  linear  system
dx
(t) = Ax(t) + Bu(t)
dt x ∈R n ,u ∈R m , y ∈R p
y(t) = Cx(t) + Du(t)

Definition: The system is called controllable over [0, t] if for all initial conditions x(0) = x0 ∈ R^n and all terminal conditions x1 ∈ R^n there exists an input u(·): [0, t] → R^m such that x(t) = x1

4.13
Observations
x0 , x1
•  In  other  words:  For  any                        we  can  find                  
u(⋅) : [0,t] →  m such  that
t

x1 = e At x0 + ∫ e A(t−τ ) Bu(τ )dτ


0
•  Observations:
x0 x1
–  Input  can  drive  the  state  from            to            in  time  t,  
x1
but  not  necessarily  keep  it  at            afterwards
–  Input  to  state  relation,  outputs  play  no  role  and  
can  be  ignored  for  the  time  being
–  Definition  generalizes  to  other  systems  (nonlinear,  
time  varying,  etc.)  but  more  care  is  needed

4.14
Observations
Fact 4.1: The
n
system is controllable over [0, t] if and only if for
all x1 ∈ there exists an input u(•) : [0,t] →  m such that
x(t ) = x1 starting at x(0) = 0

Proof: Exercise:  Prove  “only  if”  part
If:  To  drive  the  system  from  x(0)=x0  to  x(t)=x 1  use  input  that  drives  
it  from x (0) = 0 to x (t) = x1 − e x0
At



Fact 4.2: The system is controllable over [0, t] if and only if for

all x0 ∈R n there exists an input u(•) : [0,t] →  m such that

= 0 starting at
x(t) x(0) = x0

Proof: Exercise:  Prove  “only  if”  part
If:  To  drive  the  system  from  x(0)=x0  to  x(t)=x1  use  input  that  drives  
x (0) = x0 − e − At x1 to x (t) = 0
it  from
4.15
Controllability  gramian
Given  time  t,  define  the  controllability  gramian
W_C(t) = ∫_0^t e^{Aτ} B B^T e^{A^T τ} dτ ∈ R^{n×n}
Exercise:  Show  that  W_C(t) = W_C(t)^T ≥ 0
Fact 4.3: The system is controllable over [0, t] if and only if W_C(t) is invertible
Proof:  If.  Drive  the  system  from  x0 ∈ R^n  to  x1 ∈ R^n  in  time  t.  By  Fact  4.1,  assume  x0 = 0  and  select
u(τ) = B^T e^{A^T (t−τ)} W_C(t)^{−1} x1,   τ ∈ [0, t]
Exercise:  Complete  the  “if”  part

4.16
Controllability  gramian
Only  if:  If  WC(t)  is  not  invertible  then  (Fact  2.12,  2.17)  
z ∈ n
there  exists                          with                    such  that
z≠0
t

WC (t)z = 0 ⇔ z WC (t)z = 0 ⇔ ∫ z e BB e z d τ = 0
T T Aτ T AT τ
0
t
2

⇔ ∫ z e B d τ = 0 ⇔ z e B = 0 for all τ ∈[0,t]
T Aτ T Aτ

0 t


Therefore z T x(t) = z T e A(t−τ ) Bu(τ ) d τ = 0
0

and  only  x(t)  orthogonal  to  z  can  be  reached  from


x(0)=0.  By  Fact  4.1,  the  system  is  not  controllable

4.17
Controllability  test
Define  the  controllability  matrix

P = [B AB A2 B  An−1B] ∈R n×nm


Theorem 4.2: The system is controllable over [0, t] if and only
if the rank of P is n

The  rank  of  P  is  at  most  n  since  it  has  n  rows  (Fact  2.5)
Proof:  We  know  that  the  system  is  controllable  over  [0, t]  if  and  only  if  W_C(t)  is  invertible.  W_C(t)  is  invertible  if  and  only  if,  for  z ∈ R^n,
W_C(t)z = 0  ⇔  z = 0
(else  W_C(t)  has  0  as  an  eigenvalue,  Fact  2.17).  If  we  can  show  that
W_C(t)z = 0  ⇔  P^T z = 0
this  implies  that  P^T  has  rank  n  if  and  only  if  W_C(t)  is  invertible.

4.18
Controllability  test:  Proof
As  in  the  proof  of  Fact  4.3  we  can  show  that
T AT τ
WC (t)z = 0 ⇔ B e z = 0 for all τ ∈[0,t]
T
By  Taylor  series,  the  last  part  holds  if  and  only  if  BTeA τz  and  all  its  
derivatives  at  τ=0  are  equal  to  zero,  in  other  words  
T AT τ d T AT τ

B e z = B T
z = 0 B e z = BT AT z = 0
τ =0 dτ τ =0
and  so  on,  until
d n−1 T AT τ
B e z = B T
(A ) z=0
n−1 T


dτ n−1
τ =0
Higher  derivatives  (involving  An,  An+1,  etc.)  are  then  automatically  zero  
by  the  Cayley-­‐‑Hamilton  Theorem  2.1.  Summarizing


W C
(t)z = 0 ⇔ P T
z=0
and  the  system  is  controllable  if  and  only  if  P  has  rank  n.

4.19
Example:  OpAmp  circuit
Ideal amplifier:
i1 (t) = i0 (t) + iC (t) + iL (t)
vin (t) vC (t) dvC (t)
⇒ = +C + iL (t)
R1 R0 dt
iL (t) L
dvC (t) 1 1 1
⇒ = − iL (t) − vC (t) + vin (t) +
vC (t)
dt C R0C R1C -
iC (t)
C
diL (t) diL (t) vC (t) i0 (t)
L = vC (t) ⇒ = R1 R0
dt dt L i1 (t)

v0 (t) = −vC (t) +


+
+
vin (t) − v0 (t)
-
4.20
Example:  OpAmp  circuit
The  state  space  matrices  are:
A = [0  1/L;  −1/C  −1/(R0 C)],   B = [0;  1/(R1 C)],   C = [0  −1]
Note  that  the  input  affects  only  one  of  the  two  states  directly.
It  can  use  this  state  to  influence  the  second  state  through  the  dynamics  encoded  in  the  matrix  A.
P = [0  1/(R1 LC);  1/(R1 C)  −1/(R0 R1 C^2)]   ⇒   det(P) = −1/(R1^2 L C^2) ≠ 0

4.21
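A numerical version of this test (a sketch; the component values R1 = R0 = 1, L = 1, C = 1 are assumed here purely for illustration):

R1 = 1; R0 = 1; L = 1; C = 1;      % assumed example values
A = [0 1/L; -1/C -1/(R0*C)];
B = [0; 1/(R1*C)];
P = ctrb(A, B);                    % controllability matrix [B A*B]
rank(P)                            % = 2 = n, so the system is controllable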
Observations
•  Easy  test  for  controllability
•  Requires  matrix  multiplications  and  rank  test,  instead  
of  integration  of  matrix  exponential
•  Proof  of  Theorem  4.2  implies  the  following
n
Corollary 4.1: The set of x1 ∈ for which ∃u(⋅) : [0,t] → 
m

such that x(t) = x1 starting at x(0) = 0 is equal to Range(P)

Fact 4.4: WC(t) is invertible for some t > 0


Exercise: Prove Fact 4.4
if and only if it is invertible for all t > 0.
•  Roughly  speaking,  if  the  system  is  controllable  can  
get  from  any  state  to  any  other  state  as  fast  as  we  like
•  The  faster  we  go,  the  more  the  “energy”  and  the  
bigger  the  inputs  we  will  need

4.22
Minimum  energy  inputs
Consider  as  the  “energy”  of  the  input  the  quantity
∫_0^t u(τ)^T u(τ) dτ = ∫_0^t ‖u(τ)‖^2 dτ
In  our  example  of  p.  4.20
∫_0^t ‖u(τ)‖^2 dτ = R1 ∫_0^t v_in(τ) i1(τ) dτ = R1 · (energy provided by v_in)
Theorem 4.2: Assume that the system is controllable. Given x1 ∈ R^n and t > 0, the input that drives the system from x(0) = 0 to x(t) = x1 and has the minimum energy is given by
u_m(τ) = B^T e^{A^T (t−τ)} W_C(t)^{−1} x1,   for τ ∈ [0, t]

4.23
Minimum  energy  inputs:  Proof
Proof:  In  the  proof  of  Fact  4.3  we  saw  that  the  proposed  
um(.)  drives  the  system  from  x(0)=0  to  x(t)=x1.  Its  energy  is  
t t
u (τ )T u (τ ) d τ = x TW (t) −1 e A(t−τ ) BBT e AT (t−τ )W (t) −1 x d τ

0
m m ∫0
1 C C 1

⎛ t

= x1 WC (t) ⎜ ∫ e
T −1 A(t−τ )
BB e T AT (t−τ )
d τ ⎟ WC (t) −1 x1 = x1TWC (t) −1 x1
⎝0 ⎠
To  show  that  this  energy  is  minimum,  consider  any  other  
input  u(.)  that  drives  the  state  from  x(0)=0  to  x(t)=x1.  u(.)  can  
be  written  as  u(τ) = u_m(τ) + û(τ).  So  its  energy  will  be
t t

∫ τ τ τ = ∫ m τ + τ (um (τ ) + û(τ )) d τ
T T
u( ) u( ) d (u ( ) û( ))
0 0
t

= ∫ (um (τ )T um (τ ) + um (τ )T û(τ ) + û(τ )T um (τ ) + û(τ )T û(τ )) d τ


0

4.24
Minimum  energy  inputs:  Proof
t t

= ∫ e A(t−τ ) Bu(τ ) d τ = ∫ e A(t−τ ) B(um (τ ) + û(τ )) d τ


But x(t)
0
t
0
t t

= ∫ e A(t−τ ) Bum (τ ) d τ + ∫ e A(t−τ ) Bû(τ ) d τ = x1 + ∫ e A(t−τ ) Bû(τ ) d τ


0 0 0
t


Since x(t)=x1, we have that e A(t−τ ) Bû(τ ) d τ = 0
t t
0
t

∫0 m
and ∫0 ∫0 1 C e Bû(τ ) d τ = 0
−1 A(t−τ )
u (τ ) T
û(τ ) d τ = û(τ ) T
um
(τ ) d τ = x T
W (t)

Therefore
t t t

∫ τ τ τ ∫ τ τ τ ∫ m um (τ ) d τ
τ
−1
u( ) T
u( ) d = x1
T
WC
(t) x1
+ û( ) T
û( ) d ≥ u ( ) T

0 0 0

4.25
Observability
dx
(t) = Ax(t) + Bu(t)
dt x ∈R n ,u ∈R m , y ∈R p
y(t) = Cx(t) + Du(t)
Definition: The system is called observable over [0, t] if given u(·): [0, t] → R^m and y(·): [0, t] → R^p we can uniquely determine the value of x(·): [0, t] → R^n

•  Again time of observation, t, will turn out to play no role


•  Inputs play little role, just carried along
•  Generalizations (to e.g. non-linear systems) possible,
but care is needed

4.26
Initial  state  observability
t
•  Recall  that x(t) = e At x + e A(t−τ ) Bu(τ )dτ
0 ∫0

u(⋅) : [0,t] → 
•  Therefore  to  infer                                                          given                                                      
x(⋅) : [0,t] →  n
m

it  suffices  to  infer  the  initial  condition  x(0)=x0


•  Assume  that  two  initial  conditions,  x0  and          ,  under  the   x̂0
u(⋅) : [0,t] →  m
same  input                                                                lead  to  the  same  output
y(⋅) : [0,t] →  p , i.e. ∀τ ∈[0,t]
τ τ
Ce Aτ x0 + C ∫ e A(τ −s) Bu(s)d s + Du(τ ) = Ce Aτ x̂0 + C ∫ e A(τ −s) Bu(s)d s + Du(τ )
0 0

•  Then Ce Aτ (x0 − x̂0 ) = 0, for all τ ∈[0,t]


x ∈R n Ce Aτ x = 0 ∀τ ∈[0,t]
•                         such  that                                                                        called  unobservable  
4.27
Unobservable  states
•  x ∈ R^n  is  unobservable  if  and  only  if  Ce^{Aτ} x = 0  for  all  τ ∈ [0, t]
•  Note  that  if  x = 0  then  Ce^{Aτ} x = 0,  so  x = 0  is  an  unobservable  state
•  The  system  is  observable  if  and  only  if  x = 0  is  the  only  unobservable  state
Exercise: Show that the unobservable states form a subspace
•  In  this  case  the  initial  state  x0  is  uniquely  determined  
by  the  zero  input  response  since  
Ce Aτ x0 = Ce Aτ x̂0 for all τ ∈[0,t]
⇔ Ce Aτ (x0 − x̂0 ) = 0 for all τ ∈[0,t]
⇔ (x0 − x̂0 ) unobservable
⇒ (x0 − x̂0 ) = 0 ⇒ x0 = x̂0
4.28
Observability
•  Note  that  two  initial  conditions  that  under  the  same  
input  lead  to  the  same  output  differ  by  an  
unobservable  state
•  By  a  Taylor  series  argument,  Ce^{Aτ} x = 0  for  all  τ ∈ [0, t]  if  and  only  if  all  its  derivatives  at  τ = 0  are  equal  to  0:
Ce^{Aτ} x |_{τ=0} = Cx = 0,   (d/dτ) Ce^{Aτ} x |_{τ=0} = CAx = 0,   …,   (d^{n−1}/dτ^{n−1}) Ce^{Aτ} x |_{τ=0} = CA^{n−1} x = 0
Exercise: Show this
•  By  the  Cayley-Hamilton  Theorem  2.1,  if  CA^k x = 0  for  k = 0, 1, …, n−1  then  CA^k x = 0  for  all  k > n−1
Exercise: Show this

4.29
Observability
•  Therefore  a  state  x  is  unobservable  if  an  only  if Qx = 0
⎡ C ⎤
⎢ ⎥
⎢ CA ⎥ ∈R np×n
Q=
⎢  ⎥
⎢ n−1 ⎥
⎣ CA ⎦

•  If  Q  is  full  rank  then  the  only  unobservable  state  is  0


•  In  this  case,  system  is  observable  since
Ce Aτ (x0 − x̂0 ) = 0 for all τ ∈[0,t] ⇔ x0 = x̂0

Theorem 4.3: The system is observable over [0, t] if and only if


the rank of the matrix Q is n

4.30
Example:  OpAmp  circuit  (p.  4.20)

•  Q = [0  −1;  1/C  1/(R0 C)]   ⇒   det(Q) = 1/C ≠ 0,  therefore  the  system  is  observable
•  We  are  only  measuring  one  of  the  two  states  directly
•  We  can  infer  the  value  of  the  other  state  by  its  effect  on  the  
measured  state  through  the  dynamics  encoded  in  A
•  Roughly  speaking  use  measured  state  +  all  its  derivatives  to  
deduce  the  value  of  the  unmeasured  states

4.31
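The corresponding numerical observability check (a sketch, again with illustrative component values):

R0 = 1; L = 1; Cap = 1;                % assumed example values (Cap: capacitance)
A = [0 1/L; -1/Cap -1/(R0*Cap)];
C = [0 -1];                            % output map y = -v_C
Q = obsv(A, C);                        % observability matrix [C; C*A]
rank(Q)                                % = 2 = n, so the system is observable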
Observability  Gramian
One  can  also  construct  an  observability  gramian
W_O(t) = ∫_0^t e^{A^T τ} C^T C e^{Aτ} dτ ∈ R^{n×n}
Exercise:  Show  that  W_O(t) = W_O(t)^T ≥ 0


Fact 4.5: The system is observable over [0, t] if and only
if WO(t) is invertible. If the system is observable over some
[0, t] then it is also observable over all [0, t’]

Notes
–  Checking  the  rank  of  matrix  Q  is  easier Corollary 4.2: Set of
–  Rank  of  Q  at  most  n  (n  columns) unobservable states
–  Time  of  observation  is  immaterial equal to Null(Q)

4.32
Output  derivative  interpretation
 =
Consider  differentiating  y(t)  along x(t) Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)
y (t) = Cx(t)
 + Du(t)
 = CAx(t) + CBu(t) + Du(t)

y(t) = CA2 x(t) + CABu(t) + CBu(t)
  + Du(t)


⎡ y(0) ⎤ ⎡ ⎤ ⎡ ⎡ ⎤
⎢ ⎥ ⎢ C D 0  0 ⎤ ⎢ u(0) ⎥
⎥ ⎢ ⎥
⎢ y (0) ⎥ ⎢ CA ⎥ x(0) + ⎢ CB D 
 0 ⎥ ⎢ u(0) ⎥
⎢ ⎥=⎢  ⎥ ⎢ CAB CB  0 ⎥⎢ ⎥
⎢  ⎥ ⎢ ⎢  ⎥
n−1 ⎥ ⎢ n−2 n−3 ⎥
⎢⎣ y (n−1) (0) ⎥⎦ ⎣ CA ⎦ ⎣ CA B CA B  D ⎦ ⎢ u (n−1) (0)
⎣ ⎥⎦

Y = Qx(0) + KU Y ∈R , K ∈R
np np×nm
,U ∈R nm

4.33
Output  derivative  interpretation
Y = Qx(0) + KU
Measured Q full rank Known
•  System  of  linear  equations  to  be  solved  for  x(0)
•  If  p=1, Q ∈R n×n has rank n ⇒ Q invertible

x(0) = Q −1 (Y − KU )
•  If  p>1,  more  equations  than  unknowns,  least  squares  
solution.  If  Q  has  rank  n,  pseudo-­‐‑inverse  (Fact  2.14)

( )
−1
x(0) = Q Q T
Q T (Y − KU )

4.34
But  …
•  Differentiating  measurements  is  a  bad  idea
•  Noise  gets  amplified
•  Intuition:  Sinusoidal  signal  corrupted  by  small  
amplitude,  high  frequency  noise
y(t) = a sin(ωt) + b sin(ωn t),   b ≪ a,  ωn ≫ ω
•  Signal-to-noise  ratio:  SNR = a/b ≫ 1
ẏ(t) = ωa cos(ωt) + ωn b cos(ωn t)   ⇒   SNR = aω/(b ωn) ≪ a/b
ÿ(t) = −ω^2 a sin(ωt) − ωn^2 b sin(ωn t)   ⇒   SNR = a ω^2/(b ωn^2)
•  Derivative  of  signal  soon  becomes  useless  

4.35
Observers
•  Instead  of  differentiating,  build  a  “filter”
•  Progressively  construct  an  estimate  x̂(t) ∈ R^n  of  the  state
•  Start  with  some  (arbitrary)  initial  guess  x̂(0) ∈ R^n
•  Measure  y(t)  and  u(t)
•  Update  the  estimate  according  to
(d/dt) x̂(t) = A x̂(t) + B u(t) + L [y(t) − C x̂(t) − D u(t)]
•  Mimic  the  evolution  of  the  true  state,  plus  a  correction  term
•  Gain  matrix  L
•  Error  dynamics:  e(t) = x(t) − x̂(t)   ⇒   ė(t) = (A − LC) e(t)

4.36
Observers
Theorem 4.4: If the system is observable, then L can be chosen
such that eigenvalues of (A-LC) have negative real parts.

•  In  this  case  error  system  is  asymptotically  stable


•  Error  goes  to  zero e(t) ⎯t→∞⎯⎯ →0
State  estimate  converges  to  true  state x (t) ⎯⎯⎯ → x(t)
t→∞
• 
•  Convergence  arbitrarily  quick  by  choice  of  L
•  In  presence  of  noise  “transients”  may  be  bad
•  Kalman  filter:  Optimal  trade-­‐‑off  for  L  if
–  State  and  measurement  equations  
corrupted  by  noise
–  System  linear  and  noises  Gaussian

4.37
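Theorem 4.4 is constructive: one way to compute such an L in Matlab is pole placement on the dual system, L = place(A',C',p)'. A minimal sketch (the matrices and desired eigenvalues below are illustrative choices, not from the notes):

A = [0 2; -1 -3];            % example system
C = [1 0];                   % measure the first state only
p = [-5 -6];                 % desired observer eigenvalues (arbitrary)
L = place(A', C', p)';       % observer gain, eig(A - L*C) = p
eig(A - L*C)                 % check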
Kalman  decomposition
There  exists  an  invertible  change  of  coordinates  T ∈ R^{n×n}  such  that:
x̂(t) = Tx(t) = [x̂1(t); x̂2(t); x̂3(t); x̂4(t)]
x̂1 ← controllable & observable,   x̂2 ← controllable & unobservable,
x̂3 ← uncontrollable & observable,   x̂4 ← uncontrollable & unobservable
Â = TAT^{−1} = [Â11  0  Â13  0;  Â21  Â22  Â23  Â24;  0  0  Â33  0;  0  0  Â43  Â44],   B̂ = TB = [B̂1; B̂2; 0; 0]
Ĉ = CT^{−1} = [Ĉ1  0  Ĉ3  0]
4.38
Kalman  decomposition
⎛ ⎡ Â 0 ⎤ ⎡ B̂1 ⎤⎞ ⎛ ⎡ Â Â13 ⎤ ⎡ ⎞
⎜ ⎢ 11 ⎥,⎢ ⎥⎟ controllable, ⎜ ⎢ 11 ⎥ , Ĉ Ĉ ⎤⎟ observable
Â33 ⎥ ⎢⎣
3 ⎥
⎜ ⎢ Â21 Â22 ⎥ ⎢ B̂2 ⎥⎟⎠ ⎜⎢ 0 1
⎦⎟
⎝⎣ ⎦ ⎣ ⎦ ⎝⎣ ⎦ ⎠

u(t) B̂1 C+O Ĉ1 y(t)


+
x̂1 (t)
B̂2 Ĉ3
Â21 Â13
C+O Â23 C+O
x̂2 (t) x̂3 (t)
Â24 Â43

C+O
x̂4 (t)
4.39
Stabilizability  and  detectability

Definition: The system is detectable if all eigenvalues of Â22 and Â44 in the Kalman decomposition have negative real part.
Can  design  an  observer  for  the  observable  part  ([Â11  Â13; 0  Â33], [Ĉ1  Ĉ3])  with  the  overall  observation  error  decaying  to  zero.
Definition: The system is stabilizable if all eigenvalues of Â33 and Â44 in the Kalman decomposition have negative real part.
Can design a controller for the controllable part ([Â11  0; Â21  Â22], [B̂1; B̂2]) which ensures the overall system is asymptotically stable.
4.40
Signal-­‐‑  und  Systemtheorie  II  
D-­‐‑ITET,  Semester  4  
 
Notes  5:  Continuous  LTI  
systems,  frequency  domain
John Lygeros

Automatic  Control  Laboratory,  ETH  Zürich


www.control.ethz.ch
Laplace  Transform
•  Convert  time  function  f(t)  to  a  complex  variable  function  F(s)

f: R → R   ⟷   F: C → C
F(s) = L{f(t)} = ∫_0^∞ f(t) e^{−st} dt      (inverse  transform:  f(t) = L^{−1}{F(s)})
•  Recall  that  we  assume  that  f(t)=0  for  all  t<0  (p.  0.22)
•  Can  also  be  defined  for  matrix  valued  functions
f: R → R^{n×m},   F: C → C^{n×m}
by  taking  the  integral  element  by  element

5.2
Laplace  Transform:  Properties
Assumption: The function f(t) is such that the integral can be
defined, i.e. f (t)e − st ⎯t→∞
⎯⎯ → 0 “quickly enough”

{ }
•  Linearity L a1 f (t) + a2 g(t) = a1F(s) + a2G(s)

{ −at
}
•  s  shift L e f (t) = F(s + a)

⎧d ⎫
•  Time  derivative L ⎨ f (t) ⎬ = sF (s) − f (0)
⎩ dt ⎭
{ }
•  Convolution L ( f * g)(t) = F (s)G(s)
Exercise:  Prove  these  
properties  using  the   Recall discussion
definition on p. 0.22
5.3
Laplace  Transform:  Useful  functions
A.  Dirac  function:  L{δ(t)} = 1
B.  Step  function:  L{1} = 1/s
C.  Exponential  function:  L{e^{−at}} = 1/(s + a)
D.  Sinusoidal  functions:  L{sin(ωt)} = ω/(s^2 + ω^2),   L{cos(ωt)} = s/(s^2 + ω^2)
Exercise:  Prove  C  using  the  s  shift  property.  Prove  D  using  C.  Prove  B  using  A  and  the  time  derivative  property.
Initial value theorem: lim_{t→0} f(t) = lim_{s→∞} sF(s)
Final value theorem: lim_{t→∞} f(t) = lim_{s→0} sF(s)
(whenever all limits exist)

5.4
Inverse  Laplace  Transform
•  Defined  as  an  integral Exercise:  Compute  the  
Laplace  transform  of:
•  Laplace  transforms  of  interest  here  
f (t) = t, f (t) = t n ,
will  be  proper,  rational  functions
–  Ratio  of  two  polynomials  in  s f (t) = te −at ,
–  Degree  of  numerator  less  than  or f (t) = e −at cos(ω t),
equal  to  degree  of  denominator
d2
•  In  this  case  use  partial  fractions g(t) = 2 f (t),
dt
•  Example: f (t) = sin(ω t + θ )

L^{−1}{1/(s^2 + 3s + 2)} = L^{−1}{1/((s + 1)(s + 2))} = L^{−1}{1/(s + 1) − 1/(s + 2)}
= L^{−1}{1/(s + 1)} − L^{−1}{1/(s + 2)} = e^{−t} − e^{−2t}
5.5
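Partial fraction expansions of rational functions can also be computed numerically with Matlab's residue; a sketch for the example above:

num = 1; den = [1 3 2];            % 1/(s^2 + 3 s + 2)
[r, p, k] = residue(num, den)      % r = [-1; 1], p = [-2; -1], k = []
% so 1/((s+1)(s+2)) = -1/(s+2) + 1/(s+1), i.e. f(t) = e^(-t) - e^(-2t)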
Back  to  LTI  systems
Time domain: dx(t)
= Ax(t) + Bu(t)
dt
y(t) = Cx(t) + Du(t)
x(t) ∈R n , u(t) ∈R m , y(t) ∈R p , t ∈R
Take Laplace Transform
⎧ dx(t) ⎫
L⎨ { }
⎬ = L Ax(t) + Bu(t) ⇒ sX (s) − x(0) = AX (s) + BU (s)
⎩ dt ⎭
−1 −1
Laplace domain: X (s) = (sI − A) x0
+ (sI − A) BU (s)
Y (s) = CX (s) + DU (s)
X (s) ∈C n , U (s) ∈C m , Y (s) ∈C p , s ∈C

5.6
Comparison  with  time  domain
•  Time  domain  solution
t

x(t) = e At x0 + ∫ e A(t−τ ) Bu(τ ) d τ


0 Convolution
•  Take  Laplace  transform
⎧⎪ t A(t−τ ) ⎫⎪
{ } { }
L x(t) = L e At x0 + L ⎨ ∫ e
⎪⎩ 0
Bu(τ ) d τ ⎬
⎪⎭
⇒ X (s) = L {e } At
{ }
x0 + L e At BU (s)
•  Comparing

{ }
L e At = (sI − A) −1 ∈C n×n

5.7
Example  (p.  3.18)
d/dt [v_C(t); i_L(t)] = [0  1/C;  −1/L  −R/L] [v_C(t); i_L(t)] + [0; 1/L] v_s(t)
[Circuit sketch: series RLC circuit driven by v_s(t)]
•  For  R=3,  C=0.5,  L=1
A = [0  2; −1  −3]   ⇒   (sI − A) = [s  −2; 1  s + 3]
•  Laplace  transform  of  the  state  transition  matrix
(sI − A)^{−1} = (1/(s^2 + 3s + 2)) [s + 3  2; −1  s]
5.8
Example:  Transition  Matrix
⎧⎪ ⎡ s + 3 2 ⎤ ⎫⎪
−1
{ −1
Φ(t) = L (sI − A) = L ⎨ 2 } −1 1

⎪⎩ s + 3s + 2 ⎣ −1 s ⎦ ⎪⎭
⎥⎬

⎧⎡ s+3 2 ⎤⎫
⎪⎢ 2 ⎥⎪
⎪ ⎪
= L−1 ⎨ ⎢ s + 3s + 2 s + 3s + 2 ⎥ ⎬
2

⎪ ⎢⎢ −1 s ⎥⎪

⎪ ⎣ s 2 + 3s + 2 s 2 + 3s + 2 ⎦ ⎪
⎩ ⎭
⎧⎡ 2 1 2 2 ⎤⎫
⎪⎢ − − ⎥⎪
⎪ ⎪
= L−1 ⎨ ⎢ s + 1 s + 2 s + 1 s + 2 ⎥ ⎬
⎪ ⎢ −1 + 1 −1
+
2 ⎥⎪
⎪⎩ ⎢⎣ s + 1 s + 2 s + 1 s + 2 ⎥⎦ ⎪⎭
⎡ 2e −t − e −2t 2e −t − 2e −2t ⎤
Φ(t) = ⎢ ⎥ As  before!
−t
⎢⎣ −e + e
−2t
−e −t + 2e −2t ⎥⎦ (p.  3.19)

5.9
Example:  Step  transition  (p.  3.21)
ZST  with  input                                                                    (recall  that                                                            )
vs (t) = V for t ≥ 0 vs (t) = 0 for t < 0
Laplace  transform   Vs (s) = V s
⎡ 2 ⎤
⎢ ⎥
−1 ⎢ s(s + 1)(s + 2) ⎥
X (s) = (sI − A) BVs (s) = ⎢ ⎥ V
s
⎢ ⎥
⎢⎣ s(s + 1)(s + 2) ⎥⎦
⎡ 1 2 1 ⎤
⎢ − + ⎥
= ⎢ s s + 1 s + 2 ⎥V No  need  to  compute  
⎢ 1 1 ⎥ entire  (sI-­‐‑A)-­‐‑1,  just  
⎢ − ⎥ second  column
⎣ s + 1 s + 2 ⎦
⎡ −2e −t + e −2t + 1 ⎤
x(t) = L −1
{ }
X (s) = ⎢
e −t − e −2t
⎥V
⎢⎣ ⎥⎦
5.10
Example:  Step  transition
⎡ 2 ⎤
⎢ ⎥
⎢ s(s + 1)(s + 2) ⎥V
X (s) = ⎢ ⎥
s
⎢ ⎥
⎢⎣ s(s + 1)(s + 2) ⎥⎦
Initial  value  theorem x(0) = limt→0 x(t) = lim s→∞ sX (s)
⎡ 2 ⎤
⎢ ⎥
x(0) = lim s→∞ ⎢⎢
(s + 1)(s + 2) ⎥V = ⎡ 0 ⎤ (ZST)
⎥ ⎢ ⎥
s ⎣ 0 ⎦
⎢ ⎥
⎢⎣ (s + 1)(s + 2) ⎥⎦
⎡ ⎤

2

(s + 1)(s + 2) ⎥ ⎡ V ⎤
Final  value  theorem limt→∞ x(t) = lim s→0 ⎢⎢ ⎥ V =⎢ ⎥
s ⎣ 0 ⎦
⎢ ⎥
⎢⎣ (s + 1)(s + 2) ⎥⎦

5.11
Example:  Sinusoidal  input
ZST  with  input vs (t) = V sin(t)
Laplace  transform   Vs (s) = V s 2 + 1
⎡ 2 ⎤
⎢ 2 ⎥
−1 ⎢ (s + 1)(s + 1)(s + 2) ⎥
X (s) = (sI − A) BVs (s) = ⎢ ⎥V
s
⎢ 2 ⎥
⎢⎣ (s + 1)(s + 1)(s + 2) ⎥⎦
⎛ −3s + 1 1 2 ⎞
VC (s) = X 1 (s) = ⎜ 2 + − ⎟ V
⎝ 5(s + 1) s + 1 5(s + 2) ⎠

3V V −t 2 −2t
vC (t) = − cos(t) + sin(t) +Ve − Ve
5 5 5
External  input   Eigenvalue  
response response
5.12
Example:  Sinusoidal  input
•  The  system  is  stable,  so  as t → ∞
2 Transient  
Ve − Ve −2t → 0
−t
solution
5
3V V Steady  state  
vC (t) → − cos(t) + sin(t) solution
5 5
•  In  general,  for  stable  systems  with  sinusoidal  input  steady  
state  solution  is  also  sinusoidal  with
–  Same  frequency  as  input
–  Amplitude  and  phase  determined  by  system  matrices

5.13
Transfer  function
•  Consider  ZSR X (s) = (sI − A) −1 BU (s)
Y (s) = CX (s) + DU (s)

(
Y (s) = C(sI − A) −1 B + D U (s) )
•  Transfer  function
−1
G(s) = C(sI − A) B + D ∈C p×m

•  Summarizes  system  input-­‐‑output  behavior


Y (s) = G(s)U (s)
•  In  the  RLC  example
–  If  we  measure  y = i_L:  C = [0  1],  D = 0  ⇒  G(s) = s/((s + 1)(s + 2))
–  If  we  measure  y = v_C:  C = [1  0],  D = 0  ⇒  G(s) = 2/((s + 1)(s + 2))
5.14
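A sketch of how these transfer functions are obtained in Matlab from the state space data (Control System Toolbox):

A = [0 2; -1 -3]; B = [0; 1]; D = 0;
G_iL = tf(ss(A, B, [0 1], D))     % transfer function to y = i_L: s/(s^2+3s+2)
G_vC = tf(ss(A, B, [1 0], D))     % transfer function to y = v_C: 2/(s^2+3s+2)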
Transfer  function  structure
•  System  called
–  Single  input,  single  output  (SISO)  if  m=p=1
–  Multi-­‐‑input,  multi-­‐‑output  (MIMO)  if  m  or  p  >1
•  SISO  à  B,  CT  vectors  of  dimension  n,  D  a  real  number

Cadj(sI − A)B Cadj(sI − A)B + D det(sI − A)


G(s) = +D= ∈C
det(sI − A) det(sI − A)

•  All  entries  are  rational  functions  of  s


System zeros
•  For  SISO  systems
(s − z1 )(s − z2 )(s − zk )
G(s) = ∈C
(s − p1 )(s − p2 )(s − pn )
System poles
5.15
Proper  and  strictly  proper  transfer  function

•  Consider  SISO  system  described  by  rational  G(s)
•  The  transfer  function  is  called  proper  if
numerator  degree  ≤  denominator  degree

Fact 5.1: SISO transfer functions arising from state space



descriptions of LTI systems are always proper

•  Transfer  function  is  called  strictly  proper  if


numerator  degree  <  denominator  degree
Fact 5.2: SISO transfer functions arising from state space

descriptions of LTI systems are strictly proper if and only if D=0
•  Input  affects  output  only  through  system  dynamics
5.16
Proper  and  strictly  proper  transfer  function

(s − z1 )(s − z2 )(s − zk )
G(s) = ∈C
(s − p1 )(s − p2 )(s − pn )
•  Proper  à  k      n,  strictly  proper  à  k<n

•  If  G(s)  is  strictly  proper,  expanding  polynomials

b1s n−1 + b2 s n−2 + + bn


G(s) =
s n + a1s n−1 + a2 s n−2 + + an
•  Some  bi  may  be  zero
•  If  no  pole-­‐‑zero  cancellations  the  denominator  is  the  
characteristic  polynomial  of  A
•  i.e.  poles  are  the  eigenvalues  of  A
•  For  simplicity  we  will  mostly  consider  SISO,  strictly  
proper  transfer  functions  in  the  rest  of  these  notes
5.17
Transfer  function  and  impulse  response
t

•  SISO  systems,  ZSR y(t) = C ∫ e


A(t−τ )
Bu(τ ) d τ + Du(t)
0

•  Output  impulse  response:  ZSR  with  u(t)=δ(t)


K (t) = Ce At B + Dδ (t) (c.f. Notes 3)
•  Taking  Laplace  transform

{ }
L{K (t)} = L Ce At B + Dδ (t) = C(sI − A) −1 B + D

•  Transfer  function  is  Laplace  transform  of  output  


impulse  response
•  ZSR:  impulse  response-­‐‑input  convolution y(t) = (K *u)(t)
{ } { }
Y (s) = L (K *u)(t) = L K (t) U (s) = G(s)U (s)

5.18
Transfer  function  and  stability
•  From  our  knowledge  of  time  domain  solutions
–  If  poles  are  distinct  system  is
•  Asymptotically  stable  if  and  only  if Re[ pi ] < 0,∀i
•  Stable  if  and  only  if ∀i Re[ pi ] ≤ 0
•  Unstable  if  and  only  if ∃i : Re[ pi ] > 0
–  If  poles  are  repeated  system  is
•  Asymptotically  stable  if  and  only  if Re[ pi ] < 0,∀i
•  Unstable  if ∃i : Re[ pi ] > 0
•  If                                                                                                                    system  may  be  stable  
∀i Re[ pi ] ≤ 0 and ∃i Re[ pi ] = 0
or  unstable,  depending  on  partial  fraction  expansion
(cf.  “depending  on  eigenvectors”  of  matrix  A,  Notes  3)
•  Provided  there  are  no  pole  zero  cancellations

5.19
Block  diagrams

G1(s) G2(s) ⇔ G2(s)G1(s)

G1(s)
+
+
+
⇔ G2(s)+G1(s)
G2(s)

+ G1(s)
+-
⇔ [1+G1(s)G2(s)]-1G1(s)
G2(s)
Caution:  MIMO  transfer  functions  in  general  matrices!
5.20
Block  diagrams

u y
+
K1(s) + K2(s) G(s)
-

K3(s)
Exercise:  In  the  
−1
Y (s) = [1+ G(s)K 2 (s)K 3 (s)] G(s)K 2 (s)K1 (s)U (s) SISO  case,  show  
that  if  G(s)  strictly  
proper,  K1(s),  K2(s),    
•  In the SISO case: Composition or rational transfer
K3(s)  proper  then  
functions is also a rational transfer function closed  loop  
•  Properties of “closed loop” system studied using transfer  function  is  
the same tools strictly  proper

5.21
Frequency  response
•  In  RLC  example,  steady  state  response  to  sinusoidal  
input  is  sinusoidal
•  More  generally  consider  proper,  stable  SISO  system  
with  transfer  function  G(s)
•  Apply  u(t)=sin(ωt)
•  Output  settles  to  a  sinusoid  y(t) = K sin(ωt + φ)  with
–  The  same  frequency,  ω
–  Amplitude  K = |G(jω)| = √(Re[G(jω)]^2 + Im[G(jω)]^2)
–  Phase  φ = ∠G(jω) = tan^{−1}(Im[G(jω)]/Re[G(jω)])
•  Shown  by  partial  fraction  expansion  of  Y(s) = G(s)·ω/(s^2 + ω^2)

5.22
Frequency  response
•  Response  of  system  to  sinusoids  at  different  
frequencies  called  the  frequency  response
•  Frequency  response  important  because
–  Sinusoids  are  common  inputs
–  Directly  related  to  any  other  input  by  Fourier  transform
–  Frequency  response  tells  us  a  lot  about  system  behavior
–  E.g.  Will  it  be  stable  under  various  interconnections?
•  Frequency  response  usually  summarized  graphically
G( jω ) vs. ω , log  plot  
–  Bode  plots:  Log-­‐‑log  plot                                                      lin-­‐‑ ∠G( jω ) vs. ω
–  Nyquist  plot:  G(jω)  in  polar  coordinates,  parameterized  by  ω
G( jω ) vs. ∠G( jω ),
–  Nichols  chart:  Log-­‐‑lin  plot                                                                              
parameterized  by  ω  

5.23
Bode  plots  (bode.m)
log (ω ) (in rad/sec) axes
Pair  of  plots,  x-­‐‑axis  the  same                                                                  ,  y-­‐‑
( )
20log G( jω ) (in dB) and ∠G( jω ) (in degrees)

RLC example (p. 5.14)


2 s
G(s) = G(s) =
(s + 1)(s + 2) (s + 1)(s + 2)
5.24
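A sketch of how the plots on this and the next page are generated (Control System Toolbox):

G1 = tf(2, [1 3 2]);              % 2/((s+1)(s+2))
G2 = tf([1 0], [1 3 2]);          % s/((s+1)(s+2))
figure; bode(G1, G2); grid on     % Bode plots
figure; nyquist(G1, G2); grid on  % Nyquist plots (p. 5.25)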
Nyquist  plot  (nyquist.m)
Plot  of  Im[G(jω)]  vs.  Re[G(jω)]  parameterized  by  ω

RLC example (p. 5.14)


2 s
G(s) = G(s) =
(s + 1)(s + 2) (s + 1)(s + 2)

5.25
Resonance
•  Appears  in  second  order  systems  (two  poles)
•  Bode  magnitude  plot  has  maximum  at  some  frequency
•  Sinusoidal  inputs  around  this  frequency  get  amplified
•  Important  consequences  for  performance
•  Second  order  systems  very  common  in  practice
•  Example:  Simplified  suspension  model

f (t) x(t) + dx(t)


M  + kx(t) = − f (t)
x(t) Output:
position X (s) 1
=− 2
Input: F (s) Ms + ds + k
Force
Second  order  
transfer  function
5.26
Resonance
•  Second  order  transfer  functions  of  interest  look  like
Kω n2
G(s) = 2 , ωn > 0
s + 2ζω n s + ω n
2

k d 1
•  For  suspension  example: ω n = , ζ= , K =−
M 2 km k
Natural   Damping  
frequency Gain
ratio

•  Frequency  response   Kω n2
G( jω ) =
(ω ) + (2ζω ω )
2 2
Kω 2 2
−ω 2

G( jω ) = n n n

(ω n2 − ω 2 ) + j(2ζω nω ) ⎛ 2ζω nω ⎞
∠G( jω ) = −tan ⎜ 2 −1
2⎟
⎝ ωn − ω ⎠
5.27
Resonance
1.  For  stability  need  ζ ≥ 0
2.  For  ζ ≥ 1  poles  real  (over-damped  system)
3.  For  ζ = 1  poles  real  and  equal  (critical  damping)
4.  For  0 < ζ < 1  poles  complex  (under-damped  system)
5.  For  ζ = 0  poles  imaginary  (undamped  system)
6.  For  ζ ≥ 1/√2  the  magnitude  Bode  plot  is  decreasing  in  ω
7.  For  0 ≤ ζ < 1/√2  the  magnitude  Bode  plot  has  a  maximum  at  ω = ωn √(1 − 2ζ^2)  and  |G(jω)| = K/(2ζ√(1 − ζ^2))
Exercise:  Verify  1-5
Exercise:  Take  the  derivative  of  |G(jω)|  to  verify  6-7

5.28
Example:  AFM  Resonances

5.29
Transfer  function  realization
•  Time  domain  description  à  unique  transfer  function
dx ⎫
(t) = Ax(t) + Bu(t) ⎪ −1
dt ⎬ ⎯ ⎯
→ G(s) = C(sI − A) B+D
y(t) = Cx(t) + Du(t) ⎪

•  Transfer  function  à  unique  time  domain  description??
⎧ dx
(s − z1 )(s − z2 )(s − zk ) ? ⎪ (t) = Ax(t) + Bu(t)
G(s) = ⎯⎯→ ⎨ dt
(s − p1 )(s − p2 )(s − pn ) ⎪ y(t) = Cx(t) + Du(t)

•  Given  G(s),  choice  of  A,  B,  C,  D  such  that                                                      
C(sI
                                                                 known  as  a  realization  of  G(s)
− A) −1 B + D = G(s)
•  Clearly  not  unique,  e.g.  coordinate  change x̂ = Tx, det(T ) ≠ 0

5.30
Realization:  SISO,  strictly  proper  system
•  SISO,  strictly  proper  system  G(s) = (b1 s^{n−1} + b2 s^{n−2} + ⋯ + bn)/(s^n + a1 s^{n−1} + a2 s^{n−2} + ⋯ + an)
Controllable  canonical  form:
ẋ(t) = [0  1  0  …  0;  0  0  1  …  0;  ⋮;  0  0  0  …  1;  −an  −a_{n−1}  −a_{n−2}  …  −a1] x(t) + [0; 0; ⋮; 0; 1] u(t)
y(t) = [bn  b_{n−1}  b_{n−2}  …  b1] x(t)
Observable  canonical  form:
ẋ(t) = [0  0  …  0  −an;  1  0  …  0  −a_{n−1};  0  1  …  0  −a_{n−2};  ⋮;  0  0  …  1  −a1] x(t) + [bn; b_{n−1}; b_{n−2}; ⋮; b1] u(t)
y(t) = [0  0  …  0  1] x(t)
Exercise:  Show  that  both  the  controllable  and  the  observable  canonical  forms  are  realizations  of  G(s)

5.31
Uncontrollable  and  unobservable  systems
ẋ(t) = [−1  1; 0  1] x(t) + [1; 0] u(t),  y(t) = [1  0] x(t);       ẋ(t) = [−1  0; 1  1] x(t) + [1; 0] u(t),  y(t) = [1  0] x(t)
1.  In  both  cases  the  transfer  function  is  G(s) = 1/(s + 1)
2.  Same  as  ẋ(t) = −x(t) + u(t),  y(t) = x(t) ∈ R
Exercise:  Show  points  1-5
3.  Original  state  space  system  unstable
4.  Transfer  function  poles  have  negative  real  parts!
5.  Pole-­‐‑zero  cancellation  of  term  corresponding  to  
uncontrollable/unobservable  part
6.  Can  be  shown  in  general  using  Kalman  decomposition
5.32
In  summary
•  Transfer  function  alternative  system  description  to  
state  space
•  Closely  related,  not  equivalent
•  Advantages
+  Coordinate  independent
+  Easier  to  manipulate  for  system  composition
+  Easier  to  compute  response  to  “complicated”  inputs
+  Immediate  connection  to  steady  state  sinusoidal  response
+  May  also  work  for  systems  that  do  not  have  state  space  
description  (e.g.  delay  elements)
•  Disadvantages
–  Less  natural  in  terms  of  physical  laws
–  Used  mostly  for  ZSR
–  May  contain  less  information  than  state  space  description
–  Unobservable  and  uncontrollable  parts  lost

5.33
Signal-­‐‑  und  Systemtheorie  II  
D-­‐‑ITET,  Semester  4  
 
Notes  6:  Discrete  time  LTI  
systems
John Lygeros

Automatic  Control  Laboratory,  ETH  Zürich


www.control.ethz.ch
Sampled  Data  Systems
•  Computers  operate  on  bit  streams
•  Value  and  time  quantization
1.  Variables  can  take  finitely  many  values
2.  Operations  performed  at  fixed  “clock”  period

6.2
Sampled  Data  Systems
•  In  “embedded”  computational  systems  digital  
computer  has  to  interact  with  analog  environment.
–  Measurements  of  physical  quantities  processed  by  
computers
–  Decisions  of  computer  applied  to  physical  system
•  Requires  transformation  of  real  valued  signals  of  real  
time  to  discrete  valued  signals  of  discrete  time  and  
vice-­‐‑versa
–  Analog  to  digital  conversion  (A/D  or  ADC)
–  Digital  to  analog  conversion  (D/A  or  DAC)

6.3
Sampled  Data  Systems
•  Usually  value  quantization  is  quite  accurate.
•  Here  we  ignore  value  quantization,  we  concentrate  
on  time  quantization.
•  Assume:
–  “ADC”àsample  every  T
–  “DAC”àzero  order  hold

6.4
Sampled  Data  Systems

COMPUTER

Z.O.H

SYSTEM
 = Ax(t) + Bu(t)
x(t)
y(t) = Cx(t) + Du(t)

6.5
Sampled  Data  Linear  Systems
What  does  a  linear  system  with  sampling  and  zero  order  hold  look  like  from  the  computer's  perspective?
ẋ(t) = Ax(t) + Bu(t),   A ∈ R^{n×n},  B ∈ R^{n×m}
y(t) = Cx(t) + Du(t),   C ∈ R^{p×n},  D ∈ R^{p×m}
u(t) = u_k  for  all  t ∈ [kT, (k+1)T),    y_k = y(kT)
For  t ∈ [kT, (k+1)T):
x(t) = e^{A(t−kT)} x(kT) + ∫_{kT}^{t} e^{A(t−τ)} B u(τ) dτ

6.6
Sampled  Data  Linear  Systems
x((k+1)T) = e^{AT} x(kT) + (∫_{kT}^{(k+1)T} e^{A((k+1)T−τ)} B dτ) u_k
= e^{AT} x(kT) + (∫_0^T e^{A(T−τ)} B dτ) u_k
y(kT) = C x(kT) + D u(kT)
Exercise: Show that ∫_{kT}^{(k+1)T} e^{A((k+1)T−τ)} B dτ = ∫_0^T e^{A(T−τ)} B dτ
Let  x_k = x(kT),  u_k = u(kT),  y_k = y(kT).  Then
x_{k+1} = A x_k + B u_k,    y_k = C x_k + D u_k
where  the  discrete-time  matrices  are  A := e^{AT},  B := ∫_0^T e^{A(T−τ)} B dτ,  while  C  and  D  are  unchanged.
6.7
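A sketch of the same computation in Matlab using c2d (zero order hold is the default method); the RLC matrices and the sampling period T = 0.1 s are assumed example values:

A = [0 2; -1 -3]; B = [0; 1]; C = [1 0]; D = 0;
T = 0.1;                              % assumed sampling period
sysd = c2d(ss(A, B, C, D), T);        % zero order hold discretisation
sysd.A                                % equals expm(A*T)
sysd.B                                % equals the integral of expm(A*(T-tau))*B over [0,T]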
Discrete  Time  Linear  Systems

Solution: Given x̂0 ∈R n and uk ∈R m , k = 0,1,…, N −1


solution consists of two sequences xk ∈R n ,k = 0,1,…, N
and yk ∈R , k = 0,1,…, N such that:
p

x0 = x̂0
xk +1 = Axk + Buk , k = 0,1,…, N −1
yk = Cxk + Duk , k = 0,1,…, N
6.8
Solution  of  Discrete  Time  Linear  Systems
k −1
xk = A x̂0 + ∑ A
k k −i−1
Bui
i=0
Exercise: Prove
this by induction
ZIT ZST

ZST: Discrete time convolution of


–  Input
–  State impulse response

6.9
Computation  of  solution
•  Hard  part  is  computation  of              (c.f.              )
•  If  matrix  is  diagonalizable

⎡ λ k ... 0 ⎤
⎢ 1 ⎥
Λk = ⎢    ⎥
⎢ ⎥
⎢ 0 ... λ k
n ⎥
⎣ ⎦
Definition: The system is called stable if for all ε > 0 there exists
δ > 0 such that x0 ≤ δ ⇒ xk ≤ ε for all k = 0,1,… It is called
asymptotically stable if in addition lim k→∞ xk = 0 . A system that
is not stable is called unstable.
6.10
Stability,  diagonalizable  matrices
•  If  matrix  diagonalizable,  Ak  linear  combination  of
λi
–    = σ i ± jω i , λi = σ i2 + ω i2 Exercise: Show that if
A is diagonalizable and
–    ∀i, Re ⎡⎣λ1 ⎤⎦ < 0 then
A = e AT
–    is diagonalizable and
∀i, λi < 1 .
–   

Theorem 6.1: System with diagonalizable A matrix is:


•  Stable if and only if ∀i λi ≤ 1
•  Asymptotically stable if and only if ∀i λi < 1
•  Unstable if and only if ∃i : λi > 1
6.11
Stability,  non-­‐‑diagonalizable  matrices

Theorem 6.2: The system is:


•  Asymptotically stable if and only if ∀i λi < 1
•  Unstable if ∃i : λi > 1

•  As  before,  eigenvalues  not  enough  to  determine  stability


•  The  case  ∀i |λi| ≤ 1  may  be  either  stable  or  unstable,  depending  on  the  repetition  pattern  of  the  eigenvalues  with  |λi| = 1  (determined  by  the  eigenvectors)
•  The  analogy  to  continuous  time  systems  is  not  always  
perfect  however!  

6.12
Deadbeat  response
•  Assume  all  eigenvalues  of  A  are  zero:

•  Then                              for  some                          (nilpotent  matrix)


•  Example:

In general proved
using Jordan form

xk = Ak x0 = 0 for all k ≥ N
•  Then  
•  ZIT  gets  to  0  in  finite  time  and  stays  there.
•  This  never  happens  with  continuous  time  systems.
6.13
Coordinate  change
x̂k = Txk
•  Assume                              for  some  invertible
T ∈R n×n

•  In  the  new  coordinates  system  dynamics  are  


again  linear  time  invariant
x̂k +1 = Âx̂k + B̂uk
Exercise: Show this
yk = Ĉx̂k + D̂uk
with  
 = TAT −1 , B̂ = TB
−1
Ĉ = CT , D̂ = D

6.14
Energy  and  Power
•  Consider  an  “energy  like”  function:  E_k = (1/2) x_k^T Q x_k,  with  Q = Q^T > 0
•  “Power”:  change  of  energy  in  time,  P_k = E_{k+1} − E_k
•  If  u_k = 0  (autonomous  system):  P_k = (1/2) x_k^T (A^T Q A − Q) x_k
6.15
Stability  and  Energy
•  If  R := −(A^T Q A − Q) > 0  then  the  energy Exercise: Show that
decreases  all  the  time R=RT
•  Natural  to  assume  that  in  this  case  the  system  is  stable
Theorem  6.3:  |λi| < 1  for  all  i=1,  2,  …,  n  if  and  only  if  for  all  R = R^T > 0  the  equation  A^T Q A − Q = −R  has  a  unique  solution  with  Q = Q^T > 0.

6.16
Controllability
•  System  is  controllable  if  we  can  steer  it  from  any  
x̂0 ∈R n
x̂ N
∈R
initial  condition                          to  any  final  condition            
n

using  appropriate  sequence


•  Assume N ≥ n
•  Define  again  controllability  matrix

P = [B AB 2
A B A n−1
B] ∈R n×nm

Theorem  6.4:  The  system  is  


controllable  if  and  only  if  P  has   Exercise: Prove this
rank  n.

6.17
Observability
•  System  is  observable  if  we  can  infer  the  state  
evolution                                                                            by  observing  the  
input  and  output  sequences
N ≥ n −1
•  Assume   ⎡ C ⎤
•  Define  again ⎢ ⎥
⎢ CA ⎥
observability Q = ∈R np×n
⎢  ⎥
matrix ⎢ n−1 ⎥
⎣ CA ⎦
Theorem  6.5:  The  system  is  
observable  if  and  only  if  Q   Exercise: Prove this
has  rank  n.
6.18
z  Transform
•  Time  function  fk  converts  to  a  complex  variable  function  F(z)

f :N→ R F :C→ C




fk   Z

Z
 F (z)
−1 
{ } ∑
F (z) = Z f k = fk z −k

k =0
•  We  implicitly  assume  that  fk=0  for  all  k<0  (cf.  p.0.22)
•  Can  also  be  defined  for  matrix  valued  functions  by  taking  
sum  element  by  element
z ∈C
•                             can  be  thought  of  as  unit  time  delay

fk z f k +1

6.19
z  Transform:  Properties
Assumption: The function fk is such that the sum converges

{ }
•  Linearity Z a1 f k + a2 g k = a1F(z) + a2G(z)
Exercise: Prove
{ }
•  Time  shift Z f k −k0 = z
− k0
F (z) these

{
•  Convolution Z ( f * g) k = Z } {∑ k
i=0 }
f i g k −i = F (z)G(z)
•  Some  common  functions:
–  Impulse  function Z {δ } = 1
k
(δ 0 = 1, δ k = 0 if k ≠ 0)
z
–  Step  function Z {1 } = (1k = 1 k ≥ 0,1k = 0 k < 0)
k
z −1
–  Geometric  progression Z {a } k
=
z
z−a
( a < 1)

6.20
Transfer  function
•  Assume
•  Take  z  transform  of  all  signals

(
Y (z) = ⎡C zI − A ) B + D ⎤U (z)
−1

⎢⎣ ⎥⎦
Transfer  
Function
Exercise:  Show  that  the  transfer  function  is  z-­‐‑transform  
of  “impulse  response”  (appropriately  defined!)
6.21
Transfer  function

( )
−1
G(z) = C zI − A B+D

•  Rational  function  of  z.


•  System  asymptotically  stable Poles  of  G(z)  have  
magnitude  less  
than  1
•  If  system  uncontrollable/unobservable  pole  zero  
cancellations.    

6.22
Simulation
•  Simulation:  Numerical  solution  in  computer
•  Simulation  of  discrete  time  systems  (linear  or  
non-­‐‑linear)  is  very  easy  conceptually
•  Discrete  time  systems  can  also  help  understand  
the  simulation  of  continuous  time  systems
•  Consider  continuous  time,  LTI  system
 = Ax(t) + Bu(t)
x(t)
•  Given                                                                                          solution  is  
x̂0 ∈R n , u(⋅) : [0,T ] →  m
x(⋅) : [0,T ] →  n , with
t
x(t) = e (t) x̂0 + ∫ e A(t−τ ) Bu(τ ) d τ
At
0
6.23
Example:  RLC  circuit  (p.  3.18)
R L
⎡ 1 ⎤
⎢ 0 ⎥ ⎡ v (t) ⎤ ⎡ 0 ⎤
+ d ⎡⎢ vC (t) ⎤⎥ ⎢ C ⎥⎢ C ⎥ + ⎢ 1 ⎥ vs (t)

vs (t ) C =
dt ⎢ iL (t) ⎥ ⎢ 1 ⎥ ⎢ iL (t) ⎥ ⎢ ⎥
R
⎣ ⎦ ⎢ − − ⎥⎣ ⎦ ⎢⎣ L ⎥⎦
⎣ L L ⎦

•  Solution depends on eigenvalues and eigenvectors


•  Determined by R, L, C
•  Consider autonomous case vs(t)=0 for all t

6.24
For  example
w2
w1

R=3W, L=1H, C=0.5F R=3W, L=1H, C=0.005F

λ2 = −2 < −1 = λ1 < 0 λi = −1.5 ± 14.06 j

6.25
For  example

R=0W, L=1H, C=0.005F


λi = ±14.14 j
6.26
Numerical  approximation
•  Approximate  the  solution  with  a  sequence
•  Divide  the  interval  [0, T]  into  N  equal  subintervals
•  Let  δ = T/N  (simulation  step)
•  We  approximate
x((k + 1)δ) ≈ x(kδ) + δ ẋ(kδ) = x(kδ) + δ (Ax(kδ) + Bu(kδ))
•  So  we  set  x0 = x(0) = x̂0,  x_k ≈ x(kδ),  u_k = u(kδ):
x_{k+1} = (I + Aδ) x_k + δ B u_k
6.27
Numerical  approximation
Integration  using  Euler  method







First  order  approximation  of  the  equation

6.28
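A minimal forward Euler loop implementing the scheme of p. 6.27 (a sketch; the RLC matrix, step δ and horizon are illustrative choices):

A = [0 2; -1 -3];                         % RLC example, eigenvalues -1, -2
delta = 0.05; N = round(5/delta);         % step and number of steps (illustrative)
x = zeros(2, N+1); x(:,1) = [1; 0];       % initial condition
for k = 1:N
    x(:,k+1) = (eye(2) + A*delta)*x(:,k); % forward Euler update
end
plot(0:delta:5, x(1,:))                   % first state vs. time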
Zero  input  response
•  Consider  autonomous  system
•  Solution  is

•  First  order  approximation

•  First  order  approximation  good  if        “small”


6.29
Stability  of  numerical  approximation
•  How  small  should  the  step  be?
•  If  the  eigenvalues  of  A  have  negative  real  part,  then
       system  asymptotically  stable  and  
•  At  the  very  least  we  would  like  to  guarantee  that  
the  numerical  approximation  is  such  that
•  Assume  A  diagonalizable Eigenvalue
matrix  
(diagonal)

Eigenvector  
matrix
(invertible)
6.30
Stability  of  numerical  approximation
•  Then Exercise: Prove this

•  Discrete  time  system  asymptotically  stable  if  and  


only  if xk → 0 ∀x0 ∈R n ⇔ 1+ λiδ < 1 ∀i = 1,…,n
•  For  example,  if  λi  are  real  and  negative
2 Exercise: Repeat for
δ< complex eigenvalues
max | λi |
i=1,…,n

6.31
RLC  circuit  with  R=3W,  L=1H,  C=0.5F

δ = 0.01 δ = 0.05

6.32
RLC  circuit  with  R=3W,  L=1H,  C=0.5F
«Exact» Numerical
solution approximation Instability!

δ = 0.25 δ = 1.25

λ1 = −1, λ2 = −2 ⇒ δ < 1 for stability


6.33
Simulation
•  Simple  first  order  approximation  known  as  
“forward  Euler”  method
•  Another  approach  is  “backward  Euler”
x = Ax ⇒ xk +1 ≈ xk + δ Axk +1 ⇒ xk +1 ≈ (I − δ A) −1 xk
•  Care  also  needed  when  selecting  step  δ
•  Much  be^er  methods  than  Euler  exist
–  E.g.  Runge-­‐‑Ku^a,  variable  step,  high  order
–  Specialized  methods  for  “stiff”  systems,  hybrid  
systems,  differential-­‐‑algebraic  systems,  etc.
–  Coded  in  robust  numerical  tools  such  as  Matlab

6.34
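•  A minimal Python sketch (assuming numpy) contrasting forward and backward Euler on the same RLC example; note that the backward Euler iterate stays bounded even for δ = 1.25, where forward Euler diverges:

   import numpy as np

   A = np.array([[0.0, 2.0], [-1.0, -3.0]])   # RLC example, eigenvalues -1, -2
   x0 = np.array([1.0, 0.0])
   delta, N = 1.25, 20

   x_fwd, x_bwd = x0.copy(), x0.copy()
   M_fwd = np.eye(2) + delta * A                    # forward Euler step matrix
   M_bwd = np.linalg.inv(np.eye(2) - delta * A)     # backward Euler step matrix
   for k in range(N):
       x_fwd = M_fwd @ x_fwd
       x_bwd = M_bwd @ x_bwd

   print("forward Euler :", x_fwd)    # diverges for this step size
   print("backward Euler:", x_bwd)    # decays towards the origin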
Signal-­‐‑  und  Systemtheorie  II  
D-­‐‑ITET,  Semester  4  
 
Notes  7:  Nonlinear  systems

John Lygeros

Automatic  Control  Laboratory,  ETH  Zürich


www.control.ethz.ch
Nonlinear  systems
•  Most  of  this  course:  Dynamical  systems  modeled  by  linear  
differential  equations  in  state  space  form
   ẋ(t) = A x(t) + B u(t)      x(t) ∈ R^n, u(t) ∈ R^m, y(t) ∈ R^p
   y(t) = C x(t) + D u(t)      A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{p×n}, D ∈ R^{p×m}

•  Last  few  lectures  return  to  more  general  systems


   ẋ(t) = f(x(t), u(t))        x(t) ∈ R^n, u(t) ∈ R^m, y(t) ∈ R^p
   y(t) = h(x(t), u(t))        f: R^n × R^m → R^n,  h: R^n × R^m → R^p
•  Concentrate  on  continuous  time
•  Discrete  time  can  be  more  complicated
–  E.g.  the  “Population  Dynamics”  example  in  Notes  1  is  one  
dimensional  but  can  be  chaotic
7.2
Nonlinear  systems
•  More general than linear systems, hence more difficult to analyze

•  Concentrate on autonomous, time invariant systems
   ẋ(t) = f(x(t))      (in the linear case ẋ(t) = A x(t))
•  Assume the function f is Lipschitz:
   ∃λ > 0, ∀x, x̂ ∈ R^n,   ||f(x) − f(x̂)|| ≤ λ ||x − x̂||
•  This  implies  existence  and  uniqueness  of  solutions
•  In  general  solution  cannot  be  computed  analytically
•  Simulation  methods  applicable  however
•  Look  into  the  following  issues
–  Invariant  sets
–  Stability  of  invariant  sets

7.3
Invariant  sets
•  Generalization  of  notion  of  equilibrium

Definition: A set of states S ⊆ R^n is called invariant if
   ∀x0 ∈ S, ∀t ≥ 0, x(t) ∈ S
where x(t) denotes the solution to ẋ(t) = f(x(t)) starting at x0
•  Equilibrium  points  are  an  important  class  of  
invariant  sets  
Definition: A state x̂ ∈ R^n is called an equilibrium if f(x̂) = 0

Exercise: Prove that if x̂ is an equilibrium then S = {x̂} is an invariant set

7.4
Equilibria
•  Linear  systems  have  a  linear  subspace  of  equilibria
–  Sometimes  only   xˆ = 0
–  More generally, the null space of the matrix A

   Exercise: Show that the equilibria of ẋ(t) = A x(t) coincide with the null space of A

•  Nonlinear  systems  can  have  many  isolated  equilibria
•  Example:  The  pendulum  from  Notes  1  has  2  equilibria
   ẋ(t) = [ x2(t) ;  −(d/m) x2(t) − (g/l) sin x1(t) ]   ⇒   x̂ = (0, 0),  x̂' = (π, 0)
   Exercise: Show this
(More  precisely,  number  of  pendulum  equilibria  is  
infinite,  but  they  all  coincide  physically  with  these  two)
7.5
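•  A minimal Python sketch (assuming numpy and scipy; the parameter values m, l, g, d are only illustrative) that finds the pendulum equilibria numerically by solving f(x̂) = 0 from different initial guesses:

   import numpy as np
   from scipy.optimize import fsolve

   # Pendulum parameters (illustrative values)
   m, l, g, d = 1.0, 1.0, 9.81, 0.1

   def f(x):
       """Pendulum vector field: x = (angle, angular velocity)."""
       return np.array([x[1], -(d / m) * x[1] - (g / l) * np.sin(x[0])])

   # Solve f(x) = 0 starting from different initial guesses
   for guess in [(0.5, 0.0), (3.0, 0.0)]:
       x_eq = fsolve(f, guess)
       print("equilibrium:", np.round(x_eq, 6))   # expect (0, 0) and (pi, 0)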
Pendulum  for  d=0

[Figure: phase portrait of the pendulum for d = 0]

Exercise: Let x_{k+1} = f(x_k) be a nonlinear system in discrete time (cf. p.1.27). The equilibria for this system are given by x̂ = f(x̂). Show that equilibria are invariant sets (cf. p.1.29).

7.6
Shifting  equilibria  to  the  origin
•  It  is  often  convenient  to  “shift”  an  equilibrium  to  the  
origin  before  analyzing  the  system  behavior
•  This  involves  a  change  of  coordinates
w(t) = x(t) − x̂ ∈R n

•  In  the  new  coordinates  the  system  becomes

   ẇ(t) = ẋ(t) = f(x(t)) = f(w(t) + x̂) =: f̂(w(t))

•  The system in the new coordinates has an equilibrium at ŵ = 0 ∈ R^n
   Exercise: Show this

7.7
Limit  cycles
•  Observed  only  in  systems  of  dimension  2  or  more
Definition: A solution x(t) is called a periodic orbit if
∃T > 0,∀t ≥ 0, x(t + T ) = x(t)

•  Equilibria define trivial periodic orbits
   Exercise: Show that a solution starting at an equilibrium is periodic
•  Limit cycles: non-trivial periodic orbits
•  Linear systems exhibit either
   –  Trivial periodic orbits (equilibria)
   –  Subspaces of periodic orbits, e.g.
      ẋ(t) = [ 0, ω ; −ω, 0 ] x(t)
      Exercise: Show this system has a subspace of periodic orbits
•  Nonlinear  systems  can  also  have  non-­‐‑trivial,  isolated  
periodic  orbits  à  Limit  cycles
7.8
Example:  Van  der  Pol  oscillator
•  Developed  as  a  model  for  dynamics  of  vacuum  tube  
(transistor)  circuits
•  Under  certain  conditions  circuits  observed  to  oscillate

•  Van  der  Pol  showed  this  is  due  to  “nonlinear  
resistance”  phenomena
•  Second  order  differential  equation
 2 
θ (t) − ε (1− θ (t) )θ (t) + θ (t) = 0

Exercise:  Write  the  equation  for  the  


van  der  Pol  oscillator  in  state  space  
form.  Hence  determine  its  equilibria.

7.9
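•  A short Python sketch (assuming numpy and scipy) simulating the van der Pol oscillator in state space form with x1 = θ, x2 = θ̇ and ε = 1; the initial condition and horizon are illustrative:

   import numpy as np
   from scipy.integrate import solve_ivp

   eps = 1.0

   def vdp(t, x):
       """Van der Pol oscillator with x1 = theta, x2 = dtheta/dt."""
       return [x[1], eps * (1.0 - x[0] ** 2) * x[1] - x[0]]

   # Any nonzero initial condition converges to the limit cycle
   sol = solve_ivp(vdp, (0.0, 30.0), [0.1, 0.0], max_step=0.01)
   print("final state:", sol.y[:, -1])   # on the limit cycle, amplitude roughly 2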
Example:  van  der  Pol  oscillator,  ε=1

[Figure: phase portrait of the van der Pol oscillator for ε = 1, showing a stable limit cycle around an unstable equilibrium]

Exercise: Let x_{k+1} = f(x_k) be a nonlinear system in discrete time. How would you define periodic orbits and limit cycles for this system? (cf. p.1.28)

7.10
Strange attractors
•  In 2D continuous time, equilibria and limit cycles are as complicated as it gets (Poincaré-Bendixson Theorem)
•  In higher dimensions stranger things may happen
   –  Invariant tori
   –  Chaotic attractors
•  Example:  Lorenz  equations
–  Developed  by  E.N.  Lorenz
–  To  capture  atmospheric  phenomena

x1 (t) = a(x2 (t) − x1 (t))


x2 (t) = (1+ b)x1 (t) − x2 (t) − x1 (t)x3 (t)
x3 (t) = x1 (t)x2 (t) − cx3 (t)

7.11
Chaotic attractor
•  For  some  parameter  values,  there  is  a  bounded  subset  
of  the  state  space  such  that  if  we  start  inside  we  stay  
there  for  ever  and
–  Most  trajectories  go  around  for  ever,
–  Without  ever  meeting  themselves  (not  limit  cycles)
•  Given  any  two  points  in  this  set  we  can  find  a  
trajectory  that  starts  arbitrarily  close  to  one  and  ends  
up  arbitrarily  close  to  the  other
•  This set is called a chaotic or strange attractor

Exercise: Compute the equilibria of the Lorenz equations

Exercise: Simulate the Lorenz equations for a = 10, b = 24, c = 2, and x0 = (−5, −6, 20)

7.12
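•  A minimal Python sketch (assuming numpy and scipy) for the simulation exercise above, with the parameters and initial condition given there:

   import numpy as np
   from scipy.integrate import solve_ivp

   a, b, c = 10.0, 24.0, 2.0

   def lorenz(t, x):
       """Lorenz equations in the form used in the slides."""
       return [a * (x[1] - x[0]),
               (1.0 + b) * x[0] - x[1] - x[0] * x[2],
               x[0] * x[1] - c * x[2]]

   sol = solve_ivp(lorenz, (0.0, 50.0), [-5.0, -6.0, 20.0], max_step=0.01)
   print("state at t = 50:", sol.y[:, -1])
   # Plot sol.y[0] against sol.y[2] to visualize the trajectory (cf. p. 7.13)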
Lorenz attractor simulation
[Figure: simulated trajectory of the Lorenz equations]
7.13
Stability
•  Most  commonly  studied  property  of  invariant  sets
•  Trajectories  stay  close  or  converge  to  invariant  set
•  Restrict attention to equilibria
•  Simple characterization for LTI systems and the equilibrium x̂ = 0
   –  System stable if eigenvalues of A have negative real part
   –  Poles of transfer function are in left half of complex plane

Definition: An equilibrium x̂ is called stable if for all ε > 0 there exists δ > 0 such that
   ||x0 − x̂|| < δ  ⇒  ||x(t) − x̂|| < ε  ∀t ≥ 0
Otherwise the equilibrium is called unstable.

Exercise: Which of the equilibria of the pendulum (simulation p. 7.6) would you say are stable and which not?

7.14
Asymptotic  stability
•  Stability  says  that  if  we  start  close  we  stay  close
•  Do  we  get  closer  and  closer?
Definition: An equilibrium x̂ is called locally asymptotically stable if it is stable and there exists M > 0 such that
   ||x0 − x̂|| < M  ⇒  lim_{t→∞} x(t) = x̂
It is called globally asymptotically stable if this holds for any M > 0.
The set of x0 such that lim_{t→∞} x(t) = x̂ is called the domain of attraction of x̂.

Exercise: What is the domain of attraction of a globally asymptotically stable equilibrium?

Exercise: Is there a difference between local and global asymptotic stability for linear systems?

7.15
Example:  Pendulum  with  d>0

[Figure: phase portrait of the pendulum with d > 0]

Exercise: Which of the equilibria would you say are locally asymptotically stable? Which globally?

7.16
Linearization
•  Simple  way  to  study  stability  of  equilibrium  of  
nonlinear  system  is  to  approximate  by  linear  system
   ẋ(t) = f(x(t)),   f(x̂) = 0,   x̂ equilibrium
•  Take Taylor expansion about x̂
   f(x) = f(x̂) + A (x − x̂) + higher order terms in (x − x̂)
        = A (x − x̂) + higher order terms in (x − x̂)
   where, for x = (x1, …, xn) and f(x) = (f1(x1, …, xn), …, fn(x1, …, xn)), the matrix
   A = [ ∂fi/∂xj (x̂) ] ∈ R^{n×n}
   is the Jacobian of f evaluated at x̂
7.17
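•  A minimal Python sketch (assuming numpy) of a finite-difference approximation of this Jacobian, applied to the pendulum vector field with illustrative parameter values:

   import numpy as np

   def jacobian(f, x_hat, h=1e-6):
       """Finite-difference approximation of A = df/dx evaluated at x_hat."""
       n = len(x_hat)
       A = np.zeros((n, n))
       fx = f(x_hat)
       for j in range(n):
           e = np.zeros(n)
           e[j] = h
           A[:, j] = (f(x_hat + e) - fx) / h
       return A

   # Pendulum vector field with illustrative parameters m = l = 1, g = 9.81, d = 0.1
   m, l, g, d = 1.0, 1.0, 9.81, 0.1
   f = lambda x: np.array([x[1], -(d / m) * x[1] - (g / l) * np.sin(x[0])])

   print(jacobian(f, np.array([0.0, 0.0])))   # approximately [[0, 1], [-g/l, -d/m]]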
Linearization
•  Consider the distance of x to the equilibrium: δx(t) = x(t) − x̂ ∈ R^n
•  When x is close to the equilibrium, δx is small and
   d δx(t)/dt ≈ A δx(t)
•  So  close  to  equilibrium  nonlinear  system  expected  to  
behave  like  a  linear  system
•  In  particular,  stability  of  the  linearization  should  tell  
us  something  about  stability  of  nonlinear  system
•  Stability  of  linearization  can  be  determined  just  by  
looking  at  the  eigenvalues  of  A

7.18
Stability  by  linearization

Theorem 7.1: The equilibrium x̂ is


1.  Locally asymptotically stable if the eigenvalues of
the linearization have negative real part
2.  Unstable if the linearization has at least one
eigenvalue with positive real part

•  Called Lyapunov's first or indirect method


•  Advantage:  Very  easy  to  use
•  Disadvantages:
–  No information about the domain of attraction
–  Inconclusive  if  linearization  has  imaginary/zero  eigenvalues

7.19
Pendulum  example,  d>0
•  Linearization  about x̂ = (0,0)
   d δx(t)/dt = [ 0, 1 ; −g/l, −d/m ] δx(t)   ⇒   λ² + (d/m) λ + g/l = 0
•  Eigenvalues  have  negative  real  part,  hence  
equilibrium  locally  asymptotically  stable
•  Linearization  about x̂ = (π ,0)
   d δx(t)/dt = [ 0, 1 ; g/l, −d/m ] δx(t)   ⇒   λ² + (d/m) λ − g/l = 0
•  At  least  one  eigenvalue  has  positive  real  part,  hence  
equilibrium  unstable    
7.20
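•  A short Python check (assuming numpy; the values of m, l, g, d are illustrative, with d > 0) of the eigenvalues of the two linearizations:

   import numpy as np

   m, l, g, d = 1.0, 1.0, 9.81, 0.5   # illustrative pendulum parameters, d > 0

   A_down = np.array([[0.0, 1.0], [-g / l, -d / m]])   # linearization about (0, 0)
   A_up   = np.array([[0.0, 1.0], [ g / l, -d / m]])   # linearization about (pi, 0)

   print("about (0,0): ", np.linalg.eigvals(A_down))   # both real parts negative
   print("about (pi,0):", np.linalg.eigvals(A_up))     # one positive real eigenvalue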
Linearization  can  be  inconclusive
•  Notice that if d = 0
   –  Linearization about x̂ = (π, 0) has a positive eigenvalue
   –  Hence x̂ = (π, 0) is unstable for the nonlinear system
   –  Linearization about x̂ = (0, 0) has imaginary eigenvalues
   –  Stability of x̂ = (0, 0) not determined from Theorem 7.1
•  It  turns  out  that  equilibrium  is  stable  (see  fig.  on  p.7.6)
•  This  is  not  always  the  case
•  For example, the linearizations of both
   ẋ(t) = x(t)³   and   ẋ(t) = −x(t)³
   about x̂ = 0 have one eigenvalue at zero
•  But  0  stable  for  one  system  and  unstable  for  the  other

7.21
Lyapunov  functions
•  In  linear  systems  stability  characterized  in  two  ways
–  Eigenvalues  of  matrix  A  (Theorems  3.1,  3.2),  or  poles  of  the  
transfer  function  (p.5.19)
–  Existence  of  decreasing  energy-­‐‑like  function  (Theorem  4.1)
•  First  applies  to  nonlinear  systems,  how  about  second?
•  Properties  of  energy-­‐‑like  function  for  linear  systems
   1.  Quadratic function of the state: V(x) = (1/2) xᵀ Q x
   2.  Q positive definite  ⇒  V(x) > 0 for all x ≠ 0, V(0) = 0
   3.  Power also quadratic in the state: (d/dt) V(x) = −(1/2) xᵀ R x
   4.  R = −(Aᵀ Q + Q A) positive definite  ⇒  V(x) decreases for all x ≠ 0

•  For  nonlinear  systems  keep  2  and  4,  but  allow  more  


general  (non-­‐‑quadratic)  V(x)

7.22
Lyapunov  functions:  Stability
Theorem 7.2: Assume there exists an open set S ⊆ R^n with x̂ ∈ S and a differentiable function V(·): R^n → R such that
   1.  V(x̂) = 0
   2.  V(x) > 0, ∀x ∈ S with x ≠ x̂
   3.  (d/dt) V(x(t)) ≤ 0, ∀x ∈ S
Then the equilibrium x̂ is stable.
•  Called  Lyapunov  second  or  Lyapunov  direct  method
•  Function  V(x)  known  as  Lyapunov  function
•  Derivative  along  trajectories  known  as  Lie  derivative
   (d/dt) V(x(t)) = Σ_{i=1}^{n} ∂V/∂xi (x(t)) · (d xi/dt)(t) = Σ_{i=1}^{n} ∂V/∂xi (x(t)) · fi(x(t)) = ∇V(x(t)) f(x(t))

7.23
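•  A minimal Python/sympy sketch (assuming sympy is available; cf. the energy function on p. 7.25) computing the Lie derivative ∇V(x) f(x) symbolically for the undamped pendulum:

   import sympy as sp

   x1, x2, m, l, g = sp.symbols('x1 x2 m l g', real=True)

   # Undamped pendulum vector field and energy (cf. p. 7.25)
   f = sp.Matrix([x2, -(g / l) * sp.sin(x1)])
   V = sp.Rational(1, 2) * m * l**2 * x2**2 + m * g * l * (1 - sp.cos(x1))

   # Lie derivative dV/dt = grad(V) . f
   grad_V = sp.Matrix([[sp.diff(V, x1), sp.diff(V, x2)]])
   lie = sp.simplify((grad_V * f)[0])
   print(lie)   # simplifies to 0: the energy is conserved along trajectories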
“Proof”:    By  picture!
[Figure: proof sketch showing the set S, a sublevel set Sc = { x ∈ S | V(x) ≤ c }, and a ball of radius δ around x̂]

7.24
Example:  Pendulum  for  d=0
•  Recall that linearization could not determine the stability of x̂ = (0, 0) when d = 0
•  Consider the energy
   V(x) = (1/2) m (l θ̇)² + m g l (1 − cos θ)
        = (1/2) m l² x2² + m g l (1 − cos x1)

[Figure: pendulum of mass m and length l at angle θ, gravity force mg]

7.25
Example:  Pendulum  for  d=0
[Figure: surface plot of V(x) over (x1, x2)]

Take S = (−π, π) × R and check the theorem conditions:
   1.  V(0) = 0
   2.  V(x) > 0  ∀x ≠ 0
   3.  (d/dt) V(x(t)) = m l² x2(t) ẋ2(t) + m g l sin(x1(t)) ẋ1(t) = 0
Hence  the  equilibrium  is  stable

7.26
Lyapunov  functions:  Asymptotic  stability
Theorem 7.3: Assume there exists an open set S ⊆ R^n with x̂ ∈ S and a differentiable function V(·): R^n → R such that
   1.  V(x̂) = 0
   2.  V(x) > 0, ∀x ∈ S with x ≠ x̂
   3.  (d/dt) V(x(t)) < 0, ∀x ∈ S with x ≠ x̂
Then the equilibrium x̂ is locally asymptotically stable.
If S = R^n then it is globally asymptotically stable.
•  Lyapunov functions can help estimate the domain of attraction. If we can find c > 0 such that
   { x ∈ R^n | V(x) ≤ c } ⊆ S
   then trajectories that start in this set stay in it and converge to x̂

7.27
Examples
•  Consider first
   ẋ(t) = f(x(t)) = −x(t)³,   where x̂ = 0
•  Let S = R, V(x) = x²
•  Clearly V(0) = 0, V(x) > 0 ∀x ≠ 0, and (∂V/∂x)(x) f(x) = −2x⁴ < 0 ∀x ≠ 0
•  Therefore  0  is  globally  asymptotically  stable
•  How about the pendulum with d > 0?
•  As before consider S = (−π, π) × R and V(x) the energy
   (d/dt) V(x(t)) = m l² x2(t) ẋ2(t) + m g l sin(x1(t)) ẋ1(t) = −d l² x2(t)² ≤ 0
•  But (d/dt) V(x(t)) = 0 whenever x2(t) = 0 (not only at x̂ = (0, 0)), therefore we cannot conclude local asymptotic stability

7.28
La  Salle’s  Theorem
Theorem 7.4: Assume there exists a compact invariant set S ⊆ R^n and a differentiable function V(·): R^n → R such that
   ∇V(x) f(x) ≤ 0   ∀x ∈ S
Let M be the largest invariant set contained in
   S̄ = { x ∈ S | ∇V(x) f(x) = 0 } ⊆ R^n
Then all trajectories starting in S tend to M as t → ∞

•  "Compact" means bounded and closed
•  If x̂ is the only invariant set in S̄ = { x ∈ S | ∇V(x) f(x) = 0 }, then all trajectories starting in S tend to it

7.29
Pendulum with d > 0
•  Take V(x) the energy and S = { x ∈ R² | V(x) ≤ 2mgl − ε } for any ε > 0
   (2mgl is the energy when the pendulum is stopped upside down)

   Exercise: Show that S is invariant

[Figure: the sublevel set S in the (x1, x2) plane]

•  Recall that
   ∇V(x) f(x) = −d l² x2²,  which is  ≤ 0  ∀x ∈ S  and  = 0 when x2 = 0

7.30
Pendulum  with  d  >  0
•  Therefore S̄ = { x ∈ S | x2 = 0 }
•  x̂ = (0, 0) is the only invariant set contained in S̄, since ẋ2 ≠ 0 if x2 = 0 but x1 ≠ 0
•  Therefore all trajectories that start in S tend to x̂ = (0, 0)
•  By Theorem 7.2, x̂ = (0, 0) is stable
•  Hence, by Theorem 7.4, it is locally asymptotically stable
•  Moreover, since ε is arbitrary, the domain of attraction of (0, 0) contains everything except the other equilibrium (π, 0)

7.31
General  comments
•  Theorem  7.4  applies  to  more  general  invariant  sets  
(e.g.  limit  cycles)
•  Theorems  7.2  and  7.3  also  generalize  easily
•  Theorem 7.1 slightly harder to generalize (linearization about trajectories, Poincaré maps)
•  Conditions of Theorems 7.2-7.4 are sufficient but not necessary
•  Finding Lyapunov functions for nonlinear systems is an art, not a science. Common choices:
–  Energy  for  mechanical  and  electrical  systems
–  Quadratics  (always  work  for  linear  systems)
–  Intuition!

7.32
