
Lecture Notes: Week 1a

ECE/MAE 7360
Optimal and Robust Control
(Fall 2003 Offering)

Instructor: Dr YangQuan Chen, CSOIS, ECE Dept.,


Tel. : (435)797-0148.
E-mail: yqchen@ieee.org or, yqchen@ece.usu.edu
Control Systems Area
Fall'03 Course Offering

ECE/MAE 7360 Optimal and Robust Control. Advanced methods of control system
analysis and design. Operator approaches to optimal control, including LQR/LQG/LTR,
mu-analysis, H-infinity loop shaping, and the gap metric. Prerequisite: ECE 6320 or
instructor approval. (3 cr) (alternate Fall).

Day/Time/Venue: MW 2:30-3:45 PM. EL-112 (Control Lab)

Instructor: Dr YangQuan Chen, CSOIS, ECE Dept., (435)797-0148.

Text: Kemin Zhou, with John Doyle, Essentials of Robust Control, Prentice-Hall, 1998.

Course Description: Robust control is concerned with the problem of designing control
systems when there is uncertainty about the model of the system to be controlled or when
there are (possibly uncertain) external disturbances influencing the behavior of the
system. Optimal control is concerned with the design of control systems to achieve a
prescribed performance (e.g., to find a controller for a given linear system that minimizes
a quadratic cost function). While optimal control theory was originally derived using the
techniques of the calculus of variations, most robust control methodologies have been
developed from an operator-theoretic perspective. In this course we will mainly use an
operator approach to study the basic results in robust control that have been developed
over the last fifteen years. However, mathematical programming based techniques for
solving optimal control problems will also be briefly covered. This course provides a
unified treatment of multivariable control system design for systems subject to
uncertainty and performance requirements.

Course Topics:

1. Review of multivariable linear control theory and balanced model realization/reduction.
2. Signal/system norms, H∞/H2 spaces, and internal stability.
3. Performance specifications and limitations.
4. Modeling uncertainty and robustness.
5. LFT and mu synthesis.
6. Parameterization of controllers.
7. H2-optimal control (LQR/Kalman filter/LQG/LTR).
8. H∞-optimal control (for unstructured perturbations).
9. Gap metric.
10. Solving optimal control problems numerically.
ECE/MAE 7360: Optimal and Robust Control
Course Syllabus - Fall 2003

From http://www.ece.usu.edu/academics/graduate_courses.html
***7360. Optimal and Robust Control.
Advanced methods of control system analysis and design. Operator approaches to optimal
control, including LQR, LQG, and L1 optimization techniques. Robust control theory, including
QFT, H-infinity, and interval polynomial approaches. Prerequisite: ECE/MAE 6320 or
instructor approval. Also taught as MAE 7360. (3 cr) (Sp)

Instructor: YangQuan Chen, Center for Self-Organizing and Intelligent Systems,
Department of Electrical and Computer Engineering, Utah State University
Room EL152; Tel. (435)797-0148, yqchen@ece.usu.edu

Lecture Day/Time/Venue: MW 2:30-3:45 PM. EL-112 (Control Lab)


Office Hours: MW 1:15-2:30 PM.

Text: Kemin Zhou, with John Doyle, Essentials of Robust Control, Prentice-Hall, 1998.
References: Will be given by the Instructor via email/web/ftp.

Software: (1) MATLAB Control Systems Toolbox (2) MATLAB mu-Synthesis Toolbox (3)
RIOTS_95: MATLAB Toolbox for solving general optimal control problems.

Course Requirements:
Homework 40 points
Mid-term take home exam 10 points
Focused Individual Study Project/presentation 10 points
Design project 40 points
Notes:

1. The course will follow the outline below.
2. The course will cover material from most chapters of the text as well as some material taken
from the instructor's notes.
3. The course will be conducted as follows:

a) There will be lectures by the instructor on most Mondays/Wednesdays.
b) Homework or project assignments will be given, via e-mail, on a weekly basis,
normally on Wednesday. Assignments are due by the end of the following Wednesday.
c) There will be a midterm take-home exam.
d) For each student, a focused individual study project (FISP) is to be done with a
literature survey and a class presentation. Topics can be chosen by the individual
student, subject to the approval of the Instructor.
e) There are four design projects in total, using the MATLAB Simulink/Control Systems
Toolbox/mu-Synthesis Toolbox/RIOTS_95 Toolbox. The Instructor will provide a
free student edition of RIOTS_95 (worth $99.00) for solving general optimal control
problems.
f) There is no final exam.

Course Topics and Approximate Schedule:


Week 1: Aug. 25 – Chapter 1 (introduction/linear algebra review); Aug. 27 – Chapters 2-3 (linear system theory). HW#1.
Week 2: Sept. 1 – no class (Labor Day); Sept. 3 – no class (ASME DETC03 Conference).
Week 3: Sept. 8 – Chapters 4-5 (norms, stability); Sept. 10 – Chapter 6 (performance specs/limitations). HW#2.
Week 4: Sept. 15 – Chapter 6 (more on performance limitations); Sept. 17 – Chapter 7 (balanced model reduction revisited). Proj.#1: inverted pendulum control.
Week 5: Sept. 22 – Chapter 8 (modeling uncertainty); Sept. 24 – Chapter 9 (LFT: linear fractional transformation). HW#3.
Week 6: Sept. 29 – Chapter 10 (mu and mu synthesis); Oct. 1 – Chapter 10 (more on mu). HW#4.
Week 7: Oct. 6 – Chapter 11 (controller parameterization, Youla parameterization); Oct. 8 – Chapters 12-13 (LQR/H2 control). Project#2: space-shuttle robustness analysis (stability and performance).
Week 8: Oct. 13 – lecturer's notes (LQG/LTR); Oct. 15 – Chapter 14 (H-infinity control). HW#5.
Week 9: Oct. 20 – Chapter 14 (H-infinity control); Oct. 22 – Chapter 14 (H-infinity control). Mid-term take-home exam.
Week 10: Oct. 27 – Chapter 15 (H-infinity controller order reduction); Oct. 29 – Chapter 16 (H-infinity loop shaping). HW#6.
Week 11: Nov. 3 – Chapter 16 (H-infinity loop shaping); Nov. 5 – Chapter 16 (H-infinity loop shaping). Project#3: H-infinity control (performance) design of a high-maneuvering airplane.
Week 12: Nov. 10 – Chapter 17 (gap metric); Nov. 12 – Chapter 17 (nu-gap metric). HW#7.
Week 13: Nov. 17 – instructor's notes (mathematical foundation of RIOTS_95); Nov. 19 – instructor's notes (sample applications of RIOTS_95). HW#8.
Week 14: Nov. 24 – FISP presentations (3 students); Nov. 26 – no class (Thanksgiving). Project#4: solving optimal control problems (you define your own OCP!) using RIOTS_95.
Week 15: Dec. 1 – FISP presentations (2 students); Dec. 3 – FISP presentations (2 students).
Week 16: Dec. 8 and Dec. 10 – no class (IEEE CDC'03 Conference). No final exam; everything due Dec. 12, 12:00 PM; email exit interview due Dec. 15.

Possible Topics for FISP (not limited to the following; students may propose their own topic
of interest, subject to the Instructor's approval):
1. l1- and l∞-optimal control (for rejection of unknown but bounded disturbances)
2. Structured perturbations, Kharitonov's Theorem
3. Quantitative feedback theory (QFT)
4. Linear matrix inequalities (LMI)
5. and many more …

Classical control in the 1930’s and 1940’s


Bode, Nyquist, Nichols, . . .

• Feedback amplifier design
• Single input, single output (SISO)
• Frequency domain
• Graphical techniques
• Emphasized design tradeoffs
– Effects of uncertainty
– Nonminimum phase systems
– Performance vs. robustness

Problems with classical control

Overwhelmed by complex systems:

• Highly coupled multiple input, multiple output systems
• Nonlinear systems
• Time-domain performance specifications

The origins of modern control theory

Early years

• Wiener (1930's - 1950's): generalized harmonic analysis, cybernetics,
filtering, prediction, smoothing
• Kolmogorov (1940’s) Stochastic processes
• Linear and nonlinear programming (1940’s - )

Optimal control
• Bellman’s Dynamic Programming (1950’s)
• Pontryagin’s Maximum Principle (1950’s)
• Linear optimal control (late 1950’s and 1960’s)
– Kalman Filtering
– Linear-Quadratic (LQ) regulator problem
– Stochastic optimal control (LQG)

The diversification of modern control in the 1960's and 1970's

• Applications of the Maximum Principle and optimization
– Zoom maneuver for time-to-climb
– Spacecraft guidance (e.g. Apollo)
– Scheduling, resource management, etc.
• Linear optimal control
• Linear systems theory
– Controllability, observability, realization theory
– Geometric theory, disturbance decoupling
– Pole assignment
– Algebraic systems theory
• Nonlinear extensions
– Nonlinear stability theory, small gain, Lyapunov
– Geometric theory
– Nonlinear filtering
• Extension of LQ theory to infinite-dimensional systems
• Adaptive control

Modern control application: Shuttle reentry

The problem is to control the reentry of the shuttle, from orbit to
landing. The modern control approach is to break the problem into two
pieces:
• Trajectory optimization
• Flight control

• Trajectory optimization: tremendous use of modern control principles
– State estimation (filtering) for navigation
– Bang-bang control of thrusters
– Digital autopilot
– Nonlinear optimal trajectory selection
• Flight control: primarily used classical methods with lots of simulation
– Gain scheduled linear designs
– Uncertainty studied with ad-hoc methods

Modern control has had little impact on feedback design because it
neglects fundamental feedback tradeoffs and the role of plant uncertainty.

The 1970’s and the return of the frequency domain

Motivated by the inadequacies of modern control, many researchers
returned to the frequency domain for methods for MIMO feedback control.

• British school
– Inverse Nyquist Array
– Characteristic Loci
• Singular values
– MIMO generalization of Bode gain plots
– MIMO generalization of Bode design
– Crude MIMO representations of uncertainty
• Multivariable loopshaping and LQG/LTR
– Attempt to reconcile modern and classical methods
– Popular, but hopelessly flawed
– Too crude a representation of uncertainty

While these methods allowed modern and classical methods to be blended
to handle many MIMO design problems, it became clear that fundamentally
new methods needed to be developed to handle complex, uncertain,
interconnected MIMO systems.

Postmodern Control

• Mostly for fun. Sick of “modern control,” but wanted a name equally
pretentious and self-absorbed.
• Other possible names are inadequate:
– Robust ( too narrow, sounds too macho)
– Neoclassical (boring, sounds vaguely fascist )
– Cyberpunk ( too nihilistic )
• Analogy with postmodern movement in art, architecture, literature,
social criticism, philosophy of science, feminism, etc. ( talk about
pretentious ).

The tenets of postmodern control theory

• Theories don’t design control systems, engineers do.
• The application of any methodology to real problems will require some
leap of faith on the part of the engineer (and some ad hoc fixes).
• The goal of the theoretician should be to make this leap smaller and
the ad hoc fixes less dominant.

Issues in postmodern control theory

• More connection with data
• Modeling
– Flexible signal representation and performance objectives
– Flexible uncertainty representations
– Nonlinear nominal models
– Uncertainty modeling in specific domains
• Analysis
• System Identification
– Nonprobabilistic theory
– System ID with plant uncertainty
– Resolving ambiguity; “uncertainty about uncertainty”
– Attributing residuals to perturbations, not just noise
– Interaction with modeling and system design
• Optimal control and filtering
– H∞ optimal control
– More general optimal control with mixed norms
– Robust performance for complex systems with structured uncertainty

Chapter 2: Linear Algebra

• linear subspaces
• eigenvalues and eigenvectors
• matrix inversion formulas
• invariant subspaces
• vector norms and matrix norms
• singular value decomposition
• generalized inverses
• semidefinite matrices

Linear Subspaces

• linear combination:
α1x1 + · · · + αkxk, xi ∈ Fn, αi ∈ F;
span{x1, x2, . . . , xk} := {x = α1x1 + · · · + αkxk : αi ∈ F}.
• x1, x2, . . . , xk ∈ Fn are linearly dependent if there exist α1, . . . , αk ∈ F,
not all zero, such that α1x1 + · · · + αkxk = 0; otherwise they are
linearly independent.
• {x1, x2, . . . , xk } ∈ S is a basis for S if x1, x2, . . . , xk are linearly
independent and S = span{x1, x2, . . . , xk }.
• {x1, x2, . . . , xk} in Fn are mutually orthogonal if xi∗xj = 0 for all
i ≠ j, and orthonormal if xi∗xj = δij.
• orthogonal complement of a subspace S ⊂ Fn :
S ⊥ := {y ∈ Fn : y∗x = 0 for all x ∈ S}.

• linear transformation A : Fn → Fm.
• kernel or null space
KerA = N(A) := {x ∈ Fn : Ax = 0},
and the image or range of A is
ImA = R(A) := {y ∈ Fm : y = Ax, x ∈ Fn}.
Let ai, i = 1, 2, . . . , n, denote the columns of a matrix A ∈ Fm×n; then
ImA = span{a1, a2, . . . , an}.
12

• The rank of a matrix A is defined by rank(A) = dim(ImA). Note that
rank(A) = rank(A∗). A ∈ Fm×n is full row rank if m ≤ n and
rank(A) = m; A is full column rank if n ≤ m and rank(A) = n.
• unitary matrix: U∗U = I = UU∗.
• Let D ∈ Fn×k (n > k) be such that D∗D = I. Then there exists a
matrix D⊥ ∈ Fn×(n−k) such that [D D⊥] is a unitary matrix.
• Sylvester equation
AX + XB = C
with A ∈ Fn×n, B ∈ Fm×m, and C ∈ Fn×m has a unique solution
X ∈ Fn×m if and only if λi(A) + λj(B) ≠ 0 for all i = 1, 2, . . . , n and
j = 1, 2, . . . , m.
“Lyapunov Equation”: B = A∗.
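The vectorization identity vec(AXB) = (Bᵀ ⊗ A) vec(X) turns the Sylvester equation into an ordinary linear system. The following NumPy sketch (not part of the original notes; the matrices are arbitrary illustrative values chosen so that A and B are stable, guaranteeing the solvability condition λi(A) + λj(B) ≠ 0) solves AX + XB = C this way:

```python
import numpy as np

# Solve A X + X B = C by vectorization (column-stacking vec):
# vec(A X + X B) = (I_m ⊗ A + Bᵀ ⊗ I_n) vec(X).
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])        # n x n, eigenvalues -2, -3
B = np.array([[-1.0, 0.0],
              [2.0, -4.0]])        # m x m, eigenvalues -1, -4
C = np.array([[1.0, 2.0],
              [3.0, 4.0]])         # n x m
n, m = C.shape

# Every lambda_i(A) + lambda_j(B) is negative, hence nonzero: unique solution.
M = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
x = np.linalg.solve(M, C.reshape(-1, order="F"))   # vec(C), column-stacked
X = x.reshape(n, m, order="F")

print(np.allclose(A @ X + X @ B, C))   # True
```

In practice a dedicated Sylvester solver (e.g. a Bartels-Stewart implementation) scales much better than this O((nm)³) dense approach, but the vectorized form makes the uniqueness condition transparent.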
• Let A ∈ Fm×n and B ∈ Fn×k . Then
rank (A) + rank(B) − n ≤ rank(AB) ≤ min{rank (A), rank(B)}.

• the trace of A = [aij] ∈ Cn×n:
Trace(A) := a11 + a22 + · · · + ann.

Trace has the following properties:

Trace(αA) = α Trace(A), ∀α ∈ C, A ∈ Cn×n;

Trace(A + B) = Trace(A) + Trace(B), ∀A, B ∈ Cn×n;

Trace(AB) = Trace(BA), ∀A ∈ Cn×m, B ∈ Cm×n.
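A quick numerical sanity check of the last property (an illustrative sketch, not part of the original notes; NumPy and random matrices are assumed). Note that AB and BA can even have different sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 3))

# Trace(AB) = Trace(BA), even though AB is 3x3 and BA is 4x4.
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))   # True
```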



Eigenvalues and Eigenvectors

• The eigenvalues and eigenvectors of A ∈ Cn×n are the λ ∈ C and x ∈ Cn with
Ax = λx;
such an x is a right eigenvector. A y satisfying
y∗A = λy∗
is a left eigenvector.
• eigenvalues: the roots of det(λI − A).
• the spectral radius: ρ(A) := max1≤i≤n |λi|.
• Jordan canonical form: for any A ∈ Cn×n there exists a nonsingular T such that
A = T J T⁻¹
where
J = diag{J1, J2, . . . , Jl},
Ji = diag{Ji1, Ji2, . . . , Jimi},
and each Jordan block Jij ∈ Cnij×nij is upper bidiagonal, with λi on the
diagonal and 1 on the superdiagonal. The transformation T has the form
T = [T1 T2 . . . Tl],
Ti = [Ti1 Ti2 . . . Timi],
Tij = [tij1 tij2 . . . tijnij]

where the tij1 are the eigenvectors of A,
A tij1 = λi tij1,
and the tijk ≠ 0 defined by the linear equations
(A − λiI) tijk = tij(k−1), k ≥ 2,
are called the generalized eigenvectors of A.

A ∈ Rn×n with distinct eigenvalues can be diagonalized:
A [x1 x2 · · · xn] = [x1 x2 · · · xn] diag{λ1, λ2, . . . , λn},
and has the following spectral decomposition:
A = λ1x1y1∗ + λ2x2y2∗ + · · · + λnxnyn∗,
where the yi ∈ Cn are given by the rows of the inverse of the eigenvector matrix:
[y1∗; y2∗; · · · ; yn∗] = [x1 x2 · · · xn]⁻¹.

• A ∈ Rn×n with a real eigenvalue λ ∈ R has a real eigenvector x ∈ Rn.
• If A is Hermitian, i.e., A = A∗, then there exists a unitary U such that A = UΛU∗,
where Λ = diag{λ1, λ2, . . . , λn} is real.
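The spectral decomposition can be verified numerically. A minimal NumPy sketch (not part of the original notes; the matrix is an arbitrary example with distinct eigenvalues), where the yi∗ are taken as the rows of the inverse of the eigenvector matrix:

```python
import numpy as np

# A = sum_i lambda_i x_i y_i*, with x_i the columns of the eigenvector
# matrix and y_i* the rows of its inverse.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])        # distinct eigenvalues 1 and 3
lam, X = np.linalg.eig(A)         # columns of X: right eigenvectors
Y = np.linalg.inv(X)              # rows of Y: the y_i*

A_rebuilt = sum(lam[i] * np.outer(X[:, i], Y[i, :]) for i in range(len(lam)))
print(np.allclose(A_rebuilt, A))  # True
```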

Matrix Inversion Formulas

(Block matrices are written [A11 A12; A21 A22], with ";" separating block rows.)

• If A11 is nonsingular, the block matrix factors as
[A11 A12; A21 A22] = [I 0; A21A11⁻¹ I] [A11 0; 0 Δ] [I A11⁻¹A12; 0 I],
where Δ := A22 − A21A11⁻¹A12.
• If A22 is nonsingular,
[A11 A12; A21 A22] = [I A12A22⁻¹; 0 I] [Δ̂ 0; 0 A22] [I 0; A22⁻¹A21 I],
where Δ̂ := A11 − A12A22⁻¹A21.
• Consequently,
[A11 A12; A21 A22]⁻¹ = [A11⁻¹ + A11⁻¹A12Δ⁻¹A21A11⁻¹   −A11⁻¹A12Δ⁻¹;   −Δ⁻¹A21A11⁻¹   Δ⁻¹]
and
[A11 A12; A21 A22]⁻¹ = [Δ̂⁻¹   −Δ̂⁻¹A12A22⁻¹;   −A22⁻¹A21Δ̂⁻¹   A22⁻¹ + A22⁻¹A21Δ̂⁻¹A12A22⁻¹].
• For block-triangular matrices,
[A11 0; A21 A22]⁻¹ = [A11⁻¹ 0;   −A22⁻¹A21A11⁻¹   A22⁻¹],
[A11 A12; 0 A22]⁻¹ = [A11⁻¹   −A11⁻¹A12A22⁻¹;   0   A22⁻¹].
• det A = det A11 det(A22 − A21A11⁻¹A12) = det A22 det(A11 − A12A22⁻¹A21).
In particular, for any B ∈ Cm×n and C ∈ Cn×m,
det [Im B; −C In] = det(In + CB) = det(Im + BC),
and for x, y ∈ Cn, det(In + xy∗) = 1 + y∗x.
• matrix inversion lemma:
(A11 − A12A22⁻¹A21)⁻¹ = A11⁻¹ + A11⁻¹A12(A22 − A21A11⁻¹A12)⁻¹A21A11⁻¹.
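The matrix inversion lemma is easy to confirm numerically. A hedged NumPy sketch (not part of the original notes; the blocks are arbitrary values, made diagonally dominant so that every inverse involved exists):

```python
import numpy as np

rng = np.random.default_rng(1)
A11 = np.eye(3) * 4 + rng.standard_normal((3, 3)) * 0.1   # well conditioned
A12 = rng.standard_normal((3, 2))
A21 = rng.standard_normal((2, 3))
A22 = np.eye(2) * 5 + rng.standard_normal((2, 2)) * 0.1

inv = np.linalg.inv
lhs = inv(A11 - A12 @ inv(A22) @ A21)
rhs = (inv(A11)
       + inv(A11) @ A12 @ inv(A22 - A21 @ inv(A11) @ A12) @ A21 @ inv(A11))
print(np.allclose(lhs, rhs))   # True
```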

Invariant Subspaces

• A subspace S ⊂ Cn is an A-invariant subspace if Ax ∈ S for every
x ∈ S.
For example, {0}, Cn , and KerA are all A-invariant subspaces.
Let λ and x be an eigenvalue and a corresponding eigenvector of
A ∈ Cn×n . Then S := span{x} is an A-invariant subspace since
Ax = λx ∈ S.
In general, let λ1, . . . , λk (not necessarily distinct) and xi be a set of
eigenvalues and a set of corresponding eigenvectors and the generalized
eigenvectors. Then S = span{x1, . . . , xk } is an A-invariant subspace
provided that all the lower rank generalized eigenvectors are included.
• An A-invariant subspace S ⊂ Cn is called a stable invariant subspace
if all the eigenvalues of A constrained to S have negative real parts.
Stable invariant subspaces are used to compute the stabilizing solutions
of the algebraic Riccati equations.
• Example: suppose
A [x1 x2 x3 x4] = [x1 x2 x3 x4] J,   J = diag{[λ1 1; 0 λ1], λ3, λ4},
with Re λ1 < 0, λ3 < 0, and λ4 > 0. Then it is easy to verify that
S1 = span{x1}   S12 = span{x1, x2}   S123 = span{x1, x2, x3}
S3 = span{x3}   S13 = span{x1, x3}   S124 = span{x1, x2, x4}
S4 = span{x4}   S14 = span{x1, x4}   S34 = span{x3, x4}
are all A-invariant subspaces. Moreover, S1, S3, S12, S13, and S123 are
stable A-invariant subspaces.

However, the subspaces
S2 = span{x2},   S23 = span{x2, x3},   S24 = span{x2, x4},   S234 = span{x2, x3, x4}
are not A-invariant subspaces, since the lower rank generalized eigenvector
x1 of x2 is not contained in them.
To illustrate, consider the subspace S23. It is an A-invariant subspace
only if Ax2 ∈ S23. Since
Ax2 = λ1x2 + x1,
Ax2 ∈ S23 would require that x1 be a linear combination of x2 and
x3, but this is impossible since x1 is independent of x2 and x3.

Vector Norms and Matrix Norms

Let X be a vector space. ‖·‖ is a norm if
(i) ‖x‖ ≥ 0 (positivity);
(ii) ‖x‖ = 0 if and only if x = 0 (positive definiteness);
(iii) ‖αx‖ = |α| ‖x‖ for any scalar α (homogeneity);
(iv) ‖x + y‖ ≤ ‖x‖ + ‖y‖ (triangle inequality)
for any x ∈ X and y ∈ X.
Let x ∈ Cn. Then we define the vector p-norm of x as
‖x‖p := (|x1|^p + · · · + |xn|^p)^(1/p), for 1 ≤ p ≤ ∞.

In particular, for p = 1, 2, ∞ we have
‖x‖1 := |x1| + · · · + |xn|;
‖x‖2 := (|x1|² + · · · + |xn|²)^(1/2);
‖x‖∞ := max1≤i≤n |xi|.
The matrix norm induced by a vector p-norm is defined as
‖A‖p := sup_{x≠0} ‖Ax‖p / ‖x‖p.

In particular, for p = 1, 2, ∞, the corresponding induced matrix norms can
be computed as
‖A‖1 = max1≤j≤n Σi |aij| (column sum);
‖A‖2 = (λmax(A∗A))^(1/2);
‖A‖∞ = max1≤i≤m Σj |aij| (row sum).
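These three formulas can be checked against NumPy's built-in matrix norms (an illustrative sketch, not part of the original notes; the matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

col_sum = max(abs(A).sum(axis=0))                     # max column sum
row_sum = max(abs(A).sum(axis=1))                     # max row sum
sigma = np.sqrt(max(np.linalg.eigvalsh(A.T @ A)))     # sqrt(lambda_max(A*A))

print(np.isclose(col_sum, np.linalg.norm(A, 1)))        # True: ||A||_1
print(np.isclose(row_sum, np.linalg.norm(A, np.inf)))   # True: ||A||_inf
print(np.isclose(sigma, np.linalg.norm(A, 2)))          # True: ||A||_2
```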

The Euclidean 2-norm has some very nice properties. Let x ∈ Fn and y ∈ Fm.
1. Suppose n ≥ m. Then ‖x‖ = ‖y‖ iff there is a matrix U ∈ Fn×m
such that x = Uy and U∗U = I.
2. Suppose n = m. Then |x∗y| ≤ ‖x‖ ‖y‖. Moreover, the equality
holds iff x = αy for some α ∈ F or y = 0.
3. ‖x‖ ≤ ‖y‖ iff there is a matrix Δ ∈ Fn×m with ‖Δ‖ ≤ 1 such that
x = Δy. Furthermore, ‖x‖ < ‖y‖ iff ‖Δ‖ < 1.
4. ‖Ux‖ = ‖x‖ for any appropriately dimensioned unitary matrix U.
Frobenius norm:
‖A‖F := (Trace(A∗A))^(1/2) = (Σi Σj |aij|²)^(1/2).

Let A and B be any matrices with appropriate dimensions. Then
1. ρ(A) ≤ ‖A‖ (this is also true for the Frobenius norm and any induced
matrix norm);
2. ‖AB‖ ≤ ‖A‖ ‖B‖; in particular, this gives ‖A⁻¹‖ ≥ ‖A‖⁻¹ if A is
invertible (this is also true for any induced matrix norm);
3. ‖UAV‖ = ‖A‖ and ‖UAV‖F = ‖A‖F for any appropriately
dimensioned unitary matrices U and V;
4. ‖AB‖F ≤ ‖A‖ ‖B‖F and ‖AB‖F ≤ ‖B‖ ‖A‖F.

Singular Value Decomposition

Let A ∈ Fm×n. There exist unitary matrices
U = [u1, u2, . . . , um] ∈ Fm×m,
V = [v1, v2, . . . , vn] ∈ Fn×n
such that
A = UΣV∗,   Σ = [Σ1 0; 0 0],   Σ1 = diag{σ1, σ2, . . . , σp},
where
σ1 ≥ σ2 ≥ · · · ≥ σp ≥ 0,   p = min{m, n}.
Singular values are good measures of the “size” of a matrix;
singular vectors are good indications of strong/weak input or output
directions. Note that
Avi = σiui and A∗ui = σivi,
so that
A∗Avi = σi²vi and AA∗ui = σi²ui.

σmax(A) = σ1 is the largest singular value of A, and
σmin(A) = σp is the smallest singular value of A.

Geometrically, the singular values of a matrix A are precisely the lengths
of the semi-axes of the hyper-ellipsoid E defined by
E = {y : y = Ax, x ∈ Cn, ‖x‖ = 1}.
Thus v1 is the direction in which ‖y‖ is largest over all ‖x‖ = 1, while
vn is the direction in which ‖y‖ is smallest over all ‖x‖ = 1:
v1 (vn) is the highest (lowest) gain input direction;
u1 (um) is the highest (lowest) gain observing direction.
For example, a 2×2 matrix
A = [cos θ1 −sin θ1; sin θ1 cos θ1] [σ1 0; 0 σ2] [cos θ2 −sin θ2; sin θ2 cos θ2]
maps the unit disk to an ellipsoid with semi-axes σ1 and σ2.
Alternative definitions:
σmax(A) := max_{‖x‖=1} ‖Ax‖,
and, for the smallest singular value of a tall matrix,
σmin(A) := min_{‖x‖=1} ‖Ax‖.

Suppose A and Δ are square matrices. Then
(i) |σmin(A + Δ) − σmin(A)| ≤ σmax(Δ);
(ii) σmin(AΔ) ≥ σmin(A) σmin(Δ);
(iii) σmin(A⁻¹) = 1/σmax(A) if A is invertible.

Some useful properties


Let A ∈ Fm×n and
σ1 ≥ σ2 ≥ · · · ≥ σr > σr+1 = · · · = 0,   r ≤ min{m, n}.
Then
1. rank(A) = r;
2. KerA = span{vr+1, . . . , vn} and (KerA)⊥ = span{v1, . . . , vr};
3. ImA = span{u1, . . . , ur} and (ImA)⊥ = span{ur+1, . . . , um};
4. A ∈ Fm×n has the dyadic expansion
A = σ1u1v1∗ + · · · + σrurvr∗ = UrΣrVr∗,
where Ur = [u1, . . . , ur], Vr = [v1, . . . , vr], and Σr = diag(σ1, . . . , σr);
5. ‖A‖F² = σ1² + σ2² + · · · + σr²;
6. ‖A‖ = σ1;
7. σi(U0AV0) = σi(A), i = 1, . . . , p, for any appropriately dimensioned
unitary matrices U0 and V0;
8. Let k < r = rank(A) and Ak := σ1u1v1∗ + · · · + σkukvk∗; then
min_{rank(B)≤k} ‖A − B‖ = ‖A − Ak‖ = σk+1.
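Properties 6 and 8 can be seen directly with NumPy's SVD (an illustrative sketch, not part of the original notes; the matrix is random):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 4))

U, s, Vh = np.linalg.svd(A)      # s holds sigma_1 >= ... >= sigma_4
k = 2
Ak = U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]   # truncated SVD: rank-k approximation

print(np.isclose(np.linalg.norm(A, 2), s[0]))        # True: ||A|| = sigma_1
print(np.isclose(np.linalg.norm(A - Ak, 2), s[k]))   # True: ||A - A_k|| = sigma_{k+1}
```

The second line is the best-approximation property: no rank-k matrix gets closer to A in the induced 2-norm than the truncated SVD.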

Generalized Inverses

Let A ∈ Cm×n. X ∈ Cn×m is a right inverse if AX = I; one right
inverse is given by X = A∗(AA∗)⁻¹. Similarly, if YA = I, then Y is a left
inverse of A.
The pseudo-inverse (or Moore–Penrose inverse) A⁺ is defined by the four conditions:
(i) AA⁺A = A;
(ii) A⁺AA⁺ = A⁺;
(iii) (AA⁺)∗ = AA⁺;
(iv) (A⁺A)∗ = A⁺A.
The pseudo-inverse is unique. If A = BC, where B has full column rank
and C has full row rank, then
A⁺ = C∗(CC∗)⁻¹(B∗B)⁻¹B∗.
Alternatively, let A = UΣV∗ with
Σ = [Σr 0; 0 0],   Σr > 0.
Then A⁺ = VΣ⁺U∗ with
Σ⁺ = [Σr⁻¹ 0; 0 0].
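NumPy's `pinv` computes this Moore-Penrose inverse, and the four defining conditions can be verified directly (an illustrative sketch on a random rank-deficient matrix; not part of the original notes):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))   # rank 2
Ap = np.linalg.pinv(A)

# The four Moore-Penrose conditions (real case: * is transpose):
print(np.allclose(A @ Ap @ A, A))        # (i)
print(np.allclose(Ap @ A @ Ap, Ap))      # (ii)
print(np.allclose((A @ Ap).T, A @ Ap))   # (iii)
print(np.allclose((Ap @ A).T, Ap @ A))   # (iv)
```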

Semidefinite Matrices

• A = A∗ is positive definite (semi-definite), denoted by A > 0 (A ≥ 0),
if x∗Ax > 0 (≥ 0) for all x ≠ 0.
• If A ∈ Fn×n and A = A∗ ≥ 0, then there exists B ∈ Fn×r with r ≥ rank(A) such that
A = BB∗.
• Let B ∈ Fm×n and C ∈ Fk×n. Suppose m ≥ k and B∗B = C∗C. Then
there exists U ∈ Fm×k such that U∗U = I and B = UC.
• The square root of a positive semi-definite matrix A is the matrix
A^(1/2) = (A^(1/2))∗ ≥ 0 defined by
A = A^(1/2)A^(1/2).
Clearly, A^(1/2) can be computed using the spectral decomposition or the
SVD: let A = UΛU∗; then
A^(1/2) = UΛ^(1/2)U∗,
where
Λ = diag{λ1, . . . , λn} and Λ^(1/2) = diag{√λ1, . . . , √λn}.
• Let A = A∗ > 0 and B = B∗ ≥ 0. Then A > B iff ρ(BA⁻¹) < 1.
• Let X = X∗ ≥ 0 be partitioned as
X = [X11 X12; X12∗ X22].
Then KerX22 ⊂ KerX12. Consequently, if X22⁺ is the pseudo-inverse
of X22, then Y = X12X22⁺ solves
YX22 = X12,
and
[X11 X12; X12∗ X22] = [I X12X22⁺; 0 I] [X11 − X12X22⁺X12∗ 0; 0 X22] [I 0; X22⁺X12∗ I].
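The eigendecomposition recipe for A^(1/2) translates directly into code. A minimal NumPy sketch (not part of the original notes; the matrix is an arbitrary positive semi-definite example built as B∗B):

```python
import numpy as np

# Square root of a positive semi-definite A via A = U Lambda U*:
# A^(1/2) = U Lambda^(1/2) U*.
B = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, -1.0]])
A = B.T @ B                        # A = B*B >= 0

lam, U = np.linalg.eigh(A)         # eigh: A is symmetric
lam = np.clip(lam, 0.0, None)      # guard against tiny negative round-off
A_half = U @ np.diag(np.sqrt(lam)) @ U.T

print(np.allclose(A_half @ A_half, A))   # True
print(np.allclose(A_half, A_half.T))     # True: A^(1/2) is symmetric
```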
Reference Textbooks

• G. F. Franklin, J. D. Powell and A. Emami-Naeini, Feedback Control of Dynamic
Systems, 3rd Edition, Addison Wesley, New York, 1994.

• B. D. O. Anderson and J. B. Moore, Optimal Control, Prentice Hall, London, 1989.

• F. L. Lewis, Applied Optimal Control and Estimation, Prentice Hall, Englewood Cliffs,
New Jersey, 1992.

• A. Saberi, B. M. Chen and P. Sannuti, Loop Transfer Recovery: Analysis and Design,
Springer, London, 1993.

• A. Saberi, P. Sannuti and B. M. Chen, H2 Optimal Control, Prentice Hall, London, 1995.

• B. M. Chen, Robust and H∞ Control, Springer, London, 2000.

Prepared by Ben M. Chen
Revision: Basic Concepts

What is a control system?

[Block diagram: the desired performance (REFERENCE) is compared with information
about the system (OUTPUT); their difference (ERROR) drives a Controller, whose
output is the INPUT to the system to be controlled (aircraft, missiles, economic
systems, cars, etc.).]

Objective: To make the system OUTPUT and the desired REFERENCE as close
as possible, i.e., to make the ERROR as small as possible.

Key Issues: 1) How to describe the system to be controlled? (Modelling)

2) How to design the controller? (Control)


Some Control Systems Examples:

[Block diagram: REFERENCE → Controller → INPUT → System to be controlled → OUTPUT,
with the output fed back. Example: desired performance (REFERENCE), government
policies (INPUT), economic system (system to be controlled).]
A Live Demonstration on Control of a Coupled-Tank System through an Internet-Based
Virtual Laboratory Developed by NUS

The objective is to control the flow levels of two coupled tanks. It is a reduced-scale
model of some commonly used chemical plants.
Modelling of Some Physical Systems

A simple mechanical system (a cruise-control system): a mass m with displacement x
and acceleration ẍ, driven by an applied force u and opposed by a friction force bẋ.

By the well-known Newton's law of motion f = ma, where f is the total force applied to an
object with mass m and a is the acceleration, we have

u − bẋ = mẍ  ⇔  ẍ + (b/m)ẋ = u/m.

This is a 2nd order ordinary differential equation in the displacement x. It can be
written as a 1st order ODE in the speed v = ẋ:

v̇ + (b/m)v = u/m  ← model of the cruise-control system; u is the input force, v is the output.
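The first-order model can be simulated directly, e.g. with a forward-Euler loop; a minimal sketch follows (the parameter values are illustrative assumptions, not from the notes). For a constant force u, the exact solution is v(t) = (u/b)(1 − e^(−(b/m)t)):

```python
import numpy as np

# Forward-Euler simulation of  v' + (b/m) v = u/m  with constant input u.
m, b, u = 1000.0, 50.0, 500.0      # mass, friction coefficient, input force
dt, T = 0.01, 200.0
steps = int(T / dt)

v = 0.0
for _ in range(steps):
    v += dt * (u - b * v) / m      # v' = (u - b v) / m

# Compare with the exact solution; steady-state speed is u/b = 10 m/s.
v_exact = (u / b) * (1.0 - np.exp(-(b / m) * T))
print(abs(v - v_exact) < 0.01)     # True
```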
A cruise-control system:

[Block diagram: the 90 km/h REFERENCE is compared with the measured speed v; a
controller (to be designed) produces the input force u that drives the model
v̇ + (b/m)v = u/m.]
Basic electrical systems:

resistor: v = iR;   capacitor: i = C dv/dt;   inductor: v = L di/dt.

Kirchhoff's Voltage Law (KVL): the sum of voltage drops around any closed loop in
a circuit is 0; e.g., v1 + v2 + v3 + v4 + v5 = 0.

Kirchhoff's Current Law (KCL): the sum of currents entering/leaving a node or closed
surface is 0; e.g., i1 + i2 + i3 + i4 + i5 = 0.
Modelling of a simple electrical system:

To find the relationship between the input (vi) and the output (vo) of an RC circuit
(a resistor R in series with a capacitor C, with the output voltage taken across the
capacitor), note that the capacitor current is i = C dvo/dt, so the resistor voltage is

vR = Ri = RC dvo/dt.

By KVL, we have vo + vR − vi = 0, i.e.,

vo + RC dvo/dt − vi = 0,

and hence

RC dvo/dt + vo = vi  ⇔  RC v̇o + vo = vi  ← a dynamic model of the circuit.
Control the output voltage of the electrical system:

[Block diagram: the 230 V REFERENCE is compared with vo; a controller produces the
input voltage vi, which drives the circuit model RC v̇o + vo = vi.]
Ordinary Differential Equations

Many real life problems can be modelled as an ODE of the following form:

ÿ(t) + a1 ẏ(t) + a0 y(t) = u(t)

This is called a 2nd order ODE, as the highest order derivative in the equation is 2. The ODE
is said to be homogeneous if u(t) = 0. In fact, many systems can be modelled or
approximated as a 1st order ODE, i.e.,

ẏ(t) + a0 y(t) = u(t)

An ODE is also called the time-domain model of the system, because it can be seen from the
above equations that y(t) and u(t) are functions of time t. The key issue associated with an
ODE is how to find its solution, that is, how to find an explicit expression for y(t) from the
given equation.
State Space Representation

Recall that many real life problems can be modelled as an ODE of the following form:

ÿ(t) + a1 ẏ(t) + a0 y(t) = u(t)

If we define the so-called state variables x1 = y and x2 = ẏ, then

ẋ1 = ẏ = x2,
ẋ2 = ÿ = −a1 ẏ − a0 y + u = −a1 x2 − a0 x1 + u.

We can rewrite these equations in a more compact (matrix) form:

[ẋ1; ẋ2] = [0 1; −a0 −a1] [x1; x2] + [0; 1] u,   y = x1 = [1 0] [x1; x2].

This is called the state space representation of the ODE or of the dynamic system.
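The state-space form translates directly into a simulation. A minimal NumPy sketch (a0, a1, and the unit-step input below are illustrative assumptions, not values from the notes):

```python
import numpy as np

# State-space form of  y'' + a1 y' + a0 y = u  with x1 = y, x2 = y'.
a0, a1 = 2.0, 3.0
A = np.array([[0.0, 1.0],
              [-a0, -a1]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Forward-Euler simulation of the step response (u = 1, zero initial state).
dt, steps = 0.001, 20000
x = np.zeros((2, 1))
for _ in range(steps):
    x = x + dt * (A @ x + B * 1.0)
y = (C @ x).item()

# For a unit step, the steady state of y'' + a1 y' + a0 y = 1 is y = 1/a0.
print(abs(y - 1.0 / a0) < 1e-3)    # True
```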
Laplace Transform and Inverse Laplace Transform

Let us first examine two time-domain functions:

[Two plots over 0 ≤ t ≤ 10 s: left, a cosine function with frequency f = 0.2 Hz
(note that it has a period T = 5 seconds); right,
x(t) = cos(0.4πt) + sin(0.8πt) cos(1.6πt). What are the frequencies of this
second function?]

The Laplace transform is a tool to convert a time-domain function into a frequency-domain
one, in which information about the frequencies of the function can be captured. It is often
much easier to solve problems in the frequency domain with the help of the Laplace transform.
Laplace Transform:

Given a time-domain function f(t), its Laplace transform is defined as

F(s) = L{f(t)} = ∫₀^∞ f(t) e^(−st) dt.

Example 1: Find the Laplace transform of the constant function f(t) = 1.

F(s) = ∫₀^∞ e^(−st) dt = [−(1/s) e^(−st)]₀^∞ = 0 − (−1/s) = 1/s,   Re(s) > 0.

Example 2: Find the Laplace transform of the exponential function f(t) = e^(−at).

F(s) = ∫₀^∞ e^(−at) e^(−st) dt = ∫₀^∞ e^(−(s+a)t) dt = [−(1/(s+a)) e^(−(s+a)t)]₀^∞ = 1/(s+a),   Re(s) > −a.
Inverse Laplace Transform

Given a frequency-domain function F(s), the inverse Laplace transform is to convert it back
to its original time-domain function f (t).

Here are some very useful Laplace and inverse Laplace transform pairs (f(t) ⇔ F(s)):

1 ⇔ 1/s                          sin(at) ⇔ a/(s² + a²)
t ⇔ 1/s²                         cos(at) ⇔ s/(s² + a²)
e^(−at) ⇔ 1/(s + a)              e^(−at) sin(bt) ⇔ b/((s + a)² + b²)
t e^(−at) ⇔ 1/(s + a)²           e^(−at) cos(bt) ⇔ (s + a)/((s + a)² + b²)
Some useful properties of Laplace transform:

1. Superposition:

L{a1 f1(t) + a2 f2(t)} = a1 L{f1(t)} + a2 L{f2(t)} = a1 F1(s) + a2 F2(s).

2. Differentiation (assuming f(0) = 0):

L{df(t)/dt} = L{ḟ(t)} = s L{f(t)} = s F(s),
L{d²f(t)/dt²} = L{f̈(t)} = s² L{f(t)} = s² F(s).

3. Integration:

L{∫₀^t f(ζ) dζ} = (1/s) L{f(t)} = (1/s) F(s).
Re-express ODE Models using Laplace Transform (Transfer Function)

Recall that the mechanical system in the cruise-control problem with m = 1 can be
represented by an ODE:
v& + bv = u

Taking Laplace transform on both sides of the equation, we obtain

L{ v& + bv} = L{ u} ⇒ L{ v&}+ L{ bv} = L{ u}

⇒ sL{ v}+ bL{ v} = L{ u} ⇒ sV ( s) + bV ( s ) = U ( s )

⇒ (s + b) V ( s) = U ( s) ⇒
V ( s)
=
1
= G (s )
U ( s) s+b

This is called the transfer function of the system model 21


A cruise-control system in frequency domain:

[Block diagram: the reference R(s) feeds a controller (driver? auto?) whose output U(s) drives the plant G(s) = 1/(s + b), producing the speed V(s).]
In general, a feedback control system can be represented by the following block diagram:

[Block diagram: R(s) → (+/−) → E(s) → K(s) → U(s) → G(s) → Y(s), with Y(s) fed back to the summing junction.]
Given a system represented by G(s) and a reference R(s), the objective of control system
design is to find a control law (or controller) K(s) such that the resulting output Y(s) is as
close to the reference R(s) as possible, or equivalently such that the error E(s) = R(s) − Y(s)
is as small as possible. In real-life problems, however, many other factors must be carefully
considered, including uncertainties in the model, external disturbances, measurement noises,
and nonlinearities entering the loop.
Control Techniques – A Brief View:

A vast amount of research has been published in the literature on how to design control laws
for various purposes. It can be roughly classified as follows:

♦ Classical control: Proportional-integral-derivative (PID) control, developed in the 1940s and
  used for control of industrial processes. Examples: chemical plants, commercial aeroplanes.

♦ Optimal control: Linear quadratic regulator (LQR) control, Kalman filter, H2 control, developed
  in the 1960s to achieve certain optimal performance; its growth was spurred by the NASA
  Apollo Project.

♦ Robust control: H∞ control, developed in the 1980s and 90s to handle systems with
  uncertainties and disturbances while maintaining high performance. Example: military systems.

♦ Nonlinear control: Currently a hot research topic, developed to handle nonlinear systems
  with high performance. Examples: military systems such as aircraft and missiles.

♦ Intelligent control: Knowledge-based control, adaptive control, neural and fuzzy control, etc.,
  researched heavily in the 1990s, developed to handle systems with unknown models.
  Examples: economic systems, social systems, human systems.
Classical Control

Let us examine the following block diagram of a control system:

[Block diagram: R(s) → (+/−) → E(s) → K(s) → U(s) → G(s) → Y(s), with Y(s) fed back to the summing junction.]

Recall that the objective of control system design is to match the output Y(s) to the
reference R(s). Thus, it is important to find the relationship between them. Recall that

G(s) = Y(s)/U(s)  ⇒  Y(s) = G(s)U(s)

Similarly, we have U(s) = K(s)E(s) and E(s) = R(s) − Y(s). Thus,

Y(s) = G(s)U(s) = G(s)K(s)E(s) = G(s)K(s)[R(s) − Y(s)]

Y(s) = G(s)K(s)R(s) − G(s)K(s)Y(s)  ⇒  [1 + G(s)K(s)]Y(s) = G(s)K(s)R(s)

⇒  H(s) = Y(s)/R(s) = G(s)K(s)/(1 + G(s)K(s))     (closed-loop transfer function from R to Y)
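The closed-loop composition above can be mechanized with plain polynomial arithmetic on coefficient lists (highest power first). A sketch in Python; the numbers a = 1, b = 2, kp = 3, ki = 4 are illustrative, not from the slides:

```python
def polymul(p, q):
    # multiply two polynomials given as coefficient lists (highest power first)
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, c in enumerate(q):
            r[i + j] += a * c
    return r

def polyadd(p, q):
    # pad to equal length, then add coefficient-wise
    n = max(len(p), len(q))
    p = [0.0] * (n - len(p)) + list(p)
    q = [0.0] * (n - len(q)) + list(q)
    return [a + c for a, c in zip(p, q)]

def closed_loop(Ng, Dg, Nk, Dk):
    """H = GK/(1 + GK): numerator Ng·Nk, denominator Dg·Dk + Ng·Nk."""
    num = polymul(Ng, Nk)
    den = polyadd(polymul(Dg, Dk), num)
    return num, den

# G(s) = b/(s+a), K(s) = (kp s + ki)/s with a=1, b=2, kp=3, ki=4
num, den = closed_loop([2.0], [1.0, 1.0], [3.0, 4.0], [1.0, 0.0])
print(num, den)   # [6.0, 8.0] and [1.0, 7.0, 8.0]: H = (6s + 8)/(s² + 7s + 8)
```

The denominator matches the formula s² + (a + b·kp)s + b·ki = s² + 7s + 8 derived later for PI control of a first-order plant.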
Thus, the block diagram of the control system can be simplified as,

[Block diagram: R(s) → H(s) = G(s)K(s)/(1 + G(s)K(s)) → Y(s)]

The whole control problem becomes how to choose an appropriate K(s) such that the
resulting H(s) would yield desired properties between R and Y.

We'll focus on control system design of some first-order systems G(s) = b/(s + a) with a
proportional-integral (PI) controller, K(s) = kp + ki/s = (kp s + ki)/s. This implies

H(s) = G(s)K(s)/(1 + G(s)K(s)) = (b kp s + b ki)/(s² + (a + b kp)s + b ki)

The closed-loop system H(s) is a second-order system, as its denominator is a polynomial in s
of degree 2.
Stability of Control Systems

Example 1: Consider a closed-loop system with R(s) = 1 and

H(s) = 1/(s² − 1)

We have

Y(s) = H(s)R(s) = 1/(s² − 1) = 1/((s + 1)(s − 1)) = 0.5/(s − 1) − 0.5/(s + 1)

Using the Laplace transform pairs e^(−t) ⇔ 1/(s + 1) and e^t ⇔ 1/(s − 1), we obtain

y(t) = 0.5(e^t − e^(−t))

[Figure: y(t) grows without bound, exceeding 10000 by t = 10 seconds.]

This system is said to be unstable because the output response y(t) goes to infinity as time t
gets larger and larger. This happens because the denominator of H(s) has one positive root at
s = 1.
Example 2: Consider a closed-loop system with R(s) = 1 and

H(s) = 1/(s² + 3s + 2)

We have

Y(s) = H(s)R(s) = 1/(s² + 3s + 2) = 1/((s + 1)(s + 2)) = 1/(s + 1) − 1/(s + 2)

Using the Laplace transform table, we obtain y(t) = e^(−t) − e^(−2t)

[Figure: y(t) rises to a peak of 0.25 and then decays to 0.]

This system is said to be stable because the output response y(t) goes to 0 as time t gets
larger and larger. This happens because the denominator of H(s) has no roots with positive
real part.
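Evaluating the two inverse transforms from Examples 1 and 2 confirms the qualitative behavior: one response blows up while the other decays. A quick numerical check:

```python
import math

y_unstable = lambda t: 0.5 * (math.exp(t) - math.exp(-t))   # Example 1: H(s) = 1/(s² − 1)
y_stable   = lambda t: math.exp(-t) - math.exp(-2 * t)      # Example 2: H(s) = 1/(s² + 3s + 2)

print(y_unstable(10.0))           # ≈ 1.1e4: grows without bound
print(y_stable(10.0))             # ≈ 4.5e-5: decays to zero
print(y_stable(math.log(2.0)))    # peak value 0.25, reached at t = ln 2
```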
28
Prepared by Ben M. Chen
We consider a general 2nd-order system,

H(s) = ωn²/(s² + 2ζωn s + ωn²)

The system is stable if all roots of the denominator, s² + 2ζωn s + ωn² = 0, have negative
real parts (ζ > 0); it is marginally stable if the roots lie on the imaginary axis (ζ = 0); and it
is unstable if any root has a positive real part (ζ < 0).

[Figure: responses illustrating the stable, marginally stable, and unstable cases.]
Stability in the State Space Representation

Consider a general linear system characterized by a state space form,

ẋ = Ax + Bu
y = Cx + Du
Then,
1. It is stable if and only if all the eigenvalues of A are in the open left-half plane.

2. It is marginally stable if and only if all the eigenvalues of A are in the closed left-half
   plane, with those on the imaginary axis being simple.

3. It is unstable if and only if A has at least one eigenvalue in the open right-half plane.

[Figure: complex plane with the left-half plane (L.H.P.) marked as the stable region and the
right-half plane (R.H.P.) as the unstable region.]
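For a 2×2 state matrix the eigenvalues come straight from the characteristic polynomial λ² − tr(A)λ + det(A) = 0, so the eigenvalue stability test is a few lines of code. A sketch with illustrative matrices (not from the slides):

```python
import cmath

def eig2(a11, a12, a21, a22):
    """Eigenvalues of a 2x2 matrix from λ² − tr(A)·λ + det(A) = 0."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def is_stable(a11, a12, a21, a22):
    # stable iff every eigenvalue lies in the open left-half plane
    return all(lam.real < 0 for lam in eig2(a11, a12, a21, a22))

# A = [[0, 1], [-2, -3]] has eigenvalues -1 and -2 → stable
print(is_stable(0, 1, -2, -3))    # True
# A = [[0, 1], [1, 0]] has eigenvalues ±1 → unstable
print(is_stable(0, 1, 1, 0))      # False
```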

Lyapunov Stability

Consider a general dynamic system, ẋ = f(x). If there exists a so-called Lyapunov function
V(x), which satisfies the following conditions:

1. V(x) is continuous in x and V(0) = 0;

2. V(x) > 0 (positive definite);

3. V̇(x) = (∂V/∂x) f(x) < 0 (negative definite),
then we can say that the system is asymptotically stable at x = 0. If in addition,

V(x) → ∞ as ‖x‖ → ∞

then we can say that the system is globally asymptotically stable at x = 0. In this case, the
stability is independent of the initial condition x(0).

Lyapunov Stability for Linear Systems

Consider a linear system, ẋ = Ax. The system is asymptotically stable (i.e., the eigenvalues
of the matrix A are all in the open left-half plane) if, for any given real positive definite
matrix Q = Qᵀ > 0 of appropriate dimension, there exists a real positive definite solution
P = Pᵀ > 0 of the following Lyapunov equation:

AᵀP + PA = −Q
Proof. Define a Lyapunov function V(x) = xᵀPx. Obviously, the first and second conditions
on the previous page are satisfied. Now consider

V̇(x) = ẋᵀPx + xᵀPẋ = (Ax)ᵀPx + xᵀPAx = xᵀ(AᵀP + PA)x = −xᵀQx < 0

Hence, the third condition is also satisfied. The result follows.

Note that the condition Q = Qᵀ > 0 can be replaced by Q = Qᵀ ≥ 0 with (A, Q^(1/2)) being
detectable.
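The Lyapunov equation is linear in the entries of P, so in the 2×2 case it reduces to three scalar equations. A small sketch; the matrix A = [[0, 1], [−2, −3]] and Q = I are illustrative choices, not from the slides:

```python
def lyap2(A, Q):
    """Solve Aᵀ P + P A = −Q for symmetric P = [[p1, p2], [p2, p3]] in the 2x2 case,
    by rewriting the equation as a 3x3 linear system in (p1, p2, p3)."""
    (a, b), (c, d) = A
    q1, q2, q3 = Q[0][0], Q[0][1], Q[1][1]
    # Entries of Aᵀ P + P A expressed in p1, p2, p3:
    #   (1,1): 2a·p1 + 2c·p2            = −q1
    #   (1,2): b·p1 + (a+d)·p2 + c·p3   = −q2
    #   (2,2): 2b·p2 + 2d·p3            = −q3
    M = [[2 * a, 2 * c, 0.0],
         [b, a + d, c],
         [0.0, 2 * b, 2 * d]]
    rhs = [-q1, -q2, -q3]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        rhs[i], rhs[piv] = rhs[piv], rhs[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for j in range(i, 3):
                M[r][j] -= f * M[i][j]
            rhs[r] -= f * rhs[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (rhs[i] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    p1, p2, p3 = x
    return [[p1, p2], [p2, p3]]

P = lyap2([[0.0, 1.0], [-2.0, -3.0]], [[1.0, 0.0], [0.0, 1.0]])
print(P)   # [[1.25, 0.25], [0.25, 0.25]]: P > 0, consistent with A being stable
```

One can confirm positive definiteness by checking P[0][0] > 0 and det(P) = 5/16 − 1/16 > 0.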
Behavior of Second Order Systems with a Step Input

Again, consider the following block diagram with a standard 2nd-order system, a unit step
reference R(s) = 1/s (r = 1), and

H(s) = ωn²/(s² + 2ζωn s + ωn²)

[Figure: family of step responses for different values of ζ.]

The behavior of the system is fully characterized by ζ, which is called the damping ratio,
and ωn, which is called the natural frequency.

Control System Design with Time-domain Specifications

[Block diagram: step reference R(s) = 1/s (r = 1) into H(s) = ωn²/(s² + 2ζωn s + ωn²), output Y(s).]

[Figure: step response annotated with the overshoot Mp, the rise time tr, and the 1% settling
time ts.]

rise time:           tr ≈ 1.8/ωn
1% settling time:    ts ≈ 4.6/(ζωn)
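These rules of thumb are one-liners in code. A sketch evaluating them at ζ = 0.6 and ωn = 0.767, the values used in the cruise-control design:

```python
# Rule-of-thumb time-domain formulas:
#   rise time tr ≈ 1.8/ωn,  1% settling time ts ≈ 4.6/(ζ·ωn)
def rise_time(wn):
    return 1.8 / wn

def settling_time(zeta, wn):
    return 4.6 / (zeta * wn)

print(rise_time(0.767))             # ≈ 2.35 s
print(settling_time(0.6, 0.767))    # ≈ 10 s
```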
PID Design Technique:

[Block diagram: R(s) → (+/−) → E(s) → K(s) → U(s) → G(s) → Y(s), with Y(s) fed back to the summing junction.]

With G(s) = b/(s + a) and K(s) = kp + ki/s = (kp s + ki)/s, this results in the closed-loop
system:

H(s) = Y(s)/R(s) = G(s)K(s)/(1 + G(s)K(s)) = (b kp s + b ki)/(s² + (a + b kp)s + b ki)

Comparing this with the standard 2nd-order system H(s) = ωn²/(s² + 2ζωn s + ωn²):

2ζωn = a + b kp   ⇒   kp = (2ζωn − a)/b
ωn² = b ki        ⇒   ki = ωn²/b

The key issue now is to choose the parameters kp and ki such that the resulting system has
desired properties, such as a prescribed settling time and overshoot.
Cruise-Control System Design
Recall the model for the cruise-control system, i.e., V(s)/U(s) = (1/m)/(s + b/m). Assume that
the mass of the car is m = 3000 kg and the friction coefficient is b = 1. Design a PI controller
for it such that the speed of the car will reach the desired speed of 90 km/h in 10 seconds
(i.e., the settling time is 10 s) and the maximum overshoot is less than 25%.

To achieve an overshoot of less than 25%, we obtain from the overshoot-vs-ζ chart that
ζ > 0.4. To be safe, we choose ζ = 0.6.

To achieve a settling time of 10 s, we use

ts = 4.6/(ζωn)   ⇒   ωn = 4.6/(ζ ts) = 4.6/(0.6 × 10) = 0.767
The transfer function of the cruise-control system, with m = 3000,

G(s) = Y(s)/U(s) = (1/m)/(s + b/m) = (1/3000)/(s + 1/3000)   ⇒   a = b = 1/3000 ≈ 0.000333

Again, using the formulae derived,

kp = (2ζωn − a)/b = (2 × 0.6 × 0.767 − 1/3000)/(1/3000) ≈ 2760

ki = ωn²/b = 0.767²/(1/3000) ≈ 1765
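The arithmetic above is easy to reproduce in code. A sketch plugging the slide's numbers into the gain formulas:

```python
# PI gains from kp = (2ζωn − a)/b and ki = ωn²/b,
# with a = b = 1/3000, ζ = 0.6, ωn = 0.767 (the cruise-control design values)
a = b = 1.0 / 3000.0
zeta, wn = 0.6, 0.767
kp = (2 * zeta * wn - a) / b
ki = wn ** 2 / b
print(round(kp), round(ki))   # 2760 1765
```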

The final cruise-control system:

[Block diagram: reference 90 km/h into the PI controller K(s) = 2760 + 1765/s, which drives
the car model to produce the speed.]
Simulation Result:

[Figure: closed-loop speed response in km/h versus time in seconds.]

The resulting overshoot is less than 25% and the settling time is about 10 seconds. Thus, our
design goal is achieved.

Bode Plots

Consider the following feedback control system,

[Block diagram: r → (+/−) → e → K(s) → G(s) → y, with y fed back to the summing junction.]

Bode plots are the magnitude and phase responses of the open-loop transfer function,
i.e., K(s)G(s), with s replaced by jω. For example, for the ball and beam system we
considered earlier, we have

K(s)G(s)|_(s=jω) = 10(0.37 + 0.23s)/s² |_(s=jω) = (3.7 + 2.3s)/s² |_(s=jω) = (3.7 + j2.3ω)/(−ω²)

|K(jω)G(jω)| = √(3.7² + (2.3ω)²)/ω²,   ∠K(jω)G(jω) = tan⁻¹(2.3ω/3.7) − 180°
Bode magnitude and phase plots of the ball and beam system:

[Figure: magnitude in dB and phase in degrees versus frequency in rad/sec.]
Gain and phase margins

[Figure: Bode plots annotated with the gain margin, read at the phase crossover frequency,
and the phase margin, read at the gain crossover frequency.]
Nyquist Plot

Instead of separating into magnitude and phase diagrams as in Bode plots, the Nyquist plot
maps the open-loop transfer function K(s)G(s) directly onto the complex plane, e.g.,

[Figure: Nyquist plot of K(jω)G(jω) in the complex plane.]
Gain and phase margins

The gain margin and phase margin can also be found from the Nyquist plot by zooming into
the region near the critical point −1 + j0.

[Figure: Nyquist plot near −1 + j0, with the gain margin (GM) and phase margin (PM) marked.]

Remark: The gain margin is the maximum additional gain you can apply to the loop such that
the closed-loop system still remains stable. Similarly, the phase margin is the maximum
additional phase lag the loop can tolerate such that the closed-loop system still remains
stable.

Mathematically,

GM = 1/|K(jωp)G(jωp)|, where the phase crossover frequency ωp is such that ∠K(jωp)G(jωp) = −180°

PM = ∠K(jωg)G(jωg) + 180°, where the gain crossover frequency ωg is such that |K(jωg)G(jωg)| = 1
Example: Gain and phase margins of the ball and beam system: PM = 58°, GM = ∞
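These margins can be computed numerically from the closed-form magnitude and phase of the ball-and-beam loop. A sketch using bisection for the gain crossover; it recovers ωg ≈ 2.68 rad/s and PM ≈ 59°, consistent with the ≈58° quoted on the slide:

```python
import math

# Loop transfer function of the ball-and-beam example: K(jω)G(jω) = (3.7 + j2.3ω)/(−ω²)
def mag(w):
    return math.sqrt(3.7**2 + (2.3 * w)**2) / w**2

def phase_deg(w):
    return math.degrees(math.atan(2.3 * w / 3.7)) - 180.0

# Gain crossover: bisect for |KG| = 1 (mag is strictly decreasing in ω)
lo, hi = 0.1, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mag(mid) > 1.0:
        lo = mid
    else:
        hi = mid
wg = 0.5 * (lo + hi)
pm = phase_deg(wg) + 180.0
print(wg, pm)   # ωg ≈ 2.68 rad/s, PM ≈ 59°
# The phase stays strictly above −180° for all ω > 0, so the gain margin is infinite.
```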

[Figure: the same Bode plots as before, annotated with PM and GM.]
