Control Systems Course Notes

Jan 15, 2019

© All Rights Reserved

Bruce A. Francis

Electrical and Computer Engineering Department

University of Toronto

Contents

Preface 3

1 Introduction 4
1.1 Familiar examples of control systems 4
1.2 What is in this book 4
1.3 MATLAB and Scilab 5
1.4 Notation 5
1.5 Problems 5

2 Mathematical Models of Physical Systems 8
2.1 Block diagrams 8
2.2 State equations 15
2.3 Sensors and actuators 26
2.4 Linearization 27
2.5 Interconnections of linear subsystems 31
2.6 “Nothing is as abstract as reality” 35
2.7 Problems 35

3.1 Definition of the Laplace transform 43
3.2 Linearity and convolution 49
3.3 Pole locations 51
3.4 Bounded signals and the final-value theorem 53
3.5 Transfer functions 55
3.6 Stability 62
3.7 Problems 69

4.1 The cart-pendulum example 77
4.2 The standard feedback loop 81
4.3 Tracking a reference signal 85
4.4 Principle of the argument 88
4.5 Nyquist stability criterion 91
4.6 Examples of drawing Nyquist plots 93
4.7 Using the Nyquist criterion 96
4.8 Bode plots 101
4.9 Problems 105

5.1 Loopshaping 115
5.2 Phase lag controller 117
5.3 Phase lead controller 122
5.4 Limitations 125
5.5 Case Study: Buck Converter 130
5.6 Problems 140

Preface

This first course in control systems is aimed at students in electrical, mechanical, and aerospace

engineering. Such students will have taken linear algebra, calculus, physics (mechanics), and electric

circuits. Electric circuits is ideal background because its tools include differential equations, Laplace

transforms, and transfer functions. This control course is normally given in the third or fourth

undergraduate year.

The subject of this first course is classical control. That means the systems to be controlled

are single-input, single-output and stable except possibly for a single pole at the origin, and design

is done in the frequency domain, usually with Bode diagrams. A second undergraduate course,

sometimes called modern control although that term is out of date, would normally be a state-space

course, meaning multi-input, multi-output and possibly unstable plants, with controllers being

observer based or optimal with respect to a linear-quadratic criterion. The consensus

among educators seems to be that every electrical engineering graduate should know the rudiments

of control systems, and that the rudiments reside in classical control.

However, state models, normally relegated to the second course, play an important role in the

first third of this course because they are very efficient and unifying. From a nonlinear state model,

it is easy to linearize at an equilibrium just by computing Jacobians. It is also natural to introduce

the important concept of stability in the state equation framework.

On the other hand, classical control is a course that follows electric circuit theory in the standard

curriculum. Circuit theory introduces phasors, i.e., sinusoidal steady state, Laplace transforms,

transfer functions, poles and zeros, and Bode plots. Classical control should build on that base, and

therefore transfer functions and their poles and zeros play a major role. In particular, pole locations

are used, not only in the final-value theorem, but also to characterize boundedness of signals. This is

possible because only signals having rational Laplace transforms are considered. This is, of course,

a severe restriction, but is justified under the belief that undergraduate engineering students should

not be burdened with unfamiliar mathematics just for the sake of generality.

The main topics of the course are modeling of physical systems and obtaining state equations,

linearization, Laplace transforms, how poles are related to signal behaviour, transfer functions, sta-

bility, feedback loops and stability, the Nyquist criterion, stability margins, elementary loopshaping,

and limitations to performance. There is also a case study of a DC-DC converter. The book is

intended to be just long enough to cover classical control.

To students

This is a first course on the subject of feedback control. For many of you it will be your one and

only course on the subject, and therefore the aim is to give both some breadth and some depth.

From this course it is hoped that you learn the following:


– The value of block diagrams.

– How to find equilibria and linearize the model at an equilibrium (if there is one).

– The beautiful Nyquist stability criterion and the meaning of stability margin.

Thanks

This book logically precedes the book Feedback Control Theory that I co-wrote with John Doyle

and Allen Tannenbaum. I’m very grateful to them for that learning experience.

I first taught classical control at the University of Waterloo during 1982–84. In fact, because of

the co-op program I taught it (EE380) six times in three years. Thanks to the many UW students

who struggled while I learned how to teach. In 1984 I came back to the University of Toronto and

I got to teach classical control to Engineering Science students. What a great audience!

Chapter 1

Introduction

Without control systems there could be no manufacturing, no vehicles, no computers, no regulated environment—in short, no technology. Control systems are what make machines, in the broadest

sense of the term, function as intended. This course is about the modeling of physical systems and

the analysis and design of feedback control systems.

1.1 Familiar examples of control systems

We begin with some examples that everyone knows. A familiar example of a control system is

the one we all have that regulates our internal body temperature at 37 °C. As we move from a

warm room to the cold outdoors, our body temperature is maintained, or regulated. Nature has

many other interesting examples of control systems: Large numbers of fish move in schools, their

formations controlled by local interactions; the population of a species, which varies over time, is a

result of many dynamic interactions with predators and food supplies. And human organizations

are controlled by regulations and regulatory agencies.

Other familiar examples of control systems: autofocus mechanism in cameras, cruise control and

anti-lock brakes in cars, thermostat temperature control systems. The Google car is perhaps the

most impressive example of this type: You get in your car, program your destination, and your car

drives you there safely under computer control.

Systems and control is an old subject. That is because when machines were invented, their con-

trollers had to be invented too, for otherwise the machines would not work. The control engineering

that is taught today originated with the invention of electric machines—motors and generators.

Today the subject is taught in the following way. First comes electric circuit theory. That is

signals and systems for a specific system. Then comes abstract signals and systems theory. And

then come the applications of that theory, namely communication systems and control systems. So

one must view the subject of feedback control as part of the chain: circuit theory −→ signals and systems −→ applications (communications and control).

The most important principle in control is feedback. This doesn’t mean just that a control

system has sensors and that control action is based on measured signals. It means that the signal to


be controlled is sensed, compared to a desired reference, and action then taken based on the error.

This is the structure of a feedback loop. And because the action is based on the error, instability can

result.1 If unstable, the machine is not operable. This raises the fundamental problem of feedback

control: To design a feedback controller that acts fast enough and provides acceptable performance,

including stability, in spite of unavoidable modelling errors. This, then, is what the course is about:

4. Having a mathematical tool to test for stability of a feedback loop. For us that is the Nyquist

criterion.

1.3 MATLAB and Scilab

MATLAB, which stands for matrix laboratory, is a commercial software application for matrix

computations. It has related toolboxes, one of which is the Control Systems Toolbox. Scilab is

similar but with the noteworthy difference that it is free. These tools, MATLAB especially, are used

in industry for control system analysis and design. You must become familiar with it. It is a script

language and easy to learn. There are tutorials on the Web, for example at the Mathworks website.

1.4 Notation

Generally, signals are written in lower case, for example, x(t). Their transforms are capitalized:

X(s) is the Laplace transform of x(t). Resistance values, masses, and other physical constants are

capitals: R, M , etc. In signals and systems the unit step is usually denoted u(t), but in control u(t)

denotes a plant input. Following Zemanian,2 we denote the unit step by 1+ (t).

1.5 Problems

1. Find on the Web three examples of control systems not mentioned in this chapter. Make your

list as diverse as possible.

2. Imagine yourself flying a model helicopter with a hand-held radio controller. Draw the block

diagram with you in the loop. (We will formally do block diagrams in the next chapter. This

problem is to get you thinking, although you are free to read ahead.)

1 Try to balance a pencil vertically on your hand. You won’t be able to do it.
2 Distribution Theory and Transform Analysis, A.H. Zemanian, Dover.

3. A fourth-year student who took ECE311 last year has designed an amazing device: a cube-shaped wooden box that can balance itself on one of its edges; while it is balanced on the edge, if you tap it lightly it rights itself. How does it work? That is, draw a schematic diagram of the mechatronic system.

4. Historically, control systems go back at least to ancient Greek times. More recently, in 1769

a feedback control system was invented by James Watt: the flyball governor for speed control

of a steam engine. Sketch the flyball governor and explain in a few sentences how it works.

(Stability of this control system was studied by James Clerk Maxwell, whose equations you

know and love.)

5. The topic of vehicle formation is of current research interest. Consider two cars that can move

in a horizontal straight line. Let car #1 be the leader and car #2 the follower. The goal is

for car #1 to follow a reference speed and for car #2 to maintain a specified distance behind.

Discuss how this might be done.

Solution In broad terms, we need to control the speed of car #1 and then the distance from

car #2 to car #1. Regulating the speed of a car at a desired constant value is cruise control. So

a cruise controller must be put on car #1. With this in place, on car #2 we need a proximity

controller to regulate the speed of car #2 so that the distance requirement is achieved.

Let us fill in some details and bring in some ideas that will come later in the course. The

best way to work on a problem like this is to try to get a block diagram. Figure 1.1 has two

diagrams. The top one is a schematic diagram where the two cars are represented as boxes

on wheels. They are moving to the right. The symbols y1 and y2 represent their positions

with respect to a reference point; any reference point will do. These positions are functions of

time. The symbols u1 and u2 represent forces applied to the cars, forces produced by engines

of some type. Let the velocities of the two cars be denoted v1 and v2 . Presumably, the forces

are to be used to achieve the two given specifications, v1 equals some desired constant vref ,

and y1 − y2 equals some desired value dref .

Back to Figure 1.1, the bottom diagram is a block diagram, where blocks denote subsystems

or functional relationships and where arrows denote signals. Let us begin with the part of

the diagram containing the symbol u1 . This is a force, and you know from Newton’s second

law that force equals mass times acceleration. So if we integrate force and divide by the mass

we get the velocity v1 . This is shown in the diagram by the block with input u1 and output

v1 . The desired value of v1 is vref . The feedback principle, as we will learn in this course, is

to define the velocity error ev = vref − v1 , and then to take corrective action: reduce ev if it

is positive, increase ev if it is negative. We can increase or decrease ev only by increasing or

decreasing the force u1 . This is shown in the block diagram by a block, the controller, with

input ev and output the force u1 . Finally, the lower loop is a feedback loop whose goal is to

get dy, defined as y1 − y2, equal to dref. The controller from ey to u2 has to be designed to

do that.

Figure 1.1: The two-cars problem. Top: A schematic diagram of the two cars (positions y1, y2; forces u1, u2). Bottom: A block diagram (signals vref, ev, u1, v1, y1 and dref, ey, u2, v2, y2, dy).
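The scheme described in the solution can be checked numerically. The following pure-Python sketch (not from the text: the masses, controller gains, and time step are illustrative assumptions) integrates Newton's law for both cars by forward Euler, with a proportional cruise controller on car #1 and a proportional-derivative proximity controller on car #2:

```python
# Forward-Euler simulation of the two-car scheme described above.
# All numerical values (masses, gains, time step) are illustrative assumptions.
M1, M2 = 1000.0, 1000.0        # car masses (kg)
vref, dref = 20.0, 10.0        # desired speed of car #1 (m/s) and gap (m)
kp1 = 2000.0                   # cruise-control gain: u1 = kp1 * ev
kp2, kd2 = 400.0, 1500.0       # proximity (PD) gains for car #2

dt, T = 0.01, 60.0
y1 = v1 = 0.0                  # leader starts at rest at the origin
y2, v2 = -5.0, 0.0             # follower starts 5 m behind, at rest

t = 0.0
while t < T:
    ev = vref - v1                             # velocity error for car #1
    u1 = kp1 * ev                              # force on car #1
    gap = y1 - y2                              # current separation
    u2 = kp2 * (gap - dref) + kd2 * (v1 - v2)  # force on car #2
    v1 += (u1 / M1) * dt                       # Newton: M v' = u, Euler step
    v2 += (u2 / M2) * dt
    y1 += v1 * dt
    y2 += v2 * dt
    t += dt

print(round(v1, 2), round(y1 - y2, 2))  # v1 approaches vref, gap approaches dref
```

With these gains v1 settles to vref and the gap y1 − y2 settles to dref; badly chosen gains can make the loops oscillate or go unstable, which is exactly the design issue taken up later in the course.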

Chapter 2

Mathematical Models of Physical Systems

It is part of being human to try to understand the world, natural and human-designed, and our

way of understanding something is to formulate a model of it. All of physics—Kepler’s laws of

planetary motion, Newton’s laws of motion, his law of gravity, Maxwell’s laws of electromagnetism,

the Navier-Stokes equations of fluid flow, and so on—all are models of how things behave. Control

engineering is an ideal example of the process of modelling and of design based on models.

In this chapter we present a common type of model called state equations. The nonlinear form

is

ẋ = f (x, u)
y = h(x, u)

and the linear, time-invariant form is

ẋ = Ax + Bu
y = Cx + Du.

Here u, x, y are vectors that are functions of time. The dot, ẋ, signifies derivative with respect to

time. The components of the vector u(t) are the inputs to the system; an input is an independent

signal. The components of y(t) are the outputs, the dependent signals we are interested in. And

the vector x(t) is the state; its purpose and meaning will be developed in the chapter. The four

symbols A, B, C, D are matrices with real coefficients. These equations are linear, because Ax + Bu

and Cx + Du are linear functions of (x, u), and time-invariant, because the coefficient matrices

A, B, C, D do not depend on time.

2.1 Block diagrams

The importance of block diagrams in control engineering cannot be overemphasized. One could

easily argue that you don’t understand your system until you have a block diagram of it. This

section teaches how to draw block diagrams.


CHAPTER 2. MATHEMATICAL MODELS OF PHYSICAL SYSTEMS 9

Figure 2.1: The curve on the left is the graph of a function; the curve on the right is not.

Figure 2.2: Block diagram of the function y = f (x).

1. Block diagram of a function. Let us recall what a function is. If X and Y are two sets, a

function from X to Y is a rule that assigns to each element of X a unique element of Y . The

terms function, mapping, and transformation are synonymous. The notation

f : X −→ Y

means that X and Y are sets and f is a function from X to Y . We typically write y = f (x)
for a function. To repeat, for each x there must be a unique y such that y = f (x); y1 = f (x)
and y2 = f (x) with y1 ≠ y2 is not allowed.1 Now let f be a function R −→ R. This means

that f takes a real variable x and assigns a real variable y, written y = f (x). So f has a

graph in the (x, y) plane. For f to be a function, every vertical line must intersect the graph

in a unique point, as shown in Figure 2.1. Figure 2.2 shows the block diagram of the function

y = f (x). Thus a box represents a function and the arrows represent variables; the input is

the independent variable, the output the dependent variable.

2. Electric circuit example. Figure 2.3 shows a simple RC circuit with voltage source u. We

consider u to be an independent variable and the capacitor voltage y to be a dependent

variable. Thinking of the circuit as a system, we view u as the input and y as the output. Let

us review how we could compute the output knowing the input. The familiar circuit equation

is

RC ẏ + y = u.

Since this is a first-order differential equation, given a time t > 0, to compute the voltage y(t)

at that time we would need an initial condition, say y(0), together with the input voltage u(τ )

1 The term multivalued function is sometimes used if two different values are allowed, but we shall not use that term.


Figure 2.3: RC circuit: voltage source u, resistor R, capacitor C, capacitor voltage y.

Figure 2.4: Block diagram of the RC circuit: input u, output y.

over the time range from τ = 0 to τ = t. So symbolically we have

y = f (u, y(0)),

which says that the signal y is a function of the signal u and the initial voltage y(0). A block

diagram doesn’t usually show initial conditions, so for this circuit the block diagram would

be Figure 2.4. Inside the box we could write the differential equation or the transfer function,

which you might remember is 1/(RCs + 1). Here’s a subtle point: In the block diagram

should the signals be labelled u and y or u(t) and y(t)? Both are common, but the second

may suggest that y at time t is a function of u only at time t and not earlier. This, of course,

is false.
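As a numerical aside (not in the text; R, C, and the step size are illustrative values), the input-output map of this circuit is easy to compute. The sketch below integrates RC ẏ + y = u by forward Euler for a unit-step input and compares the result with the exact step response y(t) = 1 − e^(−t/RC) from a zero initial condition:

```python
import math

# Forward-Euler solution of RC y' + y = u for a unit step u, with y(0) = 0.
# R and C are illustrative values (time constant RC = 1 s).
R, C = 1000.0, 1e-3
tau = R * C
dt, T = 1e-4, 5.0

y, t = 0.0, 0.0
while t < T:
    u = 1.0                      # unit step input
    y += dt * (u - y) / tau      # y' = (u - y) / (RC)
    t += dt

y_exact = 1.0 - math.exp(-T / tau)   # exact step response at t = T
print(abs(y - y_exact))              # small discretization error
```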

3. Mechanics example. The simplest vehicle to control is a cart on wheels. Figure 2.5 depicts

the situation where the cart can move only in a straight line on a flat surface. There are two

arrows in the diagram. One represents a force applied to the cart; this has the label u, which is
a force in Newtons. The direction of the arrow is just a reference direction that we are free to
choose. With the arrow as shown, if u is positive, then the force is to the right; if u is negative,
the force is to the left. The second arrow, the one with a barb on it, depicts the position of
the center of mass of the cart measured from a stationary reference position. The symbol y

Figure 2.5: Cart on wheels: applied force u, position y.

Figure 2.6: Block diagram of the cart: input u, output y.

Figure 2.7: The first summing junction stands for y = u + v; the second for y = u − v.

stands for position in meters. This is a schematic diagram, not a block diagram, because

it doesn’t say which of u, y causes the other. Newton’s second law tells us that there’s a

mathematical relationship between u and y, namely, u = M ÿ, that is, force equals mass times

acceleration. We take the viewpoint that u is an independent variable, and thus it is viewed

as an input. Recall that M ÿ = u is a second-order differential equation. Given the forcing

term u(t), we need two initial conditions, say, position y(0) and velocity ẏ(0), to be able to

solve for y(t). More specifically, for a fixed time t > 0, in order to compute y(t) we would

need y(0) and ẏ(0) and also u(τ ) over the time range from τ = 0 to τ = t. So symbolically

we have

Again, we leave the initial conditions out of the block diagram to get Fig. 2.6. Inside the box


we could put the differential equation or the transfer function, which is 1/(M s2 )—the Laplace

transform of y divided by the Laplace transform of u. If you don’t see this, don’t worry—we’ll

do transfer functions in detail later.
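A quick numerical check (not in the text; M, the force, and the step size are illustrative) confirms the double-integrator behaviour: under a constant force u and zero initial conditions, M ÿ = u gives y(t) = u t²/(2M):

```python
# Simulate M y'' = u with a constant force by semi-implicit Euler and
# compare with the exact answer y(T) = u T^2 / (2M). Values illustrative.
M = 2.0                          # cart mass (kg)
u = 4.0                          # constant applied force (N)
dt, T = 1e-4, 3.0

y = v = t = 0.0                  # zero initial position and velocity
while t < T:
    v += (u / M) * dt            # Newton's second law: v' = u / M
    y += v * dt                  # then integrate velocity to get position
    t += dt

y_exact = u * T**2 / (2 * M)
print(abs(y - y_exact) / y_exact)    # small relative error
```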


4. Summing junctions. Block diagrams also may have summing junctions, as in Figure 2.7. Also,

we may need to allow a block to have more than one input, as in Figure 2.8. This means that

y is a function of u and v, y = f (u, v).

Figure 2.8: A block with two inputs, u and v, and one output y.

Figure 2.9: A can rolls on a board: flat board on a fulcrum, applied torque τ, tilt angle θ, roll distance d.

Figure 2.10: Two possible block diagrams for the can on the board: input τ, outputs θ and d.

5. Multi-output example. Figure 2.9 shows a can rolling on a see-saw. Suppose a torque τ is

applied to the board. Let θ denote the angle of tilt and d the distance of roll. Then both θ

and d are functions of τ . The block diagram could be either of the ones in Figure 2.10.

6. Water tank. Figure 2.11 shows a water tank. The arrow indicates water flowing out. Try to

draw the block diagram; the answer is in the footnote.2


7. Cart and motor drive. This is a harder example. Consider a cart with a motor drive, a DC

motor that produces a torque. See Figure 2.12. The input is the voltage u to the motor, the

output the cart position y. We want the model from u to y. To model the inclusion of a

motor, draw the free body diagram in Figure 2.13. Moving from right to left, we have a force

2 There are no input signals. Assuming the geometry of the tank is fixed and considering there is an initial time, say t = 0, the flow rate out at any time t > 0 depends uniquely on the height of water at time t = 0. Therefore, if we let y(t) denote the flow rate out, the block diagram would be a single box: no input, one output labelled y.


Figure 2.12: Cart with a DC motor drive: input voltage u, cart position y.

Figure 2.13: Free-body diagrams of the motor, wheel, and cart: torque τ, angle θ, force f.

f applied to the cart, so Newton’s second law for the cart gives

M ÿ = f.

For the wheel, an equal and opposite force f appears at the axle and a horizontal force occurs

where the wheel contacts the floor. If the inertia of the wheel is negligible, the two horizontal

forces are equal. Finally, there is a torque τ from the motor. Equating moments about the

axle gives

τ = f r,

where r is the radius of the wheel. Now we turn to the motor. The electric circuit equation is

L (di/dt) + Ri = u − vb ,

where vb is the back emf. The torque produced by the motor satisfies

τm = Ki.

If J denotes the moment of inertia of the rotor and wheel, the rotational form of Newton’s law gives

J θ̈ = τm − τ.

The back emf is proportional to the motor speed:

vb = Kb θ̇.


Figure 2.14: Block diagram of the motor-driven cart, assembled from the numbered equations:

(1) L (di/dt) + Ri = u − vb
(2) τm = Ki
(3) J (dθ̇/dt) = τm − τ
(4) y = rθ
(5) f = M ÿ
(6) τ = rf
(7) vb = Kb θ̇

Figure 2.15: A block whose parameter R can be set externally, indicated by an arrow through the box.

The wheel rolls without slipping, so the cart position is

y = rθ.
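Eliminating f and τ using (5), (6), and the rolling relation ÿ = rθ̈ gives (J + M r²)θ̈ = Ki, which together with (1) and (7) is a state model with states (i, θ̇). The sketch below (pure Python; all parameter values are illustrative assumptions) simulates it for a constant voltage; at steady state the current returns to zero and the back emf balances the applied voltage, so θ̇ → u/Kb:

```python
# Motor-driven cart with states (i, w), where w = theta_dot.
# From (3), (5), (6) and y = r*theta: (J + M r^2) w' = K i.
# All parameter values are illustrative assumptions.
L_, R_ = 0.5, 1.0          # motor inductance (H) and resistance (ohm)
K, Kb = 0.1, 0.1           # torque constant and back-emf constant
J, M, r = 0.01, 1.0, 0.1   # rotor/wheel inertia, cart mass, wheel radius
u = 6.0                    # constant applied voltage

Jt = J + M * r**2          # combined inertia seen at the wheel
dt, T = 1e-4, 20.0
i = w = y = 0.0
t = 0.0
while t < T:
    di = (u - R_ * i - Kb * w) / L_   # equation (1): L i' + R i = u - vb
    dw = K * i / Jt                   # (J + M r^2) w' = tau_m = K i
    i += di * dt
    w += dw * dt
    y += r * w * dt                   # equation (4): y' = r w
    t += dt

print(round(w * Kb / u, 3))   # back emf over input voltage -> 1 at steady state
```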

8. Occasionally we might want to indicate that a parameter is a variable, or that it can be set at different values. For example, suppose that in the RC circuit in Figure 2.3 several values of R are available for us to select. Then the value of R can be set externally (by us), and we can indicate this by an arrow going through the box as in Figure 2.15.

9. Summary. A block diagram is composed of arrows, boxes, and summing junctions. The

arrows represent signals. The boxes represent systems or system components; mathematically

they are functions that map one or more signals to one or more other signals. An exogenous

input to a block diagram, e.g., u in Figure 2.4, is an independent variable. Other signals are

dependent variables.


2.2 State equations

State equations are fundamental to the subject of dynamical systems. In this section we look at a

number of examples. We begin with the notion of linearity.

1. The concept of linearity. To say that a function f : R −→ R is linear means the graph is a

straight line through the origin; there’s only one straight line that is not allowed—the y-axis.

Thus y = ax defines a linear function for any real constant a; the equation defining the y-axis

is x = 0. The function y = 2x + 1 is not linear—its graph is a straight line, but not through

the origin. In your linear algebra course you were taught that a linear function, or linear

transformation, is a function f from a vector space X to another (or the same) vector space

Y having the property

f (a1 x1 + a2 x2 ) = a1 f (x1 ) + a2 f (x2 )

for all vectors x1 , x2 in X and all real numbers3 a1 , a2 . If the vector spaces are X = Rn ,

Y = Rm , and if f is linear, then it has the form f (x) = Ax, where A is an m × n matrix.

Conversely, every function of this form is linear. This is a useful fact, so let us record it as the

next item. We emphasize that Rn denotes the vector space of n-dimensional column vectors,

and so the basis is fixed.

2. Fact. If A is an m × n matrix and f is defined by f (x) = Ax, then f is linear. Conversely, if f is a linear function from Rn to Rm , then there is a unique m × n matrix A such that f (x) = Ax.

Conversely, suppose f is linear. We are going to build the matrix A column by column. Let

e1 denote this vector of dimension n:

e1 = (1, 0, 0, . . . , 0).

Take the first column of A to be f (e1 ). Take the second column to be f (e2 ), where

e2 = (0, 1, 0, . . . , 0).

3 This definition assumes that the vector spaces are over the field of real numbers.


And so on. Then certainly f (x) = Ax for x equal to any of the vectors e1 , e2 , . . . . But a

general x has the form

x = (a1 , a2 , . . . , an ) = a1 e1 + · · · + an en .

By linearity,

f (x) = a1 f (e1 ) + · · · + an f (en ) = [ f (e1 ) f (e2 ) · · · f (en ) ] (a1 , . . . , an ) = Ax.
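The proof's construction can be carried out numerically: evaluate f at the standard basis vectors to get the columns of A, then check f (x) = Ax. A pure-Python sketch (the particular linear map f is an arbitrary illustration, not from the text):

```python
# Build the matrix of a linear map column by column, as in the proof:
# column j of A is f(e_j). The map f below is an arbitrary illustration.
def f(x):
    # a linear map R^3 -> R^2
    return [2 * x[0] - x[1] + 3 * x[2],
            x[0] + 4 * x[2]]

n, m = 3, 2
basis = [[1.0 if i == j else 0.0 for i in range(n)] for j in range(n)]
cols = [f(e) for e in basis]                            # f(e_j) = column j
A = [[cols[j][i] for j in range(n)] for i in range(m)]  # assemble m x n matrix

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

x = [1.0, -2.0, 0.5]
print(f(x), matvec(A, x))   # the two agree: f(x) = Ax
```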

4. Extension. This concept of linear function extends beyond vectors to signals. In this book a

signal is a function of time. For example, consider a capacitor, whose constitutive law is

i = C (dv/dt).

Here, i and v are not constants, or vectors—they are functions of time. If we think of the

current signal i as a function of the voltage signal v, then the function is linear. This is because

C (d(a1 v1 + a2 v2 )/dt) = a1 C (dv1 /dt) + a2 C (dv2 /dt).

On the other hand, if we try to view v as a function of i, then we have a problem, because we

need, in addition, an initial condition v(0) (or at some other initial time) to uniquely define

v, not just i. Let us set v(0) = 0. Then v can be written as the integral of i like this:

v(t) = (1/C) ∫₀ᵗ i(τ) dτ.
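With v(0) = 0 the map from the current signal i to the voltage signal v is linear, and this is easy to verify numerically: integrate two current signals and a linear combination of them and compare. A pure-Python sketch (the capacitance and the signals are illustrative), approximating the integral by a Riemann sum:

```python
import math

# Check superposition for v(t) = (1/C) * integral of i from 0 to t, v(0) = 0.
# The capacitance and the two current signals are illustrative choices.
C = 1e-6
dt, N = 1e-5, 1000     # sample spacing and number of samples

def v_of(i_func):
    # left Riemann sum for (1/C) * integral_0^{N dt} i(tau) dtau
    v = 0.0
    for k in range(N):
        v += i_func(k * dt) * dt / C
    return v

i1 = lambda t: math.sin(200.0 * t)
i2 = lambda t: 3.0 * t
a1, a2 = 2.0, -0.5
combo = lambda t: a1 * i1(t) + a2 * i2(t)

lhs = v_of(combo)                        # v for the combined current
rhs = a1 * v_of(i1) + a2 * v_of(i2)      # combination of the individual v's
print(abs(lhs - rhs))                    # equal up to roundoff
```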

6. Example. Figure 2.16 shows a cart on wheels, driven by a force u and subject to air resistance.

Typically air resistance creates a force depending on the velocity, ẏ; let us say this force is a

possibly nonlinear function D(ẏ). Assuming M is constant, Newton’s second law gives

M ÿ = u − D(ẏ).


Figure 2.16: Cart on wheels: applied force u, drag force D(ẏ), mass M, position y.

Figure 2.17: Block diagram of the cart.

This is a single second-order differential equation. It will be convenient to put it into two

simultaneous first-order equations by defining two so-called state variables, in this example

position and velocity:

x1 := y, x2 := ẏ.

Then

ẋ1 = x2
ẋ2 = (1/M)u − (1/M)D(x2)
y = x1 .

ẋ = f (x, u) (2.1)

y = h(x), (2.2)

where

x = (x1 , x2 ),  f (x, u) = ( x2 , (1/M)u − (1/M)D(x2) ),  h(x) = x1 .

The function f is nonlinear if D is, while h is linear in view of Section 2.2, Paragraph 2 and

h(x) = [ 1 0 ] x.

Equations (2.1) and (2.2) constitute a state equation model of the system.

The block diagram

is shown in Figure 2.17. Here P is a possibly nonlinear system, u (applied force) is the input,

y (cart position) is the output, and

x = (cart position, cart velocity)


Figure 2.18: Two carts, masses M1 and M2, connected by a spring with constant K: inputs u1, u2, positions y1, y2.

is the state of P . (We’ll define state later.) As a special case, suppose the air resistance is a

linear function of velocity:

D(x2 ) = D0 x2 , D0 a constant.

Then f is linear:

0 1 0

f (x, u) = Ax + Bu, A := , B := .

0 −D0 /M 1/M

Defining C = [ 1 0 ], we get the state model

ẋ = Ax + Bu, y = Cx.

This model is linear; it is also time-invariant because the matrices A, B, C do not vary with

time. Thus the model is linear, time-invariant (LTI).
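As a check on this LTI model (a sketch with illustrative parameter values, not from the text), one can build A, B, C and integrate ẋ = Ax + Bu by forward Euler; with a constant force u the drag eventually balances the input and the velocity x2 settles at u/D0:

```python
# LTI cart model with linear drag: x = (position, velocity).
# Parameter values are illustrative assumptions.
M, D0 = 2.0, 0.5
A = [[0.0, 1.0],
     [0.0, -D0 / M]]
B = [0.0, 1.0 / M]
Cmat = [1.0, 0.0]               # y = x1 (position)

u = 1.0                         # constant applied force
dt, T = 1e-3, 60.0
x = [0.0, 0.0]
t = 0.0
while t < T:
    dx = [A[i][0] * x[0] + A[i][1] * x[1] + B[i] * u for i in range(2)]
    x = [x[0] + dx[0] * dt, x[1] + dx[1] * dt]
    t += dt

y = Cmat[0] * x[0] + Cmat[1] * x[1]     # output: cart position
print(round(x[1], 3))                   # velocity settles near u / D0
```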

Vectors are written as n-tuples, i.e., ordered lists. For example, the column vector with components x1 and x2 is written

x = (x1 , x2 ).

The general nonlinear model is ẋ = f (x, u), y = h(x, u), where u, x, y are vectors that are functions of time, that is, u(t), x(t), y(t). This model is

nonlinear if f and/or h is nonlinear, but it is time-invariant because neither f nor h depends

directly on time. Denote the dimensions of u, x, y by, respectively, m, n, p.

CHAPTER 2. MATHEMATICAL MODELS OF PHYSICAL SYSTEMS 19

For the two carts in Figure 2.18, taking the state to be x = (y1 , ẏ1 , y2 , ẏ2 ), the input u = (u1 , u2 ), and the output (y1 , y2 ), you should get a state equation of the form

ẋ = Ax + Bu

y = Cx + Du,

where A is 4 × 4, B is 4 × 2, C is 2 × 4, and D is 2 × 2.

10. Example. A time-varying example is the cart where the mass is decreasing with time because

fuel is being used up (or because the cart is on fire). Newton’s second law in this case is that

force equals the rate of change of momentum, which is mass times velocity. So we have

(d/dt)(M v) = u, ẏ = v.

Expanding the first derivative gives

Ṁ v + M v̇ = u, ẏ = v.

ẋ1 = x2
ẋ2 = −(Ṁ/M)x2 + (1/M)u
y = x1 .

In vector form:

ẋ = A(t)x + B(t)u, y = Cx

A(t) = [ 0 1 ; 0 −Ṁ(t)/M(t) ],  B(t) = [ 0 ; 1/M(t) ],  C = [ 1 0 ].

The matrices A and B are functions of t because M is a function of t. This model is linear

time-varying.
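Simulating a linear time-varying model is no harder: the coefficients are simply re-evaluated at each time step. A pure-Python sketch with an illustrative mass profile M(t) = M0 − ct (kept positive over the run); since d(Mv)/dt = u and v(0) = 0, the exact velocity is v(t) = ut/M(t), which the simulation should reproduce:

```python
# Linear time-varying cart: mass decreases as fuel burns.
# M(t) = M0 - c t is an illustrative profile, positive throughout the run.
M0, c = 10.0, 0.05
def Mass(t): return M0 - c * t
def Mdot(t): return -c

u = 1.0                        # constant applied force
dt, T = 1e-3, 50.0
x1 = x2 = 0.0                  # position and velocity
t = 0.0
while t < T:
    dx2 = -(Mdot(t) / Mass(t)) * x2 + u / Mass(t)   # time-varying coefficients
    x1 += x2 * dt
    x2 += dx2 * dt
    t += dt

# Exact check: d(M v)/dt = u and v(0) = 0 give M(t) v(t) = u t.
v_exact = u * T / Mass(T)
print(round(x2, 3), round(v_exact, 3))
```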

11. Terminology. Explanation of the meaning of the state of a system: The state x at time t

should encapsulate all the system dynamics up to time t; that is, no additional prior information is required. More precisely, the defining property for x to be a state is this: For any t0 and t1 , with

t0 < t1 , knowing x(t0 ) and knowing the input {u(t) : t0 ≤ t ≤ t1 }, we can compute x(t1 ), and

hence the output y(t1 ).

12. Passive circuit. The customary state variables are inductor currents and capacitor voltages.

For a mechanical system the customary state variables are positions and velocities of all

masses. The reason for this choice is illustrated as follows. Consider Figure 2.19, a cart with

no external applied force. The differential equation model is M ÿ = 0, or equivalently, ÿ = 0.

Figure 2.19: A cart with no applied force and no air resistance.

Figure 2.20: Mass-spring-damper system: mass M, spring constant K, damping constant D0, position y.

Figure 2.21: Free-body diagram: spring force K(y − y0), damping force D0 ẏ, gravity M g, applied force u.

To solve the equation we need two initial conditions, namely, y(0) and ẏ(0). So the state x

could not simply be the position y, nor could it simply be the velocity ẏ. To determine the

position at a future time, we need both the position and velocity at a prior time. Since the

equation of motion, ÿ = 0, is second order, we need two initial conditions, implying we need

a 2-dimensional state vector.

13. Another mechanical example. Figure 2.20 shows a mass-spring-damper system. The rest

length of the spring is y0 . Figure 2.21 shows the free-body diagram. The dynamic equation is

M ÿ = u + M g − K(y − y0 ) − D0 ẏ.


Take the state x = (x1, x2), x1 = y, x2 = ẏ. Then

ẋ1 = x2
ẋ2 = (1/M) u + g − (K/M) x1 + (K/M) y0 − (D0/M) x2
y = x1.

In vector form these equations are

ẋ = Ax + Bu + c, y = Cx, (2.4)

where

A = [ 0 1 ; −K/M −D0/M ], B = [ 0 ; 1/M ], C = [ 1 0 ], c = [ 0 ; g + (K/M) y0 ].

The constant vector c is known, and hence is taken as part of the system rather than as a signal.

14. A favourite toy control problem. The problem is to get a cart automatically to balance a

pendulum, as shown in Figure 2.22. The cart can move in a straight line on a horizontal

table. The position of the cart is x1 referenced to a stationary point. The pendulum, modeled

as a point mass on the end of a rigid rod, is attached to a small rotary joint on the cart, so

that the pendulum can fall either way but only in the direction that the cart can move. There


is a drive mechanism that produces a force u on the cart. The figure shows the pendulum

falling forward. Obviously, the cart has to speed up to keep the pendulum balanced, and the

control problem is to design something that will produce a suitable force. That “something”

is a controller, and how to design it is one of the topics in this book. The natural state is x = (x1, θ, ẋ1, θ̇).

We bring in a free-body diagram, Figure 2.23, for the pendulum. The position of the ball is

shown in a rectangular coordinate system with two axes: one is horizontally to the right; the

other is vertically down. The axes intersect at an origin defined by the conditions x1 = 0 and

θ = 0. Newton’s law for the ball in the horizontal direction is

M2 (d²/dt²)(x1 + L sin θ) = F1 sin θ

and in the vertical direction (down) is

M2 (d²/dt²)(L − L cos θ) = M2 g − F1 cos θ.

The horizontal forces on the cart are u and −F1 sin θ. Thus

M1 ẍ1 = u − F1 sin θ.

These are three equations in the four signals x1, θ, u, F1. We have to eliminate F1. Use the identities

(d²/dt²) sin θ = θ̈ cos θ − θ̇² sin θ, (d²/dt²) cos θ = −θ̈ sin θ − θ̇² cos θ

to get

[ M1 + M2   M2 L cos θ ; cos θ   L ] [ ẍ1 ; θ̈ ] = [ u + M2 L θ̇² sin θ ; g sin θ ]. (2.7)

Solve:

ẍ1 = (u + M2 L θ̇² sin θ − M2 g sin θ cos θ) / (M1 + M2 sin² θ)
θ̈ = (−u cos θ − M2 L θ̇² sin θ cos θ + (M1 + M2) g sin θ) / (L(M1 + M2 sin² θ)).

Finally, in terms of the state variables we have

ẋ1 = x3
ẋ2 = x4
ẋ3 = (u + M2 L x4² sin x2 − M2 g sin x2 cos x2) / (M1 + M2 sin² x2)
ẋ4 = (−u cos x2 − M2 L x4² sin x2 cos x2 + (M1 + M2) g sin x2) / (L(M1 + M2 sin² x2)).

Again, these have the form ẋ = f(x, u). The output is

y = (x1, θ) = (x1, x2) = h(x).

The system is highly nonlinear; as you would expect, it can be approximated by a linear

system for |θ| small enough, say less than 5◦ .
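The right-hand side f is easy to code directly; the parameter values below are made up for illustration. A quick check: both the state with x2 = 0 and the state with x2 = π, with u = 0 and zero velocities, give ẋ = 0, i.e., both are equilibria.

```python
import numpy as np

M1, M2, L, g = 1.0, 0.2, 0.5, 9.8  # illustrative values

def f(x, u):
    # x = (x1, x2, x3, x4) = (cart position, angle, cart velocity, angular velocity)
    x1, x2, x3, x4 = x
    den = M1 + M2 * np.sin(x2) ** 2
    x3dot = (u + M2 * L * x4 ** 2 * np.sin(x2)
             - M2 * g * np.sin(x2) * np.cos(x2)) / den
    x4dot = (-u * np.cos(x2) - M2 * L * x4 ** 2 * np.sin(x2) * np.cos(x2)
             + (M1 + M2) * g * np.sin(x2)) / (L * den)
    return np.array([x3, x4, x3dot, x4dot])
```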

15. Water tank. Figure 2.24 shows water flowing into a tank in an uncontrolled way, and water

flowing out at a rate controlled by a valve: The signals are x, the height of the water, u, the

area of opening of the valve, and d, the flowrate in. Let A denote the cross-sectional area of

the tank, assumed constant. Then conservation of mass gives

A ẋ = d − (flow rate out).

Also

(flow rate out) = (const) × √(∆p) × (area of valve opening),

Figure 2.24: Water tank.

Figure 2.25: Series RLC circuit driven by a voltage source u.

where ∆p denotes the pressure drop across the valve, this being proportional to x. Thus

(flow rate out) = c √x u

and hence

A ẋ = d − c √x u.

Equivalently,

ẋ = f(x, u, d) = (1/A) d − (c/A) √x u.
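A direct encoding of this model (the tank area and valve constant below are illustrative values, not from the text): at an equilibrium the inflow d0 exactly matches the outflow c√x0 u0, so f vanishes there.

```python
import numpy as np

A_tank, c = 2.0, 0.3  # illustrative cross-sectional area and valve constant

def f(x, u, d):
    # x: water height, u: valve opening area, d: inflow rate
    return (d - c * np.sqrt(x) * u) / A_tank

# Equilibrium: pick a height and valve opening, then match the inflow.
x0, u0 = 1.0, 0.5
d0 = c * np.sqrt(x0) * u0
```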

16. Exclusions. Not all systems have state models of the form ẋ = f(x, u), y = h(x, u). One example is the differentiator: y = u̇. A second is a time delay: y(t) = u(t − 1). Finally, there are PDE models, e.g., the vibrating violin string with input the bow force.

17. Another electric circuit example. Consider the RLC circuit in Figure 2.25. There are two

energy storage elements, the inductor and the capacitor. It is natural to take the state variables

to be the voltage drop across C and the current through L; see Figure 2.26. Then KVL gives

−u + Rx2 + x1 + Lẋ2 = 0

and the capacitor equation gives

x2 = C ẋ1.

Thus

ẋ = Ax + Bu, A = [ 0 1/C ; −1/L −R/L ], B = [ 0 ; 1/L ].

18. Higher order differential equations. Finally, consider a system with input u(t) and output y(t)

and differential equation

dⁿy/dtⁿ + a_{n−1} d^{n−1}y/dt^{n−1} + · · · + a0 y = b0 u.

The coefficients ai, b0 are real numbers. We can put this in the state equation form as follows. Define the state

x = (x1, x2, . . . , xn) = (y, ẏ, d²y/dt², . . . , d^{n−1}y/dt^{n−1}).

Then

ẋ1 = x2
ẋ2 = x3
...
ẋ_{n−1} = xn
ẋn = −a_{n−1} xn − · · · − a0 x1 + b0 u.

The state model for these equations is

ẋ = Ax + Bu, y = Cx,

where

A = [ 0 1 0 · · · 0 0 ; 0 0 1 · · · 0 0 ; ... ; 0 0 0 · · · 0 1 ; −a0 −a1 −a2 · · · −a_{n−2} −a_{n−1} ],
B = [ 0 ; 0 ; ... ; 0 ; b0 ],
C = [ 1 0 0 · · · 0 ].
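The companion-form matrices are mechanical to build. A sketch, assuming the coefficient ordering a = (a0, . . . , a_{n−1}) used above:

```python
import numpy as np

def companion_model(a, b0):
    """A, B, C for  y^(n) + a[n-1] y^(n-1) + ... + a[0] y = b0 u."""
    n = len(a)
    A = np.zeros((n, n))
    A[:n - 1, 1:] = np.eye(n - 1)   # ones on the superdiagonal
    A[n - 1, :] = -np.asarray(a)    # last row: -a0, -a1, ..., -a_{n-1}
    B = np.zeros(n)
    B[n - 1] = b0
    C = np.zeros(n)
    C[0] = 1.0
    return A, B, C

# Example: y'' + 3 y' + 2 y = u has characteristic roots -1 and -2,
# and they reappear as the eigenvalues of A.
A, B, C = companion_model([2.0, 3.0], 1.0)
```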

The case

dⁿy/dtⁿ + a_{n−1} d^{n−1}y/dt^{n−1} + · · · + a0 y = b_m dᵐu/dtᵐ + b_{m−1} d^{m−1}u/dt^{m−1} + · · · + b0 u

is somewhat trickier. This has a state model if and only if n ≥ m. We shall return to this

in the next chapter. Briefly, one gets the transfer function from u to y and then gets a state

model from the transfer function.

19. Summary of state equations. Many systems can be modeled in the form

ẋ = f(x, u), y = h(x, u),

where u, x, y are vectors: u is the input, x the state, and y the output. This model is nonlinear if either f or h is not linear. However, the model is time-invariant because neither f nor h has t as an argument. The linear time-invariant (LTI) case is

ẋ = Ax + Bu, y = Cx + Du.

2.3 Sensors and actuators

In this section we briefly discuss sensors and actuators.

1. Sensors. Devices that measure physical signals are called sensors. A common problem for motion control of mobile robots is to sense the forward

velocity of the robot. This can be done by an optical rotary sensor. If placed on a robot’s

wheel, this gives a fixed number of voltage pulses for each revolution of the wheel, so by

counting the number of pulses per second, you have a measurement of speed. If there are two

wheels on the ends of an axle and each wheel has a rotary sensor, the two wheel turning rates

can be used to determine the heading angle and forward speed.

Suppose the speed of a cart is measured in this way. With v denoting the velocity of the cart,

the output of the sensor measuring the speed would usually be denoted v̂. See Figure 2.27.

For a variety of reasons, all measurements have errors. Notice that physically v̂ may be a

voltage. Although a sensor is a dynamical system itself and therefore could be modelled, it is

common to model it simply as a device that adds noise; see Figure 2.28. The noise can never

be known exactly, so control engineers assume some generic noise signal using common sense

and past experience. For example, one might take random white noise with a certain mean

and a certain variance.

Figure 2.27: Sensor measuring the cart velocity: the cart output v enters a sensor whose output is v̂.

Figure 2.28: Speed measurement modelled via additive noise.


Figure 2.29: Graph of the nonlinear function y = f (x) and its linearization at the point x0 .

2. Actuators. A sensor is typically at the output of the plant. At the input may be an actuator.

Example: Paragraph 7 in Section 2.1 studies a cart with a motor drive. The force on the cart

has to be produced by something, a motor in this case, and this motor is an example of an

actuator. The actuator may be modelled or not. For example, in the motor-cart system,

if the time-constant of the motor is much smaller than the fastest time-constant of the cart,

then the dynamics of the motor can be neglected.

2.4 Linearization

Recap: Many systems can be modeled by nonlinear state equations of the form

ẋ = f(x, u), y = h(x, u),

where u, x, y are vectors. There might be disturbance inputs present, but for now we suppose they are lumped into u. There are techniques for controlling nonlinear systems, but that is an advanced subject. Fortunately, many systems can be linearized about an equilibrium point. In this section we see how to do this.

1. Example. Linearizing just a function, not a dynamical system model. Let us linearize the

function y = f (x) = x3 about the point x0 = 1. Figure 2.29 shows how to do it. At x = x0 ,

the value of y is y0 = f (x0 ) = 1. If x varies in a small neighbourhood of x0 , then y varies

in a small neighbourhood of y0 . The graph of f near the point (x0 , y0 ) can be approximated

by the tangent to the curve, as shown in the left-hand figure. The slope of the tangent is the

CHAPTER 2. MATHEMATICAL MODELS OF PHYSICAL SYSTEMS 28

∆y

≈ f 0 (x0 ) = 3.

∆x

For the linearized function, we merely replace the approximation symbol by an equality:

∆y = 3∆x.

Notice that

∆y = y − y0 , ∆x = x − x0 .

So the linearized function approximates the nonlinear one in the neighbourhood of the point

where the derivative is evaluated. Obviously, this approximation gets better and better as

|∆x| gets smaller and smaller.
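Numerically, for y = x³ at x0 = 1:

```python
def f(x):
    return x ** 3

x0 = 1.0
dx = 1e-3
dy = f(x0 + dx) - f(x0)
# The linearization predicts dy = 3 * dx; the error shrinks as |dx| shrinks.
err = abs(dy - 3 * dx)
```

For comparison, the same check with the larger step dx = 0.1 gives a much larger error, which is the point of the approximation statement above.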

2. Extension. The method extends to a function y = f(x), where x and y are vectors. Then the derivative is the Jacobian matrix, which we shall denote by f′(x0). The linearization is

∆y = A∆x,

where A equals the Jacobian of f at the point x0. The element in the ith row and jth column is the partial derivative ∂fi/∂xj, where fi is the ith element of f (i.e., yi). For example, if f(x) = (x1x2, x3² − 2x1x3) and x0 = (1, −1, 2), then

f′(x0) = [ x2 x1 0 ; −2x3 0 2x3 − 2x1 ] evaluated at x0 = [ −1 1 0 ; −4 0 2 ]

and so

A = f′(x0) = [ −1 1 0 ; −4 0 2 ].
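The Jacobian entries shown can be checked by finite differences. The function f(x) = (x1x2, x3² − 2x1x3) and the point x0 = (1, −1, 2) below are assumptions consistent with those entries.

```python
import numpy as np

def f(x):
    x1, x2, x3 = x
    return np.array([x1 * x2, x3 ** 2 - 2 * x1 * x3])

def num_jacobian(f, x0, h=1e-6):
    # forward-difference approximation, one column per input variable
    x0 = np.asarray(x0, dtype=float)
    f0 = f(x0)
    cols = []
    for j in range(x0.size):
        e = np.zeros_like(x0)
        e[j] = h
        cols.append((f(x0 + e) - f0) / h)
    return np.column_stack(cols)

J = num_jacobian(f, [1.0, -1.0, 2.0])
```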

3. Summary. Let f be a function of a vector x in Rⁿ. Assume f is continuously differentiable at the point x0. Then the linearization of the equation y = f(x) at the point x0 is the equation ∆y = A∆x, where A = f′(x0), the Jacobian of f at x0. The variables are related by

∆x = x − x0, ∆y = y − f(x0).

4. Block diagrams. We will find it useful to relate the block diagram for the equation y = f(x) and the block

diagram for the equation ∆y = A∆x. These are shown in Figure 2.30. Because x0 and f (x0 )

are known, we should regard the block diagram on the right as having just the input x and

just the output y.

Figure 2.30: Block diagrams of y = f(x) (left) and of its linearization ∆y = A∆x, with x0 and f(x0) added at summing junctions (right).

5. Extension. Now we consider the case y = f(x, u), where u, x, y are all vectors, of dimensions m, n, p, respectively. We want to linearize at the point (x0, u0). We can combine x and u into one vector v = (x, u) of dimension n + m. Then we have the situation in the preceding example, y = f(v). The Jacobian of f is a p × (n + m) matrix. Define

v0 = (x0, u0), [ A B ] = f′(v0).

In this way we get the linearization

∆y = f′(v0)∆v = [ A B ] [ ∆x ; ∆u ] = A∆x + B∆u.

6. Linearizing a dynamical system. Now consider the state equation

ẋ = f(x, u).

First, assume there is an equilibrium point, that is, a constant solution x(t) ≡ x0, u(t) ≡ u0. This is equivalent to saying that 0 = f(x0, u0). Now consider a nearby solution:

x(t) = x0 + ∆x(t), u(t) = u0 + ∆u(t).

We have

ẋ(t) = f[x(t), u(t)] = f(x0, u0) + A∆x(t) + B∆u(t) + higher order terms,

where

[ A B ] = f′(x0, u0).

Since ẋ = ∆ẋ and f(x0, u0) = 0, we have the linearized equation to be

∆ẋ = A∆x + B∆u.

Similarly, the output equation y = h(x, u) linearizes to

∆y = C∆x + D∆u,

where

[ C D ] = h′(x0, u0).

7. Summary. To linearize the system ẋ = f(x, u), y = h(x, u), select, if one exists, an equilibrium point, that is, constant vectors x0, u0 such that f(x0, u0) = 0. If the functions f and h are continuously differentiable at this equilibrium, compute the Jacobians [ A B ] and [ C D ] of f and h at the equilibrium point. Then the linearized system is

∆ẋ = A∆x + B∆u, ∆y = C∆x + D∆u.

This linearized system is a valid approximation of the nonlinear one in a sufficiently small neighbourhood of the equilibrium point. How small a neighbourhood? There is no simple answer.

8. Example: the cart-pendulum. Return to the cart-pendulum. The equilibrium equations f(x0, u0) = 0 are

x30 = 0
x40 = 0
u0 + M2 L x40² sin x20 − M2 g sin x20 cos x20 = 0
−u0 cos x20 − M2 L x40² sin x20 cos x20 + (M1 + M2) g sin x20 = 0.

Multiply the third equation by cos x20 and add to the fourth:

g sin x20 (M1 + M2 sin² x20) = 0.

The right-hand factor is positive; it follows that sin x20 = 0 and therefore x20 equals 0 or π, that is, the pendulum is straight up or straight down. Thus the equilibrium points are described by

x0 = (arbitrary, 0 or π, 0, 0), u0 = 0.

We have to choose x20 = 0 (pendulum up) or x20 = π (pendulum down). Let us take x20 = 0.

Then the Jacobian computes to

A = [ 0 0 1 0 ; 0 0 0 1 ; 0 −M2 g/M1 0 0 ; 0 (M1 + M2) g/(M1 L) 0 0 ],
B = [ 0 ; 0 ; 1/M1 ; −1/(M1 L) ].

We just applied the general method of linearizing. For this example, there’s actually a faster way, which is to approximate sin θ ≈ θ, cos θ ≈ 1 in the original equations.
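With illustrative parameter values, the linearized A for the pendulum-up equilibrium has a positive real eigenvalue, confirming that balancing is an unstable problem: the nonzero eigenvalues are ±√((M1 + M2)g/(M1 L)), with a double eigenvalue at 0 coming from the cart position.

```python
import numpy as np

M1, M2, L, g = 1.0, 0.2, 0.5, 9.8  # illustrative values

A = np.array([
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, -M2 * g / M1, 0.0, 0.0],
    [0.0, (M1 + M2) * g / (M1 * L), 0.0, 0.0],
])
B = np.array([0.0, 0.0, 1.0 / M1, -1.0 / (M1 * L)])

eigs = np.linalg.eigvals(A)
lam = np.sqrt((M1 + M2) * g / (M1 * L))  # predicted unstable eigenvalue
```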

2.5 Interconnections of linear subsystems

Frequently, a system is made up of components connected together in some arrangement. This raises the question, if we have state models for components, how can we assemble them into a state model for the overall system?

1. Review. This section involves some matrix algebra. Let us summarize what you need to know.

If we have two vectors, x1 and x2 , of dimensions n1 and n2 , we can stack them as a vector of

dimension n1 + n2 . Of course, we can stack them either way:

x1 x2

or .

x2 x1

ẋ1 = A1 x1 + B1 u1

ẋ2 = A2 x2 + B2 u2

ẋ1 A1 0 x1 B1 0 u1

= + .

ẋ2 0 A2 x2 0 B2 u2

Be careful: If you stack x2 above x1, the matrices will be different. In the matrix

[ A1 0 ; 0 A2 ]

the two zeros are themselves matrices of all zeros. Usually, we don’t care how many rows or columns they have, but actually their sizes can be deduced. For example, if x1 has dimension n1 and x2 has dimension n2, then the sizes of zero blocks must be as shown here:

[ A1 0_{n1×n2} ; 0_{n2×n1} A2 ].

This is because the upper-right zero must have the same number of rows as A1 and the same number of columns as A2; likewise for the lower-left zero.

Finally, multiplication of a block vector by a block matrix, assuming the dimensions are correct, works just as if the blocks were 1 × 1. For example,

[ A1 0 ; 0 A2 ][ x1 ; x2 ] = [ A1 x1 ; A2 x2 ].

Figure 2.31: A block labelled [ A B ; C D ] with input u and output y.

Figure 2.32: Series connection of blocks [ A1 B1 ; C1 D1 ] and [ A2 B2 ; C2 D2 ], with intermediate signal y1.

2. Notation. There is a very handy way to embed a state model into a block diagram. Suppose we have the state model

ẋ = Ax + Bu (2.8)
y = Cx + Du. (2.9)

The input is u, the output is y, and x is the state. A block diagram for this component has u on the input arrow and y on the output arrow. The system in the box is modeled by the state equations. It is convenient to encode these equations into the block diagram as in Figure 2.31. The symbol

[ A B ; C D ]

in the block is just an abbreviation for equations (2.8), (2.9).

3. Example, series connection. Figure 2.32 shows a series connection of two subsystems. We

want to get a state model from u to y. Write the state equations for the two blocks:

ẋ1 = A1 x1 + B1 u
ẋ2 = A2 x2 + B2 y1.

Stack the states:

x = [ x1 ; x2 ].

Then

[ ẋ1 ; ẋ2 ] = [ A1 0 ; 0 A2 ][ x1 ; x2 ] + [ B1 ; 0 ] u + [ 0 ; B2 ] y1.

The intermediate signal y1, being the output of the left-hand block, equals C1 x1 + D1 u, or, in terms of the combined state and u,

y1 = [ C1 0 ][ x1 ; x2 ] + D1 u. (2.10)

Substituting (2.10) for y1 gives

[ ẋ1 ; ẋ2 ] = [ A1 0 ; B2 C1 A2 ][ x1 ; x2 ] + [ B1 ; B2 D1 ] u.

As for the output,

y = C2 x2 + D2 y1 = [ 0 C2 ][ x1 ; x2 ] + D2 y1 = [ D2 C1 C2 ][ x1 ; x2 ] + D2 D1 u.

Thus the series connection has the state model

ẋ = Ax + Bu, y = Cx + Du,

where

A = [ A1 0 ; B2 C1 A2 ], B = [ B1 ; B2 D1 ]
C = [ D2 C1 C2 ], D = D2 D1.

We conclude that the block diagram Figure 2.32 can be simplified to Figure 2.33, a single block labelled with these A, B, C, D.
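The series formulas translate directly into code. A sketch, with systems given as (A, B, C, D) tuples of 2-D arrays; the test case cascades two integrators, giving a double integrator.

```python
import numpy as np

def series(sys1, sys2):
    """State model of Figure 2.32: u -> sys1 -> y1 -> sys2 -> y."""
    A1, B1, C1, D1 = sys1
    A2, B2, C2, D2 = sys2
    n1, n2 = A1.shape[0], A2.shape[0]
    A = np.block([[A1, np.zeros((n1, n2))],
                  [B2 @ C1, A2]])
    B = np.vstack([B1, B2 @ D1])
    C = np.hstack([D2 @ C1, C2])
    D = D2 @ D1
    return A, B, C, D

# An integrator: xdot = u, y = x.
integ = (np.zeros((1, 1)), np.ones((1, 1)), np.ones((1, 1)), np.zeros((1, 1)))
A, B, C, D = series(integ, integ)
```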


4. Example, feedback connection. Figure 2.34 shows a feedback arrangement. Feedback is intro-

duced later in the book, but here we merely want to do some manipulations with state models

and block diagrams. Specifically, we want to derive a state model from r to y. The derivation

is simpler under the assumption D2 = 0, so we make that assumption. The block diagram

has two blocks. Write the state equations for the two blocks:

ẋ1 = A1 x1 + B1 e

ẋ2 = A2 x2 + B2 u.

Combine:

[ ẋ1 ; ẋ2 ] = [ A1 0 ; 0 A2 ][ x1 ; x2 ] + [ B1 0 ; 0 B2 ][ e ; u ]. (2.11)

The signals e and u satisfy

e = r − C2 x2
u = C1 x1 + D1 e = C1 x1 + D1 r − D1 C2 x2.

Combine:

[ e ; u ] = [ 0 −C2 ; C1 −D1 C2 ][ x1 ; x2 ] + [ I ; D1 ] r.

Substituting into (2.11),

[ ẋ1 ; ẋ2 ] = [ A1 −B1 C2 ; B2 C1 A2 − B2 D1 C2 ][ x1 ; x2 ] + [ B1 ; B2 D1 ] r.

The output is

y = C2 x2 = [ 0 C2 ][ x1 ; x2 ].

Thus the closed-loop system has the state model

ẋ = Ax + Br, y = Cx,

where

A = [ A1 −B1 C2 ; B2 C1 A2 − B2 D1 C2 ], B = [ B1 ; B2 D1 ], C = [ 0 C2 ].

We conclude that the block diagram Figure 2.34 can be simplified to Figure 2.35.
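Likewise for the feedback formulas, assuming D2 = 0 as in the derivation. The test closes the loop around two illustrative first-order blocks and checks that the resulting closed-loop A matrix is stable.

```python
import numpy as np

def feedback(sys1, sys2):
    """Closed loop of Figure 2.34: e = r - y, u from sys1, y from sys2 (D2 = 0)."""
    A1, B1, C1, D1 = sys1
    A2, B2, C2 = sys2
    n1 = A1.shape[0]
    A = np.block([[A1, -B1 @ C2],
                  [B2 @ C1, A2 - B2 @ D1 @ C2]])
    B = np.vstack([B1, B2 @ D1])
    C = np.hstack([np.zeros((C2.shape[0], n1)), C2])
    return A, B, C

# sys1: xdot = -x + e, u = x;  sys2: an integrator, y = x2.
sys1 = (np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]]), np.array([[0.0]]))
sys2 = (np.array([[0.0]]), np.array([[1.0]]), np.array([[1.0]]))
A, B, C = feedback(sys1, sys2)
```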


2.6 “Nothing is as abstract as reality”

The title of this section is a quote from the artist Giorgio Morandi.

Mathematical models of physical things are approximations of reality. Consider, for example,

a real swinging pendulum and a mathematical model of it. The mathematical model involves a

parameter L, the length, whereas the real pendulum does not have a real length at the subatomic

scale. The length L is an attribute of an idealized pendulum. Going past this issue of what L

is, the mathematical model assumes perfect rigidity of the pendulum. From what we know about

reality, nothing is perfectly rigid; that is, rigidity is a concept, an approximation to reality. So if we

wanted to make our model “closer to reality,” we could allow some elasticity by adopting a partial

differential equation model and we may thereby have a better approximation. But no model is real.

There could not be a sequence M0 , M1 , . . . of models that are better and better approximations of

reality and such that Mk converges to reality. If Mk does indeed converge, the limit is a model, and

no model is real.

The only sensible question is, what do we mean by a “good model,” or, if we have two models,

how can we say which is better? We can test our model against the real thing. That is, we can

do several tests on the real thing, perform the same test on the model, and compare the resulting

measured data with the simulated data. If the two sets of data are close, and if the measuring

instruments are reasonably accurate, then we can say that the model is quite good.

For more along these lines, see the article “What’s bad about this habit,” N. D. Mermin, Physics

Today, May 2009, pages 8, 9.

2.7 Problems

1. Consider an electric circuit consisting of, in series, a voltage source supplying u(t) volts, a

resistor, an inductor, and a battery of 10 V. Take the state to be the current. Find the state

equation ẋ = f (x, u). Find all equilibria and linearize about one of them. Hint: The circuit

would be linear were it not for the battery.

2. Let x and y be vectors and A a matrix. Consider the block diagram in Figure 2.36. According to the diagram, the matrix A must partition into two blocks: A = [ A1 A2 ]. Then y = A1 x + A2 y. When does this equation define a function from x to y?

Solution The equation y = A1 x + A2 y defines a function from x to y if and only if for every x there exists a unique y satisfying (I − A2)y = A1 x. Such a y exists and is unique for every x if and only if the matrix I − A2 is invertible. In this case, y = (I − A2)⁻¹ A1 x. This equation defines a


function from x to y. Thus the answer is, Figure 2.36 defines a function if and only if I − A2

is invertible (equivalently, 1 is not an eigenvalue of A2 ).

3. Imagine you are riding an e-bike at constant speed along a street. You are steering the e-bike

to control the direction you are going. Draw the block diagram.

Solution The e-bike, with you sitting on it, is a system component with input u, your force

turning the handlebars, and output y, the direction of motion of the e-bike. The force u comes

from your arm muscles, with input v an electrical voltage generated by your brain. The input

to your brain is the error as seen by your eyes, the error between the desired direction r you

want to go and the actual direction. The resulting block diagram is shown in Figure 2.37.

4. Modeling a bicycle is harder than you might think. Imagine a rider on a bike and the bike on

the road. Take the overall output to be the bicycle position (in an (x, y)-plane). What’s the

overall input? The rider can apply forces to the pedals, so they are inputs; so is the torque applied to the handlebars; and so is a leaning torque applied by the rider’s muscles. Try

getting a block diagram where the bicycle is one block and the person another, and there may

be more.

5. There are two cars and a road. One car is to be driven by a person along the road continuously

in one direction. The second car is required to follow the first at a fixed distance, but without a

human driver. Suppose the second car has a camera that can see the first car, some mechanism

to steer and speed up and slow down, and a computer with a program in it. The program

computes a rule to steer and to speed up or slow down accordingly. The system may work

well or not—we’re not interested in that aspect. Draw a block diagram of this system.

6. Suppose you have a car with a GPS navigation system. The system has a screen showing a

map and there’s an arrow on the map showing where your car is. As the car moves, the map

evolves so that the arrow stays in the middle of the screen. Draw a block diagram of this

setup.

7. Consider a force-feedback joystick connected to a laptop, with a person applying a force to

the joystick. Suppose the laptop is connected to another laptop through the Internet. This


second laptop is in a loop controlling a cart. Finally, the cart may bump into an obstacle. This

is a telerobotic architecture: The first laptop and the joystick are the master manipulator,

the second laptop and the cart are the slave. Let’s say the system should have the following

capabilities: When the person applies a force to the joystick, the remote cart should move

appropriately; when the cart hits the obstacle, the force should be reflected back to the person.

Let us first model the joystick. It is a DC motor and the relevant variables are the voltage

to the field windings, the torque that is generated by the magnetic field and applied to the

shaft, the torque applied to the shaft by the person, and the shaft angle. Continue modeling

in this way and get a block diagram.

8. Suppose y = f(x) and x = g(v); then the block diagram is as shown in Figure 2.38. Here v is the overall input and y the overall

output of the system composed of f and g combined in the order shown. So far we haven’t

said what v, x, y represent. For example, they could be real numbers, in which case f and g

are functions R −→ R. Give examples of nonlinear f, g and find the system from v to y.

Solution Let f be defined by y = x2 + 1 and g be defined by x = sin(v). Then

y = x2 + 1 = sin2 v + 1.

9. Figure 2.39 is a more interesting system. Obviously this represents the equation y = f (x, y).

That is, f is a function of two variables, and we’ve attempted to define a new function by

the equation y = f (x, y). For the block diagram to be well-defined, that is, to represent a

function, it must be true that for every x there exists a unique y such that y = f (x, y). Give

an example of f where the block diagram defines a function and one where it does not.

Solution Let f (x, y) = 2x − 3y. Then we can solve y = f (x, y) to get y = x/2. On the other

hand, if we take f to be f (x, y) = x + y, then the equation y = f (x, y) becomes y = x + y,

which is not solvable for y.

10. Why does the block diagram in Figure 2.40 not define a function?

Solution There is no input, i.e., independent variable.


11. A baseball is thrown and we want to model the ball’s motion while in flight. Drawing a block

diagram means determining what the variables are and how they depend on each other. Let

us neglect the ball’s rotation and think of it as a point mass. One variable is therefore the

position of the ball, with respect to, say, a coordinate system fixed to the earth. Let us denote

position at time t by p(t), a three-dimensional vector. What does p(t) depend on? The force

of gravity, but that is fixed, not a variable, and therefore not an input. The position of the

ball depends obviously on the force with which it was thrown, where it was thrown from, and

what direction it was thrown. Equivalently, p(t) depends on the initial position and velocity,

p(0), ṗ(0), and the time t. Thus the block diagram is Figure 2.41. The input is a vector of 7 real numbers and the output is a vector of 3 real numbers. The equation is

p(t) = f(p(0), ṗ(0), t).

Solution The function f depends only on the mass of the ball and the gravity constant g.

12. Consider two carts connected by a spring. A force u is applied to one of the carts. Let the

positions of the carts be y1 , y2 and suppose we designate y2 to be the overall output. A

common way to model this is via free-body diagrams and Newton’s second law. Letting v

denote the force applied to cart 1 via the spring, we get equations

M1 ÿ1 = u − v

M2 ÿ2 = v

v = K(y1 − y2 ).

Let us assume zero initial conditions: y1 (0), ẏ1 (0), y2 (0), ẏ2 (0) all zero. Then the equations

have the general form

y1 = f1 (u, v)

y2 = f2 (v)

v = f3 (y1 , y2 ).

Figure 2.42: Block diagram with blocks f1, f2, f3 for Problem 12.

Figure 2.43: Two carts M1, M2 joined by a dashpot D, with force u applied to cart 1.

Solution See Figure 2.42.

13. Suppose f is a linear function from R2 to R3 , that is, 2 inputs and 3 outputs. Given f , how

could you get the matrix A such that f (x) = Ax? Hint: Apply an input so that the output

equals the first column of A.

Solution Apply the input (1, 0). Suppose the output equals (a11, a21, a31). Next, apply the input (0, 1) and suppose the output equals (a12, a22, a32). Then

A = [ a11 a12 ; a21 a22 ; a31 a32 ].

14. Consider Figure 2.43 showing two carts and a dashpot. (Recall that a dashpot is like a

spring except the force is proportional to the derivative of the change in length; D is the

proportionality constant.) The input is the force u and the positions of the carts are x1 , x2 .

The other state variables are x3 = ẋ1 , x4 = ẋ2 . Take M1 = 1, M2 = 1/2, D = 1. Derive the

matrices A, B in the state model ẋ = Ax + Bu,

Solution

A = [ 0 0 1 0 ; 0 0 0 1 ; 0 0 −1 1 ; 0 0 2 −2 ], B = [ 0 ; 0 ; 1 ; 0 ].

15. This problem concerns a beam balanced on a fulcrum. The angle of tilt of the beam is denoted

α(t); a torque, denoted τ (t), is applied to the beam; finally, a ball rolls on the beam at distance

d(t) from the fulcrum. Introduce the parameters:

J moment of inertia of the beam
Jb moment of inertia of the ball
R radius of the ball
M mass of the ball.

The equations of motion are given to you as

(Jb/R² + M) d̈ + M g sin α − M d α̇² = 0
(M d² + J + Jb) α̈ + 2 M d ḋ α̇ + M g d cos α = τ.

Put this into the form of a nonlinear state model with input τ.

Solution We can take the state to be

x = (x1, x2, x3, x4) = (α, α̇, d, ḋ).

The input is u = τ. The state equation is ẋ = f(x, u), where

f(x, u) = [ x2 ;
            (−2M x2 x3 x4 − M g x3 cos x1 + u)/(M x3² + J + Jb) ;
            x4 ;
            (−M g sin x1 + M x2² x3)/(Jb/R² + M) ].

16. Continue with the same ball-and-beam problem. Find all equilibrium points. Linearize the

state equation about the equilibrium point where α = d = 0.

Solution The equilibrium equation is f (x, u) = 0, that is,

x2 = 0

M gx3 cos x1 = u

x4 = 0

sin x1 = 0.

Thus the equilibrium points are

x = (0, 0, d, 0), u = M gd, d arbitrary

and

x = (π, 0, d, 0), u = −M gd, d arbitrary.

Now we are asked to linearize at

x = (0, 0, 0, 0), u = 0.

The Jacobians are computed to be

A = [ 0 1 0 0 ;
      0 0 −M g/(J + Jb) 0 ;
      0 0 0 1 ;
      −M g/(Jb/R² + M) 0 0 0 ],
B = [ 0 ; 1/(J + Jb) ; 0 ; 0 ].

Figure 2.44: Parallel connection: the input u drives both [ A1 B1 ; C1 D1 ] and [ A2 B2 ; C2 D2 ], and y = y1 + y2.

17. Linearize the function f : Rⁿ −→ R, f(x) = xᵀAx + bᵀx, at a point x0, where A is a symmetric matrix and b a vector.

Solution The linearized equation is ∆y = f′(x0)∆x, that is,

∆y = (2x0ᵀ A + bᵀ)∆x.

18. Return to the water tank. Find the equilibria and linearize about one of them.

Solution The state equation is

ẋ = f(x, u, d) = (1/A) d − (c/A) √x u.

The equilibria are those values (x0, u0, d0) satisfying d0 = c √x0 u0. The linearized equation is

∆ẋ = −(c u0 / (2A√x0)) ∆x − (c √x0 / A) ∆u + (1/A) ∆d.
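The partial derivatives in this linearization can be verified by finite differences at an equilibrium; the numbers below are illustrative.

```python
import numpy as np

A_tank, c = 2.0, 0.3  # illustrative values
x0, u0 = 1.0, 0.5
d0 = c * np.sqrt(x0) * u0  # equilibrium inflow

def f(x, u, d):
    return (d - c * np.sqrt(x) * u) / A_tank

h = 1e-7
df_dx = (f(x0 + h, u0, d0) - f(x0, u0, d0)) / h
df_du = (f(x0, u0 + h, d0) - f(x0, u0, d0)) / h
df_dd = (f(x0, u0, d0 + h) - f(x0, u0, d0)) / h
```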

19. Connect two subsystems in parallel as in Figure 2.44. Find the state model of the combined

system.

Solution If we take the state vector to be x = (x1, x2), then

A = [ A1 0 ; 0 A2 ], B = [ B1 ; B2 ], C = [ C1 C2 ], D = D1 + D2.

20. A propos of Section 2.6, comment on the following quote and statements:

(a) “I took a tenth-order model, but the real flexible beam is infinite dimensional.”


(b) From Maxwell’s equations, the model of a transmission line is a partial differential equa-

tion.

(c) Every real system is actually nonlinear.

Solution

(a) The statement was once made with the justification that a real beam has infinitely many

molecules. But the statement is illogical, since dimension is a mathematical concept and

a real system does not have a dimension.

(b) It is correct that Maxwell’s equations lead to a PDE model of a transmission line. How-

ever there is no unique model of a transmission line, since a PDE can be reduced to any

number of lumped models.

(c) Strictly speaking, this statement is illogical, since linearity is a mathematical concept.

However, it is frequently used loosely, for example, to reflect the fact that a real system

will saturate if an input is applied of too high amplitude.

Chapter 3

The Laplace Transform

We have seen that the systems of interest for us are modelled by linear constant-coefficient differ-

ential equations. We have also seen how to combine several coupled differential equations into the

form

ẋ = Ax + Bu

y = Cx + Du,

where u, x, y are vectors. The next step is to transform this from the time domain to the frequency

domain. The one-sided Laplace transform is the fundamental tool used in control engineering to do

this transformation. We will arrive at the model

Y(s) = G(s)U(s),

where U(s) and Y(s) are the Laplace transforms of, respectively, the input u(t) and the output y(t), and G(s) is the so-called transfer function, or transfer matrix if the dimension of U(s) or Y(s) is greater than 1. The function G(s) is determined by the matrices A, B, C, D.

It is interesting that the subject of communication systems uses the Fourier transform and the

subject of control systems uses the one-sided Laplace transform. The systems in the two subjects

are quite similar—they are linear time-invariant. However, communication systems are stable, while the plants in control systems frequently are not. By definition, velocity is the derivative of position, so a plant whose output is position contains an integrator, and, as a dynamical system, an integrator is unstable.

Regarding signals, in communication theory signals are modeled as being stationary. By contrast,

in control systems commands are abrupt.

In this chapter we go over the definition of Laplace transform, the conditions for existence, how

pole locations reflect qualitative behaviour in the time domain, the final-value theorem, transfer

functions, and stability.

3.1 Definition of the Laplace transform

1. Setup. Let f (t) be a continuous-time function. The time variable t can range over all time,

−∞ < t < ∞, or perhaps only over all non-negative time, 0 ≤ t < ∞. We assume f (t) is

real-valued, which is typically true in applications, where f (t) is a voltage, velocity, or some

other physical variable. We make two additional assumptions about f .


CHAPTER 3. THE LAPLACE TRANSFORM 44

2. Piecewise continuous. The first assumption is that f is piecewise continuous for t ≥ 0. This means

that f is continuous except possibly at a countable number of times 0 = t0 < t1 < . . . . The

widths of the intervals [tk , tk+1 ] must be such that tk+1 − tk ≥ b for some positive b and all k;

that is, we don’t allow f (t) to have infinitely many jumps during a finite time interval. For

example, every sinusoid is continuous and therefore piecewise continuous, a square wave is

not continuous but is piecewise continuous, and the blowing-up exponential et is continuous

and therefore piecewise continuous. On the other hand, this function is not, because it has

infinitely many jumps in finite time:

0, if t is a rational number

f (t) =

1, if t is not a rational number.

3. Exponentially bounded. The second assumption is that f is exponentially bounded, which means that if it blows up, there is some exponential that blows up faster. That is,

|f(t)| ≤ M e^{at}   (3.1)

for some M ≥ 0, some a ∈ R, and all t > 0. For example, f(t) = e^t is exponentially bounded but f(t) = e^{t²} is not. The signal

f(t) = { 1/(t − 1),  t ≠ 1
       { 0,          t = 1,

which blows up in finite time, is not exponentially bounded.

4. The Laplace integral. For a function satisfying these two conditions, its Laplace transform is then defined to be

F(s) = ∫₀^∞ f(t) e^{−st} dt,

where s is a complex variable. The function f(t) may or may not be zero for negative time—it doesn't matter, because the Laplace transform is one-sided and ignores the values of f(t) for t < 0. The piecewise continuity assumption implies that the Riemann integral

∫₀^T f(t) e^{−st} dt

exists for every finite T, and then the exponential boundedness assumption implies that the limit

lim_{T→∞} ∫₀^T f(t) e^{−st} dt

exists if the real part of s is sufficiently large. To see this latter point, letting Re s denote "real part of s" and assuming (3.1), we have

∫₀^T |f(t) e^{−st}| dt ≤ M ∫₀^T e^{at} |e^{−st}| dt
                      = M ∫₀^T e^{at} e^{−(Re s)t} dt
                      = M ∫₀^T e^{−((Re s)−a)t} dt.


Figure 3.2: The ROC for the Laplace transform of the unit step.

Write α = (Re s) − a and suppose α > 0. Then

∫₀^T |f(t) e^{−st}| dt ≤ M ∫₀^T e^{−αt} dt = (M/α)(1 − e^{−αT}) ≤ M/α.

Thus the limit

lim_{T→∞} ∫₀^T |f(t) e^{−st}| dt

exists, and therefore so does the limit

lim_{T→∞} ∫₀^T f(t) e^{−st} dt.

We conclude that F (s) exists provided (Re s) − a > 0, that is, Re s > a. Thus the region of

convergence (ROC) of the Laplace integral is an open right half-plane, as shown in Figure 3.1.

5. Example. The unit step f(t) = 1+(t). The smallest value of a such that (3.1) holds for some M is a = 0. The ROC, shown in Figure 3.2, is the open right half-plane Re s > 0. For s in the ROC,

F(s) = ∫₀^∞ f(t) e^{−st} dt
     = ∫₀^∞ e^{−st} dt
     = (−(1/s) e^{−st})|_{t=∞} − (−(1/s) e^{−st})|_{t=0}
     = 1/s.

Thus the Laplace transform of the unit step is 1/s and the ROC is Re s > 0. The ROC is an open right half-plane and the Laplace transform has a pole on the boundary of this half-plane, at s = 0. This pole is marked by an × in Figure 3.2. Observe that the constant function f(t) = 1, for all t, has the same Laplace transform as the unit step.

6. General remarks about the ROC. The ROC is an open right half-plane. Open means that it does not contain its boundary. There are no poles inside the ROC. If the ROC is not equal to the entire complex plane, there is a pole on the boundary of the ROC. For example, if we know that

F(s) = 1 / ((s² + 1)(2s − 1)),

then we know that the ROC is Re s > 1/2, because the poles are at ±j and 1/2. If we had thought that the ROC was to the right of the imaginary poles, we would have been wrong, because then the pole at 1/2 would have been inside the ROC. So, given F(s), to find the ROC simply locate all the poles, draw a vertical line through the pole (or poles) farthest to the right; then the ROC is to the right of this vertical line. In this way the ROC of a Laplace integral is uniquely determined by the Laplace transform F(s) itself, and therefore we don't need to record the ROC along with the function F(s).

7. Additional point. For the unit step, the Laplace integral converges if and only if s lies in

the ROC, the open right half-plane Re s > 0. But the function 1/s itself is well defined for

every s except s = 0. Therefore, once we have the Laplace transform F (s) we can, if we need

to, permit s to take on values outside the ROC (except at poles). It’s just that the Laplace

integral won’t converge for such s. This remark will later give justification for drawing the

Bode plot of 1/(s − 1): The Bode plot is determined by taking s = jω, whereas this s, on the

imaginary axis, lies outside the ROC.

8. Example. A blowing-up exponential. Let f(t) = e^{2t}. Clearly f(t) satisfies (3.1) for M = 1, a = 2. The ROC is thus Re s > 2. For such s we compute that

F(s) = ∫₀^∞ e^{2t} e^{−st} dt
     = ∫₀^∞ e^{−(s−2)t} dt
     = 1/(s − 2).
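These transforms are easy to check with a computer algebra system. The course tools are MATLAB and Scilab, but here is a sketch in Python using SymPy, whose laplace_transform conveniently returns the transform together with the convergence abscissa, i.e., the left edge of the ROC:

```python
from sympy import S, exp, symbols, laplace_transform

t, s = symbols('t s')

# Unit step (equivalently the constant 1, since the transform is one-sided)
F_step, a_step, _ = laplace_transform(S(1), t, s)
print(F_step, a_step)   # 1/s, abscissa 0: ROC is Re s > 0

# Blowing-up exponential e^{2t}
F_exp, a_exp, _ = laplace_transform(exp(2*t), t, s)
print(F_exp, a_exp)     # 1/(s - 2), abscissa 2: ROC is Re s > 2
```

The returned abscissa is exactly the boundary value discussed above: the right half-plane to its right is where the Laplace integral converges.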


9. Example. A sinusoid. Let f(t) = sin(ωt), where ω is a positive constant frequency. Obviously, |f(t)| ≤ 1 for all t ≥ 0, and therefore f(t) satisfies (3.1) for M = 1, a = 0. The ROC is thus Re s > 0. To compute F(s) it is convenient to use Euler's formula. This gives

f(t) = sin(ωt) = (1/(2j)) (e^{jωt} − e^{−jωt}).

Then

F(s) = ∫₀^∞ (1/(2j)) (e^{jωt} − e^{−jωt}) e^{−st} dt
     = (1/(2j)) ∫₀^∞ (e^{(jω−s)t} − e^{−(jω+s)t}) dt
     = −(1/(2j)) (1/(jω − s) + 1/(jω + s))
     = ω/(s² + ω²).

10. Example. A derivative. Regarding the Laplace transform of ḟ(t), it is convenient to assume f(t) does not have a jump at t = 0, for otherwise f(t) is not differentiable at t = 0. Integrating by parts, we have

∫₀^∞ ḟ(t) e^{−st} dt = (f(t) e^{−st})|_{t=0}^{t=∞} + ∫₀^∞ f(t) s e^{−st} dt
                      = −f(0) + sF(s).

In particular, if the initial value of f(t) equals 0, i.e., f(0) = 0, then differentiating in the time domain corresponds to multiplying by s in the s-domain.

11. Table. On the next page is a short table of Laplace transforms. Longer ones can be found on the Web. You should derive some of the entries. Euler's formula is very helpful. For example,

e^{at} sin ωt = (1/(2j)) (e^{(a+jω)t} − e^{(a−jω)t}).


f(t)                      F(s)
1+(t)  (unit step)        1/s
e^{at}                    1/(s − a)
c1 f1(t) + c2 f2(t)       c1 F1(s) + c2 F2(s)      (linearity)
f(t) ∗ g(t)               F(s) G(s)                (convolution)
t^n                       n!/s^{n+1}
sin ωt                    ω/(s² + ω²)
cos ωt                    s/(s² + ω²)
e^{at} sin ωt             ω/((s − a)² + ω²)
e^{at} cos ωt             (s − a)/((s − a)² + ω²)
t sin ωt                  2ωs/(s² + ω²)²
t cos ωt                  (s² − ω²)/(s² + ω²)²

12. Inversion. The inversion problem is to find f(t) given F(s). This is normally done using a table. For example, given F(s) = (3s + 17)/(s² − 4), let us find f(t). We use partial fraction expansion to get terms that are in the table:

F(s) = c1/(s − 2) + c2/(s + 2),   c1 = 23/4,  c2 = −11/4.

Then we invert F(s) term by term using linearity of the Laplace transform (Section 3.2):

f(t) = c1 e^{2t} + c2 e^{−2t} = (23/4) e^{2t} − (11/4) e^{−2t}.

We do not know the value of f(t) for t < 0.
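Partial fraction expansions like this one can be checked symbolically. A Python/SymPy sketch (in MATLAB the residue command does the same job):

```python
from sympy import symbols, apart, Rational, simplify

s = symbols('s')
F = (3*s + 17)/(s**2 - 4)

# Expand into terms that appear in the transform table
expansion = apart(F, s)
expected = Rational(23, 4)/(s - 2) - Rational(11, 4)/(s + 2)
assert simplify(expansion - expected) == 0
print(expansion)
```

Each resulting term c/(s − p) inverts, from the table, to c e^{pt}.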



Figure 3.3: The signal ramps up to a constant value.

13. Impulse. You may have noticed that the table does not contain the unit impulse δ(t). We have

elected not to include it for two reasons. First, it requires special handling. The impulse is

neither piecewise continuous nor exponentially bounded. In fact, it is not even a real function

because ∞ is not a number. There is a rigorous way to handle δ(t), but it is beyond the scope

of this book. And without the rigorous framework it is not possible to answer, for example,

if δ(t)1+ (t) is defined, if δ(t)2 is defined, if δ̇(t) is defined, if the derivative of the step equals

the impulse, and so on. The second reason for not including δ(t) is that it is not important

in control engineering. The impulse is not a physical signal.

3.2 Linearity and convolution

1. Linearity. The Laplace transform maps a class of time-domain functions f(t) into a class of complex-valued functions F(s). The mapping f(t) ↦ F(s) is linear, that is, the Laplace transform of a1 f1(t) + a2 f2(t) equals a1 F1(s) + a2 F2(s).

2. Example. Let f(t) denote the signal in Figure 3.3 that ramps up to the constant 1 at time t = 1. We shall use linearity to find the Laplace transform of this signal. We have f = f1 + f2, where f1(t) = t, the unit ramp starting at time 0, and f2(t) is the ramp of slope −1 starting at time 1:

f2(t) = { 0,            0 ≤ t ≤ 1
        { −f1(t − 1),   t > 1.


By linearity, F(s) = F1(s) + F2(s). From the table or direct computation, F1(s) = 1/s². Also

F2(s) = ∫₀^∞ f2(t) e^{−st} dt
      = −∫₁^∞ f1(t − 1) e^{−st} dt
      = −∫₀^∞ f1(τ) e^{−s(1+τ)} dτ    (τ = t − 1)
      = −e^{−s} F1(s)
      = −e^{−s} (1/s²).

Thus

F(s) = (1 − e^{−s})/s².
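The result can be confirmed by integrating the two pieces of f directly. A Python/SymPy sketch, where declaring s positive stands in for "Re s > 0":

```python
from sympy import symbols, exp, integrate, oo, simplify

t = symbols('t')
s = symbols('s', positive=True)  # stands in for Re s > 0

# f(t) = t on [0, 1] and f(t) = 1 for t > 1
F = integrate(t*exp(-s*t), (t, 0, 1)) + integrate(exp(-s*t), (t, 1, oo))

assert simplify(F - (1 - exp(-s))/s**2) == 0
```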

3. Superposition. A special case of linearity is that the Laplace transform of a sum equals the sum of the Laplace transforms: if f(t) = f1(t) + f2(t), then F(s) = F1(s) + F2(s).

4. Convolution. Is there a corresponding product rule? That is, if

f(t) = g(t)h(t),

does it follow that

F(s) = G(s)H(s)?

Certainly not. Rather, multiplication in the s-domain corresponds to convolution in the time domain. For this treatment of convolution we shall assume that g(t) and h(t) equal zero for t < 0. Such signals are said to be causal. The definition of the convolution of g(t) and h(t) is

f(t) = ∫_{−∞}^{∞} g(t − τ) h(τ) dτ.

We see that f(t) is causal too. The conventional notation for convolution is

f(t) = g(t) ∗ h(t).

Actually, this notation is incorrect. It suggests that f at time t is obtained from g and h at time t. This is obviously false: f at time t depends on g and h over their whole domains of definition. The correct way to write convolution is f = g ∗ h, or, if you want to show t, f(t) = (g ∗ h)(t). However, we shall stick to common practice and write f(t) = g(t) ∗ h(t).


5. Theorem. If g(t) and h(t) are causal and f(t) = ∫_{−∞}^{∞} g(t − τ)h(τ) dτ, then F(s) = G(s)H(s).

6. Proof. All the integrals in this proof range from −∞ to ∞ and hence we shall not write the limits. We have

F(s) = ∫ f(t) e^{−st} dt
     = ∫ (∫ g(t − τ) h(τ) dτ) e^{−st} dt
     = ∫∫ g(t − τ) h(τ) e^{−st} dt dτ.

Change t − τ to r:

F(s) = ∫∫ g(r) h(τ) e^{−s(τ+r)} dr dτ
     = ∫ (∫ g(r) e^{−sr} dr) h(τ) e^{−sτ} dτ.

The inner integral equals G(s), which can come outside the outer integral:

F(s) = G(s) ∫ h(τ) e^{−sτ} dτ = G(s)H(s).
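A numeric spot-check of the theorem: for the causal signals g(t) = e^{−t} and h(t) = e^{−2t}, the convolution works out to e^{−t} − e^{−2t}, and its transform should equal G(s)H(s) = 1/((s + 1)(s + 2)). A sketch in Python with SciPy; the test point s = 3 is arbitrary (any point in the ROC would do):

```python
import numpy as np
from scipy.integrate import quad

s0 = 3.0  # arbitrary real test point in the ROC

# The convolution of g(t) = exp(-t) and h(t) = exp(-2t) is exp(-t) - exp(-2t)
f = lambda t: np.exp(-t) - np.exp(-2*t)

# Laplace transform of the convolution, computed numerically
F, _ = quad(lambda t: f(t)*np.exp(-s0*t), 0, np.inf)

# Product of the individual transforms: 1/(s+1) * 1/(s+2)
GH = 1/(s0 + 1) * 1/(s0 + 2)

assert abs(F - GH) < 1e-8
```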

3.3 Pole locations

In this section we study how the poles of F(s) affect the qualitative behaviour of f(t). We will do this only for a special class of F(s), namely, rational functions. We begin by defining what those are.

1. Example. We saw that the Laplace transform of the unit step f(t) = 1+(t) is F(s) = 1/s. This function of s is an example of a rational function, meaning it has a numerator and a denominator, both of which are polynomials. Other examples of rational functions are

1/s,  1/s²,  s/(s² + 1).

These are the Laplace transforms of, respectively,

1,  t,  cos t.

An example of a non-rational Laplace transform is e^{−s}. (Rational Laplace transforms where the numerator degree is not less than the denominator degree, such as 1 and s, have impulses in the time domain. For example, the inverse Laplace transform of s is δ̇(t).)


2. Poles. Let F(s) be rational. Its poles are the values of s that make the denominator equal to zero. The locations of the poles give us information about the behaviour of f(t). Consider for example the Laplace transform pair

f(t) = e^{−t},  F(s) = 1/(s + 1),  pole = −1.

We observe that a single negative pole corresponds to a decaying exponential in the time domain. This can be illustrated by a sketch depicting on the left the pole location in the complex plane and on the right the graph of f(t). A similar sketch depicts that a single positive pole corresponds to a blowing-up exponential in the time domain.

3. In general:

(a) A single real negative pole corresponds to a decaying exponential. The farther left the pole is, the faster f(t) decays.
(b) A single pole at s = 0 corresponds to a constant in the time domain.
(c) A single real positive pole corresponds to an exponential that blows up. The farther right the pole is, the faster f(t) blows up.
(d) A complex conjugate pair of poles with Re s < 0 corresponds to a sinusoid with decaying amplitude.
(e) A complex conjugate pair of poles on the imaginary axis corresponds to a sinusoid with constant amplitude.
(f) A complex conjugate pair of poles with Re s > 0 corresponds to a sinusoid with amplitude that blows up.

Then there are the cases where the poles have higher multiplicity than one. We do three cases for illustration:


(g) A real negative pole of multiplicity two. Example:

F(s) = 1/(s + 1)²  ⟹  f(t) = t e^{−t}.

(h) A real positive pole of multiplicity two. Example:

F(s) = 1/(s − 1)²  ⟹  f(t) = t e^{t}.

(i) A complex conjugate pair of poles on the imaginary axis of multiplicity two corresponds to a sinusoid with ramp-like amplitude. Example:

F(s) = 1/(s² + 1)²  ⟹  f(t) = (1/2)(sin t − t cos t).

4. Goodness. Because signals that blow up are (usually) unwanted, to a control engineer the left

half-plane is “good” and the right half-plane is “bad.”
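Pole locations are easy to extract numerically: they are the roots of the denominator polynomial (roots in MATLAB/Scilab). A Python/NumPy sketch for the earlier example F(s) = 1/((s² + 1)(2s − 1)), classifying each pole as good or bad:

```python
import numpy as np

# Denominator of F(s) = 1/((s^2 + 1)(2s - 1)), expanded: 2s^3 - s^2 + 2s - 1
den = [2, -1, 2, -1]
poles = np.roots(den)

for p in sorted(poles, key=lambda z: z.real):
    side = "good (left half-plane)" if p.real < 0 else "bad (not in the open LHP)"
    print(f"pole at {p:.3f}: {side}")

# The rightmost pole sets the ROC boundary: Re s > 1/2
assert np.isclose(max(p.real for p in poles), 0.5)
```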

5. Caution. As a final remark, it may have occurred to you that pole locations alone are a good indicator of a signal's behaviour. Caution is advised. For example, suppose F(s) has the poles {−2 ± 10j, −10}. They may suggest severe oscillations in f(t). But in the inverse Laplace transform the oscillatory term may enter with a very small coefficient, so that little oscillation is actually visible: the residues matter as well as the pole locations.

3.4 Bounded signals and the final-value theorem

In this section we continue our study of how pole locations affect qualitative behaviour in the time domain. The questions in this section are, when is f(t) bounded and when does it have a final value, that is, when does lim_{t→∞} f(t) exist, and, if it does exist, what is the value of this limit? We shall answer these questions for the case that F(s) is a rational function.

1. Bounded. A signal f (t) is bounded if |f (t)| ≤ M for some M and all t ≥ 0. For example,

1+ (t) and cos(t) are bounded but et is not. Unbounded signals are potentially harmful. For

example, an unbounded voltage in a circuit would eventually fry the electronics. On the

other hand, if a radio signal is sent from earth toward a remote star, its distance from earth

is (virtually) unbounded, but no harm is done. The final value of a signal f (t) may be of

importance in a control system. For example, f (t) may be an error signal, in which case its

final value is the steady-state error.

2. Terminology. Since F(s) is rational, its numerator, N(s), and its denominator, D(s), have well-defined degrees. We say F(s) is proper if deg N(s) ≤ deg D(s), strictly proper if deg N(s) < deg D(s), and improper if deg N(s) > deg D(s). We shall assume F(s) is strictly proper, for otherwise f(t) contains an impulse and/or its derivative.

3. Decomposition. By partial-fraction expansion, write F(s) = G1(s) + G2(s) + G3(s), where G1(s) has all the poles with negative real part, G2(s) has all the poles at s = 0, and G3(s) has all the other poles.

4. Example. Suppose

G1(s) = 1/(s + 1) + 2/((s + 2)² + 20)
G2(s) = 2/s + 4/s²
G3(s) = 1/(s² + 1) + 2/(s − 10).

From the table, the inverse Laplace transform is

g1(t) = e^{−t} + (2/√20) e^{−2t} sin(√20 t)
g2(t) = 2 + 4t
g3(t) = sin t + 2e^{10t}.

Notice that g1(t) is bounded and converges to 0 and therefore it has a final value; g2(t) blows up because there's a repeated pole at s = 0; g3(t) blows up. So for this example, f(t) is unbounded and does not have a final value.

5. Example. Suppose instead

G1(s) = 1/(s + 1) + 2/((s + 2)² + 20),  G2(s) = 2/s,  G3(s) = 0.

Then f(t) is bounded and does have a final value, namely, 2. Notice that the pole of G2(s) is simple, i.e., the multiplicity is 1. This example surely leads you to believe the following two facts.


6. Fact. Characterization of boundedness. Suppose F(s) is rational and strictly proper.

(a) If F(s) has no poles in Re s ≥ 0, then f(t) is bounded.
(b) If F(s) has no poles in Re s ≥ 0 except a simple pole at s = 0 and/or some simple complex-conjugate pairs of poles on the imaginary axis, then f(t) is bounded.
(c) In all other cases f(t) is unbounded.

7. Fact. Characterization of having a final value. Suppose F(s) is rational and strictly proper.

(a) If F(s) has no poles in Re s ≥ 0, then f(t) has a final value, namely 0.
(b) If F(s) has no poles in Re s ≥ 0 except a simple pole at s = 0, then f(t) has a final value, which equals lim_{s→0} sF(s).
(c) In all other cases f(t) does not have a final value.

8. Be careful. Remember that you have to know that f(t) has a final value, by examining the poles of F(s), before you can calculate lim_{s→0} sF(s) and claim it is the final value. Many students have been tricked by Problem 4.
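The two-step discipline, checking the poles first and only then evaluating lim_{s→0} sF(s), can be mirrored in code. A Python sketch with a hypothetical helper final_value (the name and tolerance are my choices); it refuses to answer when the pole test fails:

```python
import numpy as np
from sympy import symbols, limit, Poly

s = symbols('s')

def final_value(num, den):
    """Return lim f(t) for F(s) = num/den, or None if no final value exists.

    Allowed: poles in Re s < 0, plus at most a simple pole at s = 0.
    """
    poles = np.roots(den)
    at_origin = np.isclose(poles, 0)
    if at_origin.sum() > 1 or any(p.real >= -1e-12 for p in poles[~at_origin]):
        return None  # repeated pole at 0, or another pole in Re s >= 0
    N = Poly(num, s).as_expr()
    D = Poly(den, s).as_expr()
    return limit(s*N/D, s, 0)

print(final_value([2], [1, 1, 0]))   # F = 2/(s(s+1)): final value 2
print(final_value([1], [1, -1]))     # F = 1/(s-1): None, no final value
```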

We return to a system with a state model:

ẋ = Ax + Bu,  y = Cx + Du.   (3.2)

The system is linear and, because A, B, C, D are constant matrices, time invariant. The input is u and the output is y. We are going to take Laplace transforms of these equations.

1. Extension to vectors. The signals u, x, y may be vectors. We define the Laplace transform of a vector u(t) to be the vector of Laplace transforms. That is, if

u(t) = (u1(t), . . . , um(t)),

then we define

U(s) = (U1(s), . . . , Um(s)).

Likewise for X(s) and Y(s). From the Laplace transform table, the Laplace transform of the derivative ẋ(t) equals sX(s) − x(0); this assumes sufficient smoothness. The transfer function of the system (3.2) is the function G(s) satisfying Y(s) = G(s)U(s) when we take Laplace transforms of (3.2) with x(0) = 0.


2. Example. Consider the system (3.2) with

A = [ 0 1 ; 1 0 ],  B = [ 0 ; 1 ],  C = [ 1 0 ],  D = 0,

where rows of a matrix are separated by semicolons. Taking Laplace transforms with x(0) = 0 gives

sX(s) = AX(s) + BU(s)   (3.3)
Y(s) = CX(s).   (3.4)

On the left-hand side of (3.3), X(s) is multiplied by the variable s, while on the right-hand side X(s) is multiplied by the matrix A. To be able to get a common coefficient of X(s) we have to turn the coefficient s into a matrix. We do that via the 2 × 2 identity matrix I = [ 1 0 ; 0 1 ]. Then sX(s) = sIX(s). Using this in (3.3) gives

(sI − A)X(s) = BU(s),   (3.5)

where

sI − A = [ s −1 ; −1 s ].

Recall that the inverse of a matrix equals the adjoint matrix divided by the determinant:

(sI − A)^{−1} = (1/det(sI − A)) adj(sI − A).   (3.6)

For this matrix, the adjoint equals [ s 1 ; 1 s ] and the determinant equals s² − 1. Thus we have found that sI − A has an inverse:

(sI − A)^{−1} = (1/(s² − 1)) [ s 1 ; 1 s ].

Therefore

X(s) = (1/det(sI − A)) adj(sI − A) B U(s).

To get Y(s) we need to multiply X(s) by the matrix C. Since 1/det(sI − A) is a scalar, it commutes with C and we arrive at

Y(s) = C (1/det(sI − A)) adj(sI − A) B U(s)


and

C (1/det(sI − A)) adj(sI − A) B = (1/(s² − 1)) [ 1 0 ] [ s 1 ; 1 s ] [ 0 ; 1 ] = 1/(s² − 1).

We have thus derived the transfer function from u to y:

Y(s) = G(s)U(s),  G(s) = 1/(s² − 1).
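The computation C(sI − A)^{−1}B is automated in standard packages (ss2tf in MATLAB/Scilab). A Python/SciPy sketch for the example above:

```python
import numpy as np
from scipy.signal import ss2tf

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)

# Expect G(s) = 1/(s^2 - 1): numerator 1, denominator s^2 + 0s - 1
assert np.allclose(den, [1.0, 0.0, -1.0])
assert np.allclose(num[0], [0.0, 0.0, 1.0])
```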

3. Example. An example use of this transfer function: Suppose the input is the unit step: u(t) = 1+(t). Then

Y(s) = 1/(s(s² − 1))
     = 1/(s(s + 1)(s − 1))
     = −1/s + (1/2)/(s − 1) + (1/2)/(s + 1)

and so

y(t) = −1 + (1/2)e^t + (1/2)e^{−t},  t ≥ 0.

To recap, in this example we took a state model and, for a step input, we found the output using Laplace transforms. Notice that the transfer function has a pole in the right half-plane; hence Y(s) does too; and hence y(t) does not have a final value.
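The closed form y(t) = −1 + (1/2)e^t + (1/2)e^{−t} = cosh t − 1 can be cross-checked against a numerical step response. A Python/SciPy sketch; the short horizon keeps the unstable growth numerically manageable:

```python
import numpy as np
from scipy.signal import step

# G(s) = 1/(s^2 - 1), unit-step input
T = np.linspace(0, 3, 300)
T, yout = step(([1.0], [1.0, 0.0, -1.0]), T=T)

# Closed form from the partial-fraction expansion: cosh(t) - 1
assert np.allclose(yout, np.cosh(T) - 1.0, rtol=1e-4, atol=1e-6)
```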

4. Extension. Here the input and output are vectors. Suppose the four state matrices are

A = [ 0 0 1 0 ; 0 0 0 1 ; −1 1 0 0 ; 1 −1 0 0 ],  B = [ 0 0 ; 0 0 ; 1 0 ; 0 1 ],
C = [ 1 0 0 0 ; 0 1 0 0 ],  D = 0.

Then

G(s) = C(sI − A)^{−1}B = (1/(s²(s² + 2))) [ s² + 1  1 ; 1  s² + 1 ].

Since Y(s) = G(s)U(s), we call G(s) the transfer matrix from u to y. The dimension of both u and y is 2 and consequently G(s) is a 2 × 2 matrix. Writing out the components of Y(s), G(s), and U(s) gives

[ Y1(s) ; Y2(s) ] = [ G11(s)  G12(s) ; G21(s)  G22(s) ] [ U1(s) ; U2(s) ].


(Figure: an RC circuit with voltage source u, resistor R, capacitor C, current i, and output y the voltage across the capacitor.)

5. Summary. For the state model

ẋ = Ax + Bu,  y = Cx + Du

the transfer matrix from u to y is

G(s) = C(sI − A)^{−1}B + D.   (3.7)

6. State models. Consider (3.7) in the single input, single output case (G(s) is 1 × 1) and with D = 0. Then G(s) has a numerator and a denominator and these are both polynomials. In view of (3.6) we have

G(s) = C (1/det(sI − A)) adj(sI − A) B = N(s)/D(s),   (3.8)

where N(s) = C adj(sI − A) B and D(s) = det(sI − A). In rare pathological examples the two polynomials in (3.8) can have a common factor, which rightly should be cancelled. This pathological case won't occur in this course.

7. Generality. The method summarized in Paragraph 5 is general. However, some systems are

sufficiently simple that we can get the transfer function directly, without first getting a state

model.

8. Example. For the RC circuit, Kirchhoff's voltage law around the loop and the capacitor law give

−u + Ri + y = 0,  i = C dy/dt.


Therefore

RC ẏ + y = u.

Taking Laplace transforms with y(0) = 0 gives the transfer function

G(s) = Y(s)/U(s) = 1/(RCs + 1).

Alternatively, by the impedance method,

G(s) = (1/Cs) / (R + 1/Cs) = 1/(RCs + 1).

The product RC has units of seconds and is called the time constant of the circuit or of the

transfer function. The pole is at s = −1/RC and therefore the smaller is the time constant,

the farther left is the pole. The DC gain of the circuit is G(0) = 1. That is, if u(t) is a

constant voltage, then in steady state so is y(t), and y(t) = u(t). This is a lowpass circuit. If

we had taken the output to be the voltage drop across the resistor, then the transfer function

would have been

RCs

G(s) = .

RCs + 1

This is a highpass circuit.
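The time-constant interpretation: for a step input the lowpass output reaches 1 − e^{−1} ≈ 63.2% of its final value at t = RC. A Python/SciPy sketch with illustrative component values R = 1 kΩ and C = 1 µF, so RC = 1 ms:

```python
import numpy as np
from scipy.signal import step

R, Cap = 1e3, 1e-6          # 1 kOhm, 1 uF  ->  time constant RC = 1 ms
tau = R*Cap

T = np.linspace(0, 5*tau, 500)
T, yout = step(([1.0], [tau, 1.0]), T=T)   # G(s) = 1/(RCs + 1)

# At t = tau the step response is 1 - 1/e of the DC value G(0) = 1
k = np.argmin(abs(T - tau))
assert abs(yout[k] - (1 - np.exp(-1))) < 1e-2
```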

9. Some standard transfer functions:

(a) G(s) = 2 represents a pure gain. The output equals twice the input, for every input.
(b) G(s) = 1/s is the ideal integrator. The input and output are related by y(t) = ∫_{−∞}^t u(τ) dτ.
(c) G(s) = 1/s² is the double integrator.
(d) G(s) = s is the differentiator. The input and output are related by y(t) = u̇(t). A differentiator can be at best an approximation to a real system. For example, if the input is sin(ωt), then the output is ω cos(ωt). So as ω becomes larger and larger, the output amplitude becomes larger and larger too. This cannot happen in a real circuit.
(e) G(s) = e^{−τs} with τ > 0 is a time delay system—note that the transfer function is not rational.
(f) G(s) = ωn²/(s² + 2ζωn s + ωn²) is the standard second-order transfer function, where ωn > 0 is the natural frequency and ζ ≥ 0 is the damping constant. An RLC circuit has this transfer function.


(g) G(s) = K1 + K2/s + K3 s is the transfer function of the proportional-integral-derivative (PID) controller. Notice that G(s) is improper, since, if we put it in the form

G(s) = (K3 s² + K1 s + K2)/s,

we see that the numerator has higher degree than the denominator. Again, G(s) can at best be only an approximation to a real system. A better approximation might be

G(s) = K1 + K2/s + K3 s/(εs + 1),

where ε is a small positive number.
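The difference between the ideal derivative K3 s and the filtered derivative K3 s/(εs + 1) shows up at high frequency: the first grows without bound, the second levels off at K3/ε. A small Python sketch evaluating both at s = jω; the gain and ε values are illustrative:

```python
import numpy as np

K3, eps = 1.0, 0.01

for w in [1.0, 100.0, 10000.0]:
    jw = 1j*w
    ideal = abs(K3*jw)                    # grows like K3*w
    filtered = abs(K3*jw/(eps*jw + 1))    # levels off near K3/eps = 100
    print(f"w = {w:8.0f}: |ideal| = {ideal:10.1f}, |filtered| = {filtered:8.2f}")

# The filtered derivative never exceeds K3/eps
assert abs(K3*1j*10000/(eps*1j*10000 + 1)) < K3/eps
```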

10. Summary. General procedure for getting the transfer function of a system:

(a) Apply the laws of physics etc. to get differential equations governing the behaviour of

the system. Put these equations in state form. In general these are nonlinear.

(b) Find an equilibrium, if there is one. If there are several equilibria, you have to select

one. If there isn’t even one, this method doesn’t apply.

(c) Linearize about the equilibrium point.

(d) If the linearized system is time invariant, take Laplace transforms with zero initial state.

(e) Solve for the output Y (s) in terms of the input U (s).

In general G(s) is a matrix: If dim u = m and dim y = p (m inputs, p outputs), then G(s) is

p × m. In the single-input, single-output case, G(s) is a scalar-valued transfer function.

11. Realization. There is a converse problem to getting the transfer function and that is, given a transfer function, to find a corresponding state model. That is, given G(s), find A, B, C, D such that

C(sI − A)^{−1}B + D = G(s).

The state model is called a realization of G(s). The state matrices are never unique: Each G(s) has an infinite number of state models. But it is a fact that every proper, rational G(s) has a state realization. Let us see how to do this in the single-input/single-output case, where G(s) is 1 × 1.

12. Example. Take

G(s) = 1/(2s² − s + 3).


Then

Y(s)/U(s) = 1/(2s² − s + 3),

or equivalently,

(2s² − s + 3)Y(s) = U(s).

Now go back to the time domain. We know from the table that sY(s) maps to ẏ(t). Likewise s²Y(s) maps to ÿ(t). Thus the corresponding differential equation model is

2ÿ − ẏ + 3y = u.

Defining the state variables x1 = y, x2 = ẏ, we get

ẋ1 = x2
ẋ2 = (1/2)x2 − (3/2)x1 + (1/2)u
y = x1

and thus

A = [ 0 1 ; −3/2 1/2 ],  B = [ 0 ; 1/2 ],  C = [ 1 0 ],  D = 0.

This procedure works whenever G(s) has the form

G(s) = constant / (polynomial of degree n).

13. Example. Now suppose the numerator is not constant:

G(s) = (s − 2)/(2s² − s + 3).

Introduce an auxiliary signal V(s):

Y(s) = (s − 2)V(s),  V(s) = (1/(2s² − s + 3)) U(s).

Then the transfer function from U to V has a constant numerator. We have

2v̈ − v̇ + 3v = u
y = v̇ − 2v.

Defining

x1 = v,  x2 = v̇,


we get

ẋ1 = x2
ẋ2 = (1/2)x2 − (3/2)x1 + (1/2)u
y = x2 − 2x1

and so

A = [ 0 1 ; −3/2 1/2 ],  B = [ 0 ; 1/2 ],  C = [ −2 1 ],  D = 0.

14. Proper but not strictly proper. If the numerator and denominator of G(s) have the same

degree, then we can divide the numerator by the denominator and get

G(s) = c + G1 (s),

where c is a constant and G1 (s) is strictly proper. In this case we get A, B, C to realize G1 (s),

and then just set D = c.
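Realizations can also be produced mechanically (tf2ss in MATLAB/Scilab, scipy.signal.tf2ss in Python). Different tools return different, but equivalent, state matrices, so the meaningful check is that the realization reproduces G(s). A sketch for G(s) = (s − 2)/(2s² − s + 3), comparing values at an arbitrary test point:

```python
import numpy as np
from scipy.signal import tf2ss, ss2tf

num, den = [1.0, -2.0], [2.0, -1.0, 3.0]   # G(s) = (s - 2)/(2s^2 - s + 3)

A, B, C, D = tf2ss(num, den)               # one realization (SciPy's canonical form)
num2, den2 = ss2tf(A, B, C, D)             # back to a transfer function

# The realization is not unique, but G(s) is: compare at a test point s0
s0 = 1.0 + 1.0j
G0 = np.polyval(num, s0)/np.polyval(den, s0)
G1 = np.polyval(num2[0], s0)/np.polyval(den2, s0)
assert np.isclose(G0, G1)
```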

3.6 Stability

Stability is one of the most important concepts in this course. Everyone has an intuitive understanding of what stability and instability mean. In recent years we have seen the governments of many countries overthrown; if the constitution of a country is not widely accepted or the government does not adhere to the rule of law, instability can result. If you have cancer, your body is subject to rampant growth of cancer cells—this is a form of instability. If drivers on the highway drive too closely to the cars ahead, the system is unstable, for if one car decelerates quickly, a pileup can easily result. On the other hand, if you play a game of basketball, your heart rate will increase, but when the game is over, assuming you are healthy, your heart rate will return to normal.

1. Intuitive definition. Very roughly, a system is stable if it has these two properties: It returns

to equilibrium after a perturbation of its state; the system can accommodate a persistent

disturbance.

2. Example. Figure 3.5 shows a cart attached to the wall by a spring and damper and possibly subjected to wind gusts. If you do a free-body diagram of the cart, and account for the forces through the spring and the damper, from Newton's second law you will get

M ÿ = d − Ky − Dẏ.

Taking M = K = D = 1 for simplicity,

ÿ = d − y − ẏ.


Taking, as usual, the state variables x1 = y, x2 = ẏ and the state vector x = (x1, x2), and taking the output to be y, lead to the state model

ẋ = Ax + Ed
y = Cx

A = [ 0 1 ; −1 −1 ],  E = [ 0 ; 1 ],  C = [ 1 0 ].

There are two questions to be asked concerning stability for this system, which we treat one

at a time.

3. First question. In the absence of an input d, will x(t) converge to 0 as t goes to infinity for every x(0)? From our study of pole locations we know how to answer this question. Take Laplace transforms of ẋ = Ax. We get

sX(s) − x(0) = AX(s),  that is,  X(s) = (sI − A)^{−1}x(0).

Recall from (3.6) that the inverse of sI − A equals the adjoint of the matrix divided by the determinant of the matrix. Thus for A as above

[ X1(s) ; X2(s) ] = (1/det(sI − A)) adj(sI − A) x(0)
                  = (1/(s² + s + 1)) [ s+1 1 ; −1 s ] [ x1(0) ; x2(0) ]

and so

X1(s) = ((s + 1)x1(0) + x2(0))/(s² + s + 1),  X2(s) = (−x1(0) + s x2(0))/(s² + s + 1).

We now apply Section 3.4, Paragraph 7. The zeros of s² + s + 1 have negative real part. Thus for every x(0), the final value of x(t) is zero.


4. Second question. If d(t) is bounded, does it follow that y(t) is bounded too? That is, can a bounded wind gust make the cart's position become unbounded? It seems unlikely. For this second question we may as well assume x(0) = 0 because we've addressed the initial condition response in the first question. Again, we take Laplace transforms:

sX(s) = AX(s) + ED(s)
Y(s) = CX(s).

Solve for Y(s):

Y(s) = C (1/det(sI − A)) adj(sI − A) E D(s)
     = (1/(s² + s + 1)) [ 1 0 ] [ s+1 1 ; −1 s ] [ 0 ; 1 ] D(s)
     = (1/(s² + s + 1)) D(s).

So the question reduces to, given

Y(s) = (1/(s² + s + 1)) D(s),

if d(t) is bounded, does it follow that y(t) is bounded? The answer is yes, from Section 3.4, Paragraph 6.

5. General definition of stability. For the system modeled by

ẋ = Ax + Bu

y = Cx + Du.

This system is defined to be stable provided,

with u = 0 and x(0) arbitrary, the final value of x(t) equals 0 (3.9)

and

with x(0) = 0, y(t) is bounded for every bounded u(t). (3.10)

Notice, first, that (3.9) holds if and only if the zeros of the polynomial det(sI − A) all have

negative real part, and, secondly, if (3.9) holds, then automatically so does (3.10).

6. Terminology. The preceding definition of stability is a little different from that in other books.

Our definition combines two properties, namely, (3.9) and (3.10). In other books, these two

properties are kept separate. Condition (3.9) is called asymptotic stability in the sense of

Lyapunov and condition (3.10) is called bounded-input, bounded-output stability. If you go

on to more advanced courses in control, in particular, a course in nonlinear control, you will

have to separate the two conditions.

7. More terminology. You may recall from linear algebra that the polynomial det(sI − A) is

called the characteristic polynomial of the matrix A. Also, the zeros of this polynomial

equal the eigenvalues of A. Using the term eigenvalues, we have the following simple, concise

theorem on stability.

Figure 3.6: An undamped mass/spring system is unstable.

8. Theorem. The system

ẋ = Ax + Bu

y = Cx + Du

is stable if and only if the eigenvalues of A all have negative real part.
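The theorem turns stability checking into an eigenvalue computation, which is exactly how it is done in practice (eig in MATLAB, spec in Scilab). A Python/NumPy sketch applied to the cart example above:

```python
import numpy as np

def is_stable(A):
    """Stable iff every eigenvalue of A has negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A_cart = np.array([[0.0, 1.0],
                   [-1.0, -1.0]])   # cart with spring and damper
assert is_stable(A_cart)            # eigenvalues (-1 +/- j sqrt(3))/2

A_int = np.zeros((1, 1))            # integrator, A = 0
assert not is_stable(A_int)
```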

9. Example. The integrator, ẋ = u, y = x, is not stable, because A = 0 and its eigenvalue is not negative. The system ẋ = x is unstable.

10. Examples by pictures. A cart in Figure 3.6 and a circuit in Figure 3.7.

11. The Routh-Hurwitz criterion. In practice, one checks feedback stability using MATLAB or Scilab to calculate the eigenvalues of A or, equivalently, the roots of the characteristic polynomial. However, it is sometimes useful, and also of historical interest, to have an easy test for simple cases. Consider a general characteristic polynomial with real coefficients:

p(s) = s^n + a_{n−1}s^{n−1} + · · · + a1 s + a0.

Let us say that p(s) is stable if all its zeros have Re s < 0. The Routh-Hurwitz criterion is an algebraic test for p(s) to be stable, without having to calculate the zeros. Instead of studying the complete criterion, here are the results for n = 1, 2, 3:

(a) p(s) = s + a0: p(s) is stable iff a0 > 0.
(b) p(s) = s² + a1 s + a0: p(s) is stable iff a0, a1 are both positive.


(Figure 3.7: a circuit with voltage source u, resistor R, inductor L, and current i.)

(c) p(s) = s3 + a2 s2 + a1 s + a0 : p(s) is stable iff a0 , a1 , a2 are all positive and in addition

a1 a2 > a0 .
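The n = 3 test is easy to implement and to sanity-check against numerically computed roots. A Python sketch:

```python
import numpy as np

def routh3(a2, a1, a0):
    """Routh-Hurwitz test for p(s) = s^3 + a2 s^2 + a1 s + a0."""
    return a0 > 0 and a1 > 0 and a2 > 0 and a1*a2 > a0

def roots_stable(a2, a1, a0):
    """Brute-force check: all roots in the open left half-plane."""
    return bool(np.all(np.roots([1, a2, a1, a0]).real < 0))

# The two tests agree on a few sample polynomials
for coeffs in [(3, 3, 1), (1, 1, 2), (2, 1, 1), (1, -1, 1)]:
    assert routh3(*coeffs) == roots_stable(*coeffs)
```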

12. Magnetic levitation. Imagine an electromagnet suspending an iron ball—Figure 3.8. The ball is subject to gravity. A current passes through a coil, a magnetic field is produced, and the resulting magnetic force competes with the gravitational force on the ball. To balance the ball, the magnetic force must be exactly right, neither too strong nor too weak. As you can imagine, you could never adjust the voltage u by hand to balance the ball—feedback control is required. The signals are these: u is the voltage applied to the electromagnet, i is the current in the coil, y is the position down of the ball, and d is a possible disturbance force—for example, we may like to tap the ball while it's balanced. Let the input be the voltage u and the output the position y of the ball below the magnet. Then

L di/dt + Ri = u.

Also, it can be derived that the magnetic force on the ball has the form Ki²/y², K a constant. Thus

M ÿ = Mg + d − K i²/y².

Realistic numerical values are M = 0.1 kg, R = 15 ohms, L = 0.5 H, K = 0.0001 Nm²/A², g = 9.8 m/s². Substituting in these numbers gives the equations

0.5 di/dt + 15i = u
0.1 d²y/dt² = 0.98 + d − 0.0001 i²/y².

Define state variables x = (x1, x2, x3) = (i, y, ẏ). Then the nonlinear state model is ẋ = f(x, u, d), where

f(x, u, d) = ( −30x1 + 2u,  x3,  9.8 + 10d − 0.001 x1²/x2² ).


(Figure 3.9: block diagram of the linearized magnetic levitation system, with blocks 19.8/(s + 30) and 1/(s² − 1940), the disturbance ∆D entering through a gain of 10.)

Suppose we want to stabilize the ball at y = 1 cm, or 0.01 m. We need a linear model valid in the neighbourhood of that value. Solve for the equilibrium point (x0, u0, d0) where x20 = 0.01 and d0 = 0: setting f(x0, u0, 0) = 0 gives x30 = 0, x10 = √9800 · x20 ≈ 0.99 A, and u0 = 15x10 ≈ 14.8 V. Thus

∆ẋ = A∆x + B∆u + E∆d,  ∆y = C∆x,

where A equals the Jacobian of f with respect to x, evaluated at (x0, u0, d0), B equals the same except with respect to u, and E equals the same except with respect to d:

A = [ −30 0 0 ; 0 0 1 ; −0.002x1/x2²  0.002x1²/x2³  0 ]|_{(x0,u0)} = [ −30 0 0 ; 0 0 1 ; −19.8 1940 0 ]

B = [ 2 ; 0 ; 0 ],  C = [ 0 1 0 ],  E = [ 0 ; 0 ; 10 ].

The eigenvalues of A are −30, ±44.05, the units being s−1 . The corresponding time constants

are 1/30 = 0.033, 1/44.05 = 0.023 s. The first is the time constant of the electric circuit;

the second, the time constant of the magnetics. There are no complex conjugate eigenvalues

because there is no oscillatory motion in the open-loop system. Take Laplace transforms with

∆x(0) = 0:

∆Y(s) = − [19.8 / ((s + 30)(s² − 1940))] ∆U(s) + [10 / (s² − 1940)] ∆D(s).

Figure 3.10: Block diagram of magnetic levitation system with a feedback controller.

The block diagram is shown in Figure 3.9. The polynomial det(sI − A) is (s + 30)(s² − 1940) and therefore, since there is a zero (of the polynomial, and hence a pole of the transfer function) at √1940 in the right half-plane, the system is unstable. To stabilize, we could contemplate

using feedback, as in Figure 3.10. Can we design a controller (unknown box) so that the

resulting system is stable? Let us see if we can stabilize using a pure gain, that is, by making

the voltage variation ∆u(t) directly proportional to the ball’s position error ∆y(t). Take

∆u = F ∆y. Then

∆ẋ = A∆x + B∆u + E∆d
    = A∆x + BF∆y + E∆d
    = A∆x + BFC∆x + E∆d
    = (A + BFC)∆x + E∆d

and thus the original matrix A has changed to the matrix A + BF C. Substituting in for

A, B, C we get

A + BFC = [ −30, 0, 0 ; 0, 0, 1 ; −19.8, 1940, 0 ] + [ 2 ; 0 ; 0 ] F [ 0, 1, 0 ]
        = [ −30, 2F, 0 ; 0, 0, 1 ; −19.8, 1940, 0 ].

The characteristic polynomial of A + BFC is det(sI − A − BFC) = (s + 30)(s² − 1940) + 39.6F = s³ + 30s² − 1940s + 39.6F − 58200. It follows from the Routh-Hurwitz test that this can never be a stable polynomial, because the coefficient of s is negative no matter what F is. So, unfortunately, by Paragraph 8 the magnetic levitation system

cannot be stabilized by a pure gain controller; a more complex controller is required. We shall

leave this example now and study feedback control in more depth.
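The conclusions above can be verified numerically. The course tools are MATLAB and Scilab; the following is an equivalent Python/NumPy sketch, shown only as a cross-check: it confirms the open-loop eigenvalues and that A + BFC keeps an eigenvalue in the closed right half-plane for every gain F in a coarse, illustrative sweep.

```python
import numpy as np

# Linearized magnetic levitation model from the text, x = (i, y, ydot).
A = np.array([[-30.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [-19.8, 1940.0, 0.0]])
B = np.array([[2.0], [0.0], [0.0]])
C = np.array([[0.0, 1.0, 0.0]])

# Open-loop eigenvalues: -30 and +/- sqrt(1940), about +/- 44.05 (1/s).
print(np.linalg.eigvals(A))

# A pure gain Delta-u = F * Delta-y replaces A by A + B F C.  For every F in
# the sweep, at least one eigenvalue has nonnegative real part.
for F in np.linspace(-1000.0, 1000.0, 41):
    closed = A + B @ (F * C)
    assert np.linalg.eigvals(closed).real.max() >= 0.0
```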


3.7 Problems

1. Sketch the function

f(t) = t + 1 for 0 ≤ t ≤ 10,   f(t) = −2eᵗ for t > 10,

and find its Laplace transform F(s).

Solution Let

f1(t) = t + 1 for 0 ≤ t ≤ 10 and 0 for t > 10,   f2(t) = 0 for 0 ≤ t ≤ 10 and −2eᵗ for t > 10,

so that f = f1 + f2 , and therefore F = F1 + F2 . Since both F1 (s) and F2 (s) must converge,

the ROC of F equals the intersection of the ROCs of F1 and F2 . We have

F1(s) = ∫₀¹⁰ (t + 1)e^(−st) dt = ∫₀¹⁰ t e^(−st) dt + ∫₀¹⁰ e^(−st) dt.

Integrating the first term by parts,

F1(s) = [−(1/s) t e^(−st)]₀¹⁰ + (1/s) ∫₀¹⁰ e^(−st) dt + ∫₀¹⁰ e^(−st) dt
      = −(10/s) e^(−10s) + (1/s + 1) ∫₀¹⁰ e^(−st) dt
      = −(10/s) e^(−10s) + (1/s + 1)(1/s)(1 − e^(−10s)).

Also,

F2(s) = −∫₁₀^∞ 2e^((1−s)t) dt = 2e^(10(1−s)) · 1/(1 − s).

Thus

F(s) = −(10/s) e^(−10s) + (1/s + 1)(1/s)(1 − e^(−10s)) + 2e^(10(1−s)) · 1/(1 − s).
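The closed-form F(s) can be spot-checked against the defining integral. This Python/SciPy sketch (an illustration, not part of the course) compares the two for a few values of s with Re s > 1:

```python
import numpy as np
from scipy.integrate import quad

def f(t):
    # The piecewise signal of this problem.
    return t + 1 if t <= 10 else -2 * np.exp(t)

def F(s):
    # The closed-form transform derived above (ROC: Re s > 1).
    return (-(10 / s) * np.exp(-10 * s)
            + (1 / s + 1) * (1 / s) * (1 - np.exp(-10 * s))
            + 2 * np.exp(10 * (1 - s)) / (1 - s))

for s in (1.5, 2.0, 3.0):
    numeric, _ = quad(lambda t: f(t) * np.exp(-s * t), 0, 200,
                      points=[10], limit=200)
    assert abs(numeric - F(s)) < 1e-6
```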

2. (a) Find the inverse Laplace transform of G(s) = 1/(2s² + 1).

(b) Repeat for G(s) = 1/s².

(c) Repeat for G(s) = s/(2s² + 1).

Solution These can be looked up from the table. For example, the inverse Laplace transform of

G(s) = 1/(2s² + 1) = 0.5/(s² + 0.5) = 0.5/(s² + (√0.5)²)

is a sinusoid of frequency √0.5: g(t) = √0.5 sin √0.5 t. The inverse Laplace transform of G(s) = 1/s² is a ramp: g(t) = t.
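The pair in (a) can be checked numerically in Python (an illustration only): the Laplace transform of g evaluated at s = 1 should equal G(1) = 1/(2 + 1) = 1/3.

```python
import numpy as np
from scipy.integrate import quad

# g(t) = sqrt(0.5) sin(sqrt(0.5) t) should have transform 1/(2 s^2 + 1).
g = lambda t: np.sqrt(0.5) * np.sin(np.sqrt(0.5) * t)
val, _ = quad(lambda t: g(t) * np.exp(-1.0 * t), 0, 100, limit=400)
assert abs(val - 1.0 / 3.0) < 1e-6
```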

3. Explain why we don’t use Laplace transforms for the system

ÿ(t) + 2tẏ(t) − y(t) = u(t).

Solution The Laplace transform of the product tẏ(t) is not equal to the product of the Laplace

transforms of t and ẏ(t), and therefore the Laplace transform technique does not convert the

differential equation to an algebraic equation.

4. Find the final value of f (t) when

F(s) = (s² + 1)/(s(s³ + s + 1)).

5. What is the ROC of the Laplace transform when

F(s) = (s² + 1)/((s − 1)²(2s + 3))?

Solution The rightmost pole is at s = 1. Thus the ROC is the half-plane to the right of the point 1.

6. Consider a mass-spring system where M (t) is a known function of time. The equation of

motion in terms of force input u and position output y is

(d/dt)(M ẏ) = u − Ky

(i.e., rate of change of momentum equals sum of forces), or equivalently

M ÿ + Ṁ ẏ + Ky = u.

This equation has time-dependent coefficients. So there’s no transfer function G(s), hence no

impulse-response function g(t), hence no convolution equation y = g ∗ u. Find a linear state

model.

Solution For a state model, take

x = (y, ẏ).

The state equation is

ẋ = Ax + Bu,   A = [ 0, 1 ; −K/M, −Ṁ/M ],   B = [ 0 ; 1/M ].

7. For the system with the state model below, compute the transfer function by hand (from the state model) and by Scilab or MATLAB.

Solution

A = [ 0, 0, 1, 0 ; 0, 0, 0, 1 ; 0, 0, −1, 1 ; 0, 0, 2, −2 ],   B = [ 0 ; 0 ; 1 ; 0 ],   C = [ 1, 0, 0, 0 ].

The transfer function is

(s² + 2s)/(s⁴ + 3s³).

The numerator and denominator have a common factor. When the common factor is cancelled, the transfer function reduces to

(s + 2)/(s³ + 3s²) = (s + 2)/(s²(s + 3)).

The following MATLAB code computes the numerator and denominator:

A=[0 0 1 0;0 0 0 1;0 0 -1 1;0 0 2 -2];
B=[0 0 1 0]';
C=[1 0 0 0];
[num,den]=ss2tf(A,B,C,0,1);
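The same computation in Python with SciPy (equivalent to the MATLAB above, shown only as a cross-check):

```python
import numpy as np
from scipy.signal import ss2tf

A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, -1, 1],
              [0, 0, 2, -2]], dtype=float)
B = np.array([[0.0], [0.0], [1.0], [0.0]])
C = np.array([[1.0, 0.0, 0.0, 0.0]])
num, den = ss2tf(A, B, C, [[0.0]])
# Expect num ~ [0, 0, 1, 2, 0] (s^2 + 2s) and den ~ [1, 3, 0, 0, 0] (s^4 + 3s^3).
print(num, den)
```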

8. Find a state model (A, B, C, D) for the system with transfer function

G(s) = (−2s² + s + 1)/(s² − s − 4).

9. Consider the parallel connection of G1 and G2 , the LTI systems with transfer functions

G1(s) = 10/(s² + s + 1),   G2(s) = 1/(0.1s + 1).

(a) Find state models for G1 and G2 .

(b) Find a state model for the overall system.

10. The impedance of a 1-Farad capacitor is G(s) = 1/s. What is the inverse Laplace transform,

g(t)? What is the Fourier transform of g(t)? Is the Fourier transform equal to G(jω)? Hint:

No.

Solution From the Laplace transform table, the inverse Laplace transform of 1/s is the unit

step, g(t) = 1+ (t). The Fourier transform of g(t) happens to be

πδ(ω) + 1/(jω).


(You may have seen this in your Signals and Systems text.) Observe that merely setting s = jω in the Laplace transform does not produce the Fourier transform. The following is true, but beyond the scope of the course: If G_LT(s) is the Laplace transform of a signal g(t) that is zero for t < 0, if the ROC of G_LT(s) includes the imaginary axis, and if G_FT(jω) is the Fourier transform of g(t), then G_LT(jω) = G_FT(jω).

11. (a) A system is modeled by

ẋ = Ax + Bu,   y = Cx + Du,

and it is known that

u = 0, x(0) = (1, 0.5)   ⟹   y(t) = e^(−t) − 0.5e^(−2t)

and

u = 0, x(0) = (−1, 1)   ⟹   y(t) = −0.5e^(−t) − e^(−2t).

Find y(t) for u = 0, x(0) = (2, 0.5).

(b) Now suppose

u a unit step, x(0) = (1, 1)   ⟹   y(t) = 0.5 − 0.5e^(−t) + e^(−2t)

u a unit step, x(0) = (2, 2)   ⟹   y(t) = 0.5 − e^(−t) + 1.5e^(−2t).

Find y(t) when u is a unit step and x(0) = 0.

Solution This is an exercise in using linearity. In the first part, the three initial conditions are related by

(2, 0.5) = (5/3)(1, 0.5) − (1/3)(−1, 1).

Thus

y(t) = (5/3)(e^(−t) − 0.5e^(−2t)) − (1/3)(−0.5e^(−t) − e^(−2t)).

In the second part, 2(1, 1) − (2, 2) = 0 and 2u − u = u, so

y(t) = 2(0.5 − 0.5e^(−t) + e^(−2t)) − (0.5 − e^(−t) + 1.5e^(−2t)) = 0.5 + 0.5e^(−2t).


12. Is the system ẋ = Ax stable, where

A = [ 0, −2, −1 ; 0, −1, 0 ; 1, 0, 0 ]?

Solution The eigenvalues of A are −1, ±j. Thus, no.

13. Is there a real number a for which the system ẋ = Ax is stable, where

A = [ 0, 1 ; a, 0 ]?

Solution The characteristic polynomial is s² − a. By Routh-Hurwitz, the polynomial has a zero in Re s ≥ 0 for every a. Thus, no.

14. Consider the mass-spring system

M ÿ + Ky = 0,

where M > 0, K > 0. Take the natural state, x = (y, ẏ), and prove that the system is unstable.

Solution The characteristic polynomial is s² + (K/M). The roots are purely imaginary, not in the open left half-plane, so the system is unstable.

15. The transfer function of an LC circuit is G(s) = 1/(LCs² + 1). Show that the circuit is

unstable.

Solution In a state model the matrix A will have imaginary eigenvalues.
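Problems 12 through 15 can all be checked the same way: form A, compute eigenvalues, and test whether every eigenvalue has negative real part. A Python sketch (M = 1, K = 4 are assumed illustrative numbers for Problem 14):

```python
import numpy as np

def is_stable(A, tol=1e-9):
    # Stable iff all eigenvalues lie in the open left half-plane.
    # (tol guards against roundoff for eigenvalues on the imaginary axis.)
    return np.linalg.eigvals(A).real.max() < -tol

A12 = np.array([[0, -2, -1], [0, -1, 0], [1, 0, 0]], dtype=float)
assert not is_stable(A12)              # eigenvalues -1, +/- j

# Problem 14 with assumed numbers M = 1, K = 4; state x = (y, ydot).
A14 = np.array([[0.0, 1.0], [-4.0, 0.0]])
assert not is_stable(A14)              # eigenvalues +/- 2j, purely imaginary
```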

16. A rubber ball is tossed straight into the air, rises, then falls and bounces from the floor, rises,

falls, and bounces again, and so on. Let c denote the coefficient of restitution, that is, the

ratio of the velocity just after a bounce to the velocity just before the bounce. Thus 0 < c < 1.

Neglecting air resistance, show that there are an infinite number of bounces in a finite time

interval.

Hint: Assume the ball is a point mass. Let x(t) denote the height of the ball above the floor

at time t. Then x(0) = 0, ẋ(0) > 0. Model the system before the first bounce and calculate

the time of the first bounce. Then specify the values of x, ẋ just after the first bounce. And

so on.
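The hint leads to a geometric series: the first flight lasts 2v₀/g seconds and each bounce scales the launch speed by c, so flight k lasts (2v₀/g)cᵏ. A quick numerical illustration in Python (v0 and c are arbitrary assumed values):

```python
# Total time of infinitely many bounces is the geometric series sum
# (2 v0 / g) * (1 + c + c^2 + ...) = (2 v0 / g) / (1 - c), which is finite.
g, v0, c = 9.8, 5.0, 0.8           # assumed illustrative numbers

flight = 2 * v0 / g                # duration of the first flight
total = 0.0
for _ in range(10000):             # 10000 bounces, numerically "infinite"
    total += flight
    flight *= c                    # each bounce shortens the next flight

closed_form = (2 * v0 / g) / (1 - c)
assert abs(total - closed_form) < 1e-9
print(closed_form)                 # total time of all bounces, about 5.1 s
```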

17. The linear system ẋ = Ax can have more than one equilibrium point. Characterize the set of

equilibrium points. Give an example A for which there’s more than one.

Solution The equilibria are the vectors x satisfying Ax = 0. The set of such vectors is called

the nullspace, or kernel, of A. For the matrix

A = [ 0, 0 ; 0, 0 ],

every x in the plane is an equilibrium point.

18. [Circuit figure: source u, inductor L, resistor R, capacitors C1 and C2, output y.]

19. Consider a system modeled by the nonlinear state equation ẋ = f(x, u), where the state x has dimension 2, the input u has dimension 1, and where

x = (x1, x2),   f(x, u) = (x1² + x1x2 + 2u, −x1²x2 + u).

Linearize the system about an equilibrium point, ending up with an equation of the form

∆ẋ = A∆x + B∆u.

20. [Circuit figure: source u, inductor L, capacitor C, output y.]

(b) Is y bounded when u is the unit step?

(c) Give a bounded input u for which y is not bounded.

21. Refer to the standard feedback system. Find the final value of e(t) (if it exists) for this data:

r(t) = 1+(t),   d(t) = 0,   P(s) = (s + 2)/(s + 1),   C(s) = 1.

Repeat for

r(t) = e^(−t) 1+(t),   d(t) = 0,   P(s) = (s − 2)/(s + 1),   C(s) = 1.

Repeat for

r(t) = 0,   d(t) = t·1+(t),   P(s) = 1/(s(s + 3)),   C(s) = 10.

22. Consider the system with transfer function 1/(s + 1). Let the input be a periodic square wave

of minimum value 0 and peak value 1 (i.e., its DC value is 1/2), and of period 5 seconds. Find

the Fourier series of this input in the form

Σk ak e^(jωk t).

That is, find the frequencies ωk and the coefficients ak. By linearity and time-invariance, the output has the form

Σk bk e^(jωk t).
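A hedged Python sketch of this computation, assuming a 50% duty cycle (which gives the stated DC value of 1/2): the frequencies are ωk = 2πk/5, and the output coefficients are bk = G(jωk) ak.

```python
import numpy as np

T = 5.0                                  # period in seconds

def a(k):
    # Fourier coefficient of the 0/1 square wave, high on the first half period.
    if k == 0:
        return 0.5
    w = 2 * np.pi * k / T
    return (1 - np.exp(-1j * w * T / 2)) / (1j * w * T)

G = lambda s: 1.0 / (s + 1.0)            # the system 1/(s+1)
omega = {k: 2 * np.pi * k / T for k in range(-5, 6)}
b = {k: G(1j * omega[k]) * a(k) for k in omega}

# a_k vanishes for even k != 0 and equals -j/(pi k) for odd k.
assert abs(a(2)) < 1e-12
assert abs(a(1) - (-1j / np.pi)) < 1e-12
```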

Chapter 4

The Feedback Loop and Its Stability

Control systems are most often based on the principle of feedback, whereby the signal to be con-

trolled is compared to a desired reference signal and the discrepancy used to compute corrective

control action. When you go from your home to the university you use this principle continu-

ously without even thinking about it. To emphasize how effective feedback is, imagine you have

to program a mobile robot, with no vision capability and no GPS, and therefore no feedback, to

go open loop from your home to your university classroom; the program has to include all motion

instructions to the motors that drive the robot: Turn the right wheel 2.7285 radians, then turn

both wheels 6.522 radians, and so on. The program would be unthinkably long, and in the end the

robot would be way off target, as depicted in Figure 4.1, because the initial heading would be at

least slightly off.

In this chapter and the next we develop the basic theory and tools for feedback control analysis

and design in the frequency domain. “Analysis” means you already have a controller and you want

to study how good (or bad) it is; “design” of course means you want to design a controller to meet

certain specifications. The most fundamental specification is stability. Typically, good performance

requires high-gain controllers, yet typically the feedback loop will become unstable if the gain is too

high. The stability criterion we will study is the Nyquist criterion, dating from 1932. The Nyquist

criterion is a graphical technique involving the open-loop frequency response function, magnitude

and phase.

Figure 4.1: The robot's open-loop path from home misses the classroom.

CHAPTER 4. THE FEEDBACK LOOP AND ITS STABILITY 77

There are two main approaches to control analysis and design. The first, the one we are doing

in this course, is the older, so-called “classical” approach in the frequency domain. Specifications

are based on closed-loop gain, bandwidth, and stability margin. Design is done using Bode plots.1

The second approach, which is the subject of a second course on control, is in the time domain and

uses state-space models instead of transfer functions. Specifications may be based on closed-loop

poles. This second approach is known as the “state-space approach” or “modern control”, although

it dates from the 1960s and 1970s.

These two approaches are complementary. Classical control is appropriate for a single-input,

single-output plant, especially if it is open-loop stable. The state-space approach is appropriate for

multi-input, multi-output plants; it is especially powerful in providing a methodical procedure to

stabilize an unstable plant. Stability margin is very transparent in classical control and less so in the

state-space approach. Of course, simulation must accompany any design approach. For example,

in classical control you typically design for a desired bandwidth and stability margin; you test your

design by simulation; you evaluate, and then perhaps modify the stability margin, redesign, and

test again.

Beyond these two approaches is optimal control, where the controller is designed by minimizing a

mathematical function. In this context classical control extends to H∞ optimization and state-space

control extends to Linear-quadratic Gaussian (LQG) control.

For all these techniques of analysis and design there are computer tools, the most popular being

the Control System Toolbox of MATLAB.

In this section we use the cart-pendulum example to introduce the important concept of stability

of a feedback system.

1. The setup. Figure 4.2 shows the linearized cart-pendulum. The figure defines x1 , x2 . Now

define x3 = ẋ1 , x4 = ẋ2 . Take M1 = 1 kg, M2 = 2 kg, L = 1 m, and g = 9.8 m/s2 . Then the

state model is

ẋ = Ap x + Bp u,   Ap = [ 0, 0, 1, 0 ; 0, 0, 0, 1 ; 0, −19.6, 0, 0 ; 0, 29.4, 0, 0 ],   Bp = [ 0 ; 0 ; 1 ; −1 ].

Subscript p stands for “plant.” Let us designate the cart position as the only output: y = x1 .

Then

Cp = [ 1, 0, 0, 0 ].

¹One classical technique is called “root locus.” A root locus is a graph of the closed-loop poles as a function of a single real parameter, for example a controller gain. In root locus you try to get the closed-loop poles into a desired region (with good damping, say) by manipulating controller poles and zeros. You could easily pick this up if you wish using MATLAB’s rltool.

Figure 4.2: Cart-pendulum.

Figure 4.3: Pole-zero plot of P(s): zeros at ±3.13, poles at ±5.42 and a double pole at s = 0.

P(s) = Cp (sI − Ap)⁻¹ Bp
     = Cp [1/det(sI − Ap)] adj(sI − Ap) Bp
     = (s² − 9.8)/(s²(s² − 29.4)).

The poles and zeros of P (s) are shown in Figure 4.3. Having three poles in Re s ≥ 0, the plant

is quite unstable. The right half-plane zero doesn’t contribute to the degree of instability, but,

as we shall see, it does make the plant quite difficult to control. We can understand intuitively

why that is so: To go forward, the cart must initially go backward in order to tilt the pendulum

forward. The block diagram of the plant by itself is shown in Figure 4.4.
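The transfer function can be cross-checked from the state model. This Python/SciPy sketch mirrors what MATLAB's ss2tf would give (an illustration only):

```python
import numpy as np
from scipy.signal import ss2tf

Ap = np.array([[0, 0, 1, 0],
               [0, 0, 0, 1],
               [0, -19.6, 0, 0],
               [0, 29.4, 0, 0]], dtype=float)
Bp = np.array([[0.0], [0.0], [1.0], [-1.0]])
Cp = np.array([[1.0, 0.0, 0.0, 0.0]])

num, den = ss2tf(Ap, Bp, Cp, [[0.0]])
# Expect (s^2 - 9.8) / (s^4 - 29.4 s^2):
# num ~ [0, 0, 1, 0, -9.8], den ~ [1, 0, -29.4, 0, 0].
print(num, den)
```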

2. Wrong notion. It might occur to you to stabilize the cart-pendulum merely by canceling the

unstable poles, that is, by multiplying P (s) by a controller transfer function C(s) so that

the product P (s)C(s) has no right half-plane poles. This will not work; the standard

feedback loop will not be stable. Instead of canceling the unstable poles, we must move

them into the left half-plane by means of feedback. We will explain this very important point

during the remainder of this chapter.

Figure 4.4: Block diagram of the plant: input u, the force on the cart (N); output y, the position of the cart (m).

Figure 4.5: Feedback loop: the error e = r − y enters C(s), whose output drives P(s).

3. Correct way. Let us consider stabilizing the plant by feeding back the cart position, y, com-

paring it to a reference r, and setting the error r − y as the controller input, as shown in

Figure 4.5. Here C(s) is the transfer function of the controller to be designed. The signal d is

a possible disturbance force on the cart.

4. A stabilizing controller. Consider the controller

C(s) = (10395s³ + 54126s² − 13375s − 6687)/(s⁴ + 32s³ + 477s² − 5870s − 22170).

This controller was obtained by a more advanced method than is considered in this course. A

state realization of C(s) is

Ac = [ 0, 1, 0, 0 ; 0, 0, 1, 0 ; 0, 0, 0, 1 ; 22170, 5870, −477, −32 ],   Bc = [ 0 ; 0 ; 0 ; 1 ],

Cc = [ −6687, −13375, 54126, 10395 ].

The controller itself, C(s), is unstable, as is P (s). But when the controller and plant are

connected in feedback, then, as long as the loop remains unbroken, the standard feedback

loop is stable. If the pendulum starts to fall, the controller causes the cart to move, in

the right direction, to make the pendulum tend to come vertical again. You’re invited to simulate the standard feedback loop; for example, let r be the signal shown in Figure 4.6. This corresponds to a command that the cart move right 0.1 m for 5 seconds, then return to its original position. Figure 4.7 is a plot of the cart position x1 versus t. The cart moves rather wildly as it tries to balance the pendulum—it is not a good controller design—but it does stabilize.

Figure 4.6: The reference signal r(t): 0.1 m for 5 seconds, then back to zero.

Figure 4.7: Cart position x1 (m) versus t (s).

5. Unstable controller. We mentioned that our controller C(s) is open-loop unstable. It can be

proved (it is beyond the scope of this course) that every controller that stabilizes this P (s)

is itself unstable. The general result² is this: There exists a stable C(s) that stabilizes an unstable P(s) if and only if this parity test holds: between every pair of real zeros of P(s) in Re s ≥ 0 there exists an even number of poles. For the cart-pendulum,

P(s) = (s² − 9.8)/(s²(s² − 29.4)).

²A proof can be found in Feedback Control Theory, by J. Doyle, B. Francis, and A. Tannenbaum.

We count the point s = +∞ as a real zero for this result, so the real zeros in Re s ≥ 0 are √9.8 and +∞. Between these two zeros there is just one pole, √29.4, not an even number of poles. Thus P(s) fails the parity test.

6. Feedback stability. Let us define what it means for the standard feedback loop in Figure 4.5

to be stable. For this, we choose to view the system as having inputs (r, d) and outputs (e, u),

although, as we shall see, these choices don’t affect the definition of stability. Take the states

of P (s) and C(s) to be, respectively, xp and xc , and then take the state of the combined

standard feedback loop to be xcl = (xp , xc ). You can derive this model:

ẋcl = Acl xcl + Bcl (r ; d)   (4.1)

(e ; u) = Ccl xcl + (r ; d)   (4.2)

Acl = [ Ap, Bp Cc ; −Bc Cp, Ac ],   Bcl = [ 0, Bp ; Bc, 0 ],   Ccl = [ −Cp, 0 ; 0, Cc ].

According to our definition in Section 3.6, Paragraph 8, this system is stable if and only if

the eigenvalues of Acl all lie in the open left half-plane. You can check this is true using this

MATLAB code:

B_c=[0;0;0;1];
C_c=[-6687 -13375 54126 10395];
A_c=[0 1 0 0;0 0 1 0;0 0 0 1;22170 5870 -477 -32];
A_p=[0 0 1 0;0 0 0 1;0 -19.6 0 0;0 29.4 0 0];
B_p=[0;0;1;-1];
C_p=[1 0 0 0];
A_cl=[A_p B_p*C_c; -B_c*C_p A_c];
eig(A_cl)

7. Summary. The stability of the standard feedback loop is characterized by the eigenvalues of

the closed-loop A-matrix, Acl . Since stability depends only on the Acl matrix, this confirms

that the choice of inputs and outputs in Paragraph 6 was not important. In fact we could

have set r, d to zero and derived the equation ẋcl = Acl xcl .
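The same eigenvalue check in Python (a sketch equivalent to the MATLAB above):

```python
import numpy as np

Ap = np.array([[0, 0, 1, 0], [0, 0, 0, 1],
               [0, -19.6, 0, 0], [0, 29.4, 0, 0]], dtype=float)
Bp = np.array([[0.0], [0.0], [1.0], [-1.0]])
Cp = np.array([[1.0, 0.0, 0.0, 0.0]])
Ac = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1],
               [22170, 5870, -477, -32]], dtype=float)
Bc = np.array([[0.0], [0.0], [0.0], [1.0]])
Cc = np.array([[-6687.0, -13375.0, 54126.0, 10395.0]])

Acl = np.block([[Ap, Bp @ Cc], [-Bc @ Cp, Ac]])
eigs = np.linalg.eigvals(Acl)
print(eigs)
assert eigs.real.max() < 0      # all closed-loop eigenvalues in the open LHP
```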

We continue with the block diagram in Figure 4.8.

Figure 4.8: Standard feedback loop.

1. Notation. The notation is this: P (s) is the plant transfer function, C(s) is the controller

transfer function, r(t) is a reference (or command) input, e(t) is the tracking error, d(t) is

a disturbance, u(t) is the plant input, and y(t) is the plant output. From now on the plant

and controller are single-input, single-output. That is, P (s) and C(s) are not matrices. We

shall assume throughout that P (s), C(s) are rational, C(s) is proper, and P (s) is strictly

proper.

2. From now on. We are going to emphasize transfer function models instead of state models.

But it is important to understand that the two models are more-or-less equivalent.

3. Closed-loop transfer functions. We now derive the transfer function model for the standard

feedback loop. As in the preceding section, we choose to view the system as having inputs

(r, d) and outputs (e, u). Write the equations in the Laplace domain for the outputs of the

summing junctions:

E = R − PU

U = D + CE.

[ 1, P ; −C, 1 ] (E ; U) = (R ; D).   (4.3)

Write

P = Np/Dp,   C = Nc/Dc.   (4.4)

There is no reason why Np and Dp would have common factors; likewise for Nc and Dc . But

it could happen that Np and Dc have a common factor, or that Nc and Dp have a common

factor. Substitute (4.4) into (4.3) and clear fractions:

[ Dp, Np ; −Nc, Dc ] (E ; U) = [ Dp, 0 ; 0, Dc ] (R ; D).

Solve:

(E ; U) = [1/(Dp Dc + Np Nc)] [ Dc, −Np ; Nc, Dp ] [ Dp, 0 ; 0, Dc ] (R ; D)
        = [1/(Dp Dc + Np Nc)] [ Dp Dc, −Dc Np ; Dp Nc, Dp Dc ] (R ; D).   (4.5)

On the other hand, the state model for the same system has the form (see (4.1) and (4.2))

ẋcl = Acl xcl + Bcl (r ; d)

(e ; u) = Ccl xcl + Dcl (r ; d)

from which

(E ; U) = [Ccl [1/det(sI − Acl)] adj(sI − Acl) Bcl + Dcl] (R ; D)
        = [1/det(sI − Acl)] [Ccl adj(sI − Acl) Bcl + det(sI − Acl) Dcl] (R ; D).

Comparing this with (4.5), one can show that the two polynomials

Dp Dc + Np Nc,   det(sI − Acl)

are equivalent, in that they have the same zeros. Therefore we are justified in calling Dp Dc +

Np Nc , product of denominators plus product of numerators, the closed-loop characteristic

polynomial.

4. Theorem. The closed-loop feedback system is stable if and only if the zeros of Dp Dc + Np Nc

are all in the open left half-plane.

5. Examples. Consider

P(s) = 1/(s − 1),   C(s) = K.

The plant is open-loop unstable, having a pole at s = 1. The characteristic polynomial of the

feedback system is s − 1 + K = s + K − 1. Thus the feedback system is stable if and only if

K > 1. As another example take

P(s) = 1/(s² − 1),   C(s) = (s − 1)/(s + 1).

The plant has an unstable pole at s = 1, which the controller cancels. But the closed-loop characteristic polynomial is

(s² − 1)(s + 1) + (s − 1) = (s − 1)((s + 1)² + 1),

which still has a zero at s = 1, in the right half-plane, and therefore the feedback system is not stable.
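Both characteristic polynomials in these examples can be checked numerically (a Python sketch):

```python
import numpy as np

# Example 1: P = 1/(s-1), C = K.  Characteristic polynomial s + (K - 1).
assert np.roots([1, 0.5 - 1])[0] > 0       # K = 0.5: unstable
assert np.roots([1, 2.0 - 1])[0] < 0       # K = 2:   stable

# Example 2: P = 1/(s^2 - 1), C = (s-1)/(s+1).
# DpDc + NpNc = (s^2 - 1)(s + 1) + (s - 1) = s^3 + s^2 - 2.
char_poly = np.polyadd(np.polymul([1, 0, -1], [1, 1]), [1, -1])
roots = np.roots(char_poly)
# The cancelled plant pole s = 1 is still a closed-loop pole.
assert min(abs(roots - 1.0)) < 1e-6
```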

6. Robustness. Consider Figure 4.8 and assume the closed loop is stable. If we slightly perturb

the coefficients in the numerator or denominator, or both, of P (s), the closed loop will still be

stable. This follows from the mathematical fact that the zeros of a polynomial are continuous

functions of the coefficients of the polynomial. So if all the zeros satisfy Re s < 0, they

continue to do so when the coefficients are perturbed. We say that feedback stability is a

robust property of a feedback loop.

7. Important point. An unstable plant cannot be stabilized by cancelling unstable poles. If P (s)

has a pole at s = p with non-negative real part, and if C(s) cancels that pole, the characteristic

polynomial will still have a zero at s = p. Unstable plants can be stabilized only by moving

unstable poles into the left half-plane.

8. Solving for E and U. The equation

[ 1, P ; −C, 1 ] (E ; U) = (R ; D)

becomes

(E ; U) = [ 1, P ; −C, 1 ]⁻¹ (R ; D).

The solution is

(E ; U) = [ 1/(1 + PC), −P/(1 + PC) ; C/(1 + PC), 1/(1 + PC) ] (R ; D).

So, for example, the transfer function from D to E is −P/(1 + P C).

9. General case. How to find closed-loop transfer functions in general: Figure 4.9 shows a

somewhat complex block diagram. Suppose we want to find the transfer function from r to y.

In the Laplace domain the block diagram is a graphical representation of algebraic equations.

We merely write and then solve the equations. First, label using the symbol xi (in the time

domain) the outputs of the summing junctions. Then write the equations at the summing

junctions:

X1 = R − P2 P5 X2

X2 = P1 X1 − P2 P4 X2

Y = P3 X1 + P2 X2 .

Assemble as

[ 1, P2P5, 0 ; −P1, 1 + P2P4, 0 ; −P3, −P2, 1 ] (X1 ; X2 ; Y) = (R ; 0 ; 0).

Figure 4.9: Block diagram with blocks P1 through P5, summing-junction outputs x1 and x2, input r, and output y.

By Cramer’s rule,

Y = det[ 1, P2P5, R ; −P1, 1 + P2P4, 0 ; −P3, −P2, 0 ] / det[ 1, P2P5, 0 ; −P1, 1 + P2P4, 0 ; −P3, −P2, 1 ].

Simplify:

Y = GR,   G = [P1P2 + P3(1 + P2P4)] / (1 + P2P4 + P1P2P5).
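The formula for G can be sanity-checked at a single made-up numeric operating point, treating each Pi as a number and solving the assembled 3×3 linear system (a Python sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
P1, P2, P3, P4, P5 = rng.standard_normal(5)

# Summing-junction equations assembled as a linear system in (X1, X2, Y).
M = np.array([[1.0, P2 * P5, 0.0],
              [-P1, 1.0 + P2 * P4, 0.0],
              [-P3, -P2, 1.0]])
R = 1.0
X1, X2, Y = np.linalg.solve(M, [R, 0.0, 0.0])

G = (P1 * P2 + P3 * (1.0 + P2 * P4)) / (1.0 + P2 * P4 + P1 * P2 * P5)
assert abs(Y - G * R) < 1e-9
```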

In this section we study Figure 4.10 and the performance requirement that the plant output y(t) should follow a specified reference signal r(t). Of course there is the additional requirement of stability of the feedback loop—that is always a requirement.

1. Cruise control. In cruise control in a car, you set the reference speed, say 100 km/h, and a

controller regulates the speed to a prescribed setpoint. How does this work? The answer lies

in the final-value theorem. As a simple example, suppose

P(s) = 1/(s + 1),   C(s) = 1,   R(s) = 100/s.

Figure 4.10: Unity-feedback loop with controller C(s) and plant P(s).

The closed-loop characteristic polynomial is s + 2 and hence the feedback loop is stable. The

transfer function from R to E is

1/(1 + P(s)C(s)) = (s + 1)/(s + 2).

Thus

E(s) = 100(s + 1)/(s(s + 2)).

Hence the steady-state tracking error is limt→∞ e(t) = 50. If we change the controller to

C(s) = 50, then limt→∞ e(t) = 100/51 ≈ 2. So increasing the controller gain from 1 to 50

reduces the tracking error from 50 to 2. High controller gain seems to be a good thing as far

as the tracking error is concerned. However high gain over a wide bandwidth is expensive. In

this example we need high gain only at DC since the input is a constant (i.e., a DC signal). Let

us therefore try the integral controller C(s) = 1/s, which has infinite gain at DC. This should

give zero steady-state error if only the feedback loop is stable. The closed-loop characteristic

polynomial is now

s(s + 1) + 1 = s² + s + 1.

This polynomial is stable, so the final-value theorem applies:

lim_{t→∞} e(t) = lim_{s→0} sE(s)
               = lim_{s→0} s · [1/(1 + P(s)C(s))] · (100/s)
               = lim_{s→0} 100 / (1 + 1/(s(s + 1)))
               = lim_{s→0} 100 s(s + 1)/(s(s + 1) + 1)
               = 0.

Notice that R(s) has a pole at s = 0 and so we can think of R(s) as being generated by an

integrator. We are endowing C(s) with an integrator too. This latter integrator is called an

internal model of the former.
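The zero steady-state error can also be seen in simulation. This Python/SciPy sketch computes the tracking error for the integral controller:

```python
import numpy as np
from scipy.signal import lti

# With C = 1/s and P = 1/(s+1): E/R = 1/(1 + PC) = s(s+1)/(s^2 + s + 1),
# so for r = 100 * unit step, e(t) is 100 times the step response below.
E_over_R = lti([1, 1, 0], [1, 1, 1])
t = np.linspace(0, 30, 3001)
t, e_unit = E_over_R.step(T=t)
e = 100.0 * e_unit
assert abs(e[-1]) < 1e-3          # e(t) -> 0: perfect asymptotic tracking
```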

Figure 4.11: An inverted pendulum balanced on a hand: hand position u, pendulum angle θ, length L, mass M.

2. Ramp input. Now take

C(s) = 1/s,   P(s) = (2s + 1)/(s(s + 1))

and take r to be a ramp, r(t) = r0 t. Then R(s) = r0 /s2 and so

E(s) = r0 (s + 1)/(s³ + s² + 2s + 1).

Again e(t) → 0; perfect tracking of a ramp. Here C(s) and P (s) together provide the internal

model, a double integrator.

3. Generalization. Assume P (s) is strictly proper, C(s) is proper, and the feedback system is

stable. If P (s)C(s) contains an internal model of the unstable part of R(s), then perfect

asymptotic tracking occurs, i.e., e(t) → 0.

4. Example. Consider this problem:

R(s) = r0/(s² + 1),   P(s) = 1/(s + 1).

Design C(s) to achieve perfect asymptotic tracking of the sinusoid r(t), as follows. From the

theorem, we should try something of the form

C(s) = [1/(s² + 1)] C1(s),

that is, we embed an internal model in C(s), and allow an additional factor to achieve feedback

stability. You can check that C1 (s) = s works. Notice that we have effectively created a notch

filter from R to E, a notch filter with zeros at s = ±j.
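Checking that C1(s) = s works (a Python sketch): the closed-loop characteristic polynomial is (s + 1)(s² + 1) + s, and all of its roots lie in the open left half-plane.

```python
import numpy as np

# Dp Dc + Np Nc with P = 1/(s+1) and C = s/(s^2+1):
char_poly = np.polyadd(np.polymul([1, 1], [1, 0, 1]), [1, 0])
print(char_poly)                           # s^3 + s^2 + 2s + 1
assert np.roots(char_poly).real.max() < 0  # the feedback loop is stable
```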

5. Example. An inverted pendulum balanced on your hand—see Figure 4.11. The equation is

ü + Lθ̈ = Mgθ.

Thus

s²U + s²Lθ = Mgθ,

and the transfer function from U to θ is

−s²/(Ls² − Mg).

Figure 4.12: Perfect open-loop tracking, with the open-loop controller C(s) = P(0)⁻¹.

This plant transfer function has a zero at s = 0, that is, its DC gain is zero. Explain why this

is true: It is impossible to balance the pendulum at θ = 10 degrees.

6. The internal model principle. Consider a plant with transfer function P (s) and having the two

properties that, first, all poles have negative real part and, second, that the DC gain is nonzero.

Let the input to P (s) and the output from P (s) be denoted u(t) and y(t). Suppose we want

that y(t) should asymptotically track a constant reference signal r(t). If r(t) can be directly

measured, that is, we can put a sensor on it, then the open-loop controller C(s) = P(0)⁻¹ with

input r(t) and output u(t) works perfectly. See Figure 4.12. This is open-loop control. So it

seems that an integrator (the internal model) is not necessary after all. But wait: The solution

in Figure 4.12 requires perfect knowledge of the DC gain of the plant. This is difficult if not

impossible to get. A controller C(s) is said to be non-robust if it doesn’t work as intended

when the plant model is perturbed slightly. By contrast, the feedback solution in Figure 4.10

with C(s) having an integrator—an internal model—is robust, because P (s) can be perturbed

slightly, or need not be modelled perfectly accurately. Indeed, as long as the feedback loop is

stable, it will remain so under sufficiently small perturbation of the coefficients of P (s). The

internal model principle is that only a feedback controller with an internal model is a robust

solution to the tracking problem.

The Nyquist criterion is a test for a feedback loop to be stable. The criterion is the greatest thing

since sliced bread and is based on the principle of the argument from complex function theory. The

word “argument” refers to the angle of a complex number.

1. The Principle of the argument. The principle of the argument involves two things: a curve

in the complex plane and a transfer function. Consider, as in Figure 4.13, a closed path (or

curve or contour) in the s-plane, with no self-intersections and with negative, i.e., clockwise

(CW) orientation. We name the path D (because later it is going to have the shape of that

letter): Now let G(s) be a rational function that does not have a zero or a pole on the curve

D. For every point s in the complex plane, G(s) is a point in the complex plane. We draw

two copies of the complex plane to avoid clutter, s in one called the s-plane, G(s) in the other

called the G-plane. As s goes once around D from any starting point, the point G(s) traces out a closed curve denoted G, the image of D under G(s).

Figure 4.13: The path D.

Figure 4.14: Since D encircles the zero of G(s), G encircles the origin once CW.

2. Example. Begin with G(s) = s − 1. The curves could be as in Figure 4.14. Notice that G is just D shifted to the left one unit. Since D encircles one zero of G(s), G encircles the origin once CW. Let us keep G(s) = s − 1 but change D as in Figure 4.15. Now D encircles no zero of

G(s) and G has no encirclements of the origin. Now consider instead

G(s) = 1/(s − 1).

The angle of G(s) equals the negative of the angle of s − 1, that is, ∠G(s) = −∠(s − 1).

From this we get that if D encircles the pole CW, then G encircles the origin once counter-

clockwise (CCW). Based on these examples, we now see the relationship between the number

of poles and zeros of G(s) in D and the number of times G encircles the origin.

Figure 4.15: D does not encircle the zero of G(s).

3. Principle of the Argument. Suppose G(s) has no poles or zeros on D, but D encloses n poles

and m zeros of G(s). Then G encircles the origin exactly n − m times CCW.

4. Proof sketch. Write

G(s) = K ∏i (s − zi) / ∏i (s − pi)

with K a real gain, {zi} the zeros, and {pi} the poles. Then for every s on D,

∠G(s) = ∠K + Σi ∠(s − zi) − Σi ∠(s − pi).

If zi is enclosed by D, the net change in ∠(s − zi) is −2π; otherwise the net change is 0. Hence the net change in ∠G(s) equals m(−2π) − n(−2π), which equals (n − m)2π.

5. Nyquist contour. The special D we use for the Nyquist contour is shown in Figure 4.16. Then

G is called the Nyquist plot of G(s).

6. Leading up. If G(s) has no poles or zeros on D, then the Nyquist plot encircles the origin exactly n − m times CCW, where n equals the number of poles of G(s) in Re s > 0 and m

equals the number of zeros of G(s) in Re s > 0. From this follows this important observation:

Suppose G(s) has no poles on D. Then G(s) has no zeros in Re s ≥ 0 iff G doesn’t pass

through the origin and encircles it exactly n times CCW, where n equals the number of poles

in Re s > 0. Thus we have a graphical test for a rational function not to have any zeros in

the right half-plane.

7. Clarifying idea. Note that G(s) has no poles on D iff G(s) is proper and G(s) has no poles on

the imaginary axis; and G(s) has no zeros on D iff G(s) is not strictly proper and G(s) has

no zeros on the imaginary axis.

8. Indenting. In our application, if G(s) actually does have poles on the imaginary axis, we have

to indent around them. You can indent either to the left or to the right; we shall always

indent to the right. See Figure 4.17. Note that we are indenting around poles of G(s) on the

imaginary axis. Zeros of G(s) are irrelevant at this point.

Now we apply the principle of the argument to derive the Nyquist criterion.


1. Setup. The setup is the standard feedback loop with plant P (s) and controller KC(s), where

K is a real gain and C(s) and P (s) are rational transfer functions. We’re after a graphical test

for stability involving the Nyquist plot of P (s)C(s). We could also have a transfer function

F (s) in the feedback path, but we’ll take F (s) = 1 for simplicity. We allow a variable gain K

to make the criterion a little more useful; with this assumption we need to draw the Nyquist

plot only once even though K is not necessarily fixed. The assumptions are these:

(a) P (s), C(s) are proper, with at least one of them strictly proper.

(b) The product P (s)C(s) has no pole-zero cancellations in Re s ≥ 0. We have to assume

this because the Nyquist criterion doesn’t test for it, and such cancellations would make

the feedback system not stable, as we saw before.

(c) The gain K is nonzero. This is made only because we are going to divide by K at some

point.


2. The Nyquist criterion. Let n denote the number of poles of P (s)C(s) in Re s > 0. Construct

the Nyquist plot of P (s)C(s), indenting to the right around poles on the imaginary axis. Then

the feedback system is stable iff the Nyquist plot doesn't pass through −1/K and encircles it exactly n times CCW.

3. Proof. Write P(s) = Np(s)/Dp(s) and C(s) = Nc(s)/Dc(s) as coprime polynomial fractions. The closed-loop characteristic polynomial is Dp Dc + KNp Nc,

and therefore the feedback loop is stable iff this polynomial has no zeros in Re s ≥ 0. Since

we have assumed no unstable pole-zero cancellations, you can show that the following two

functions have the same right half-plane zeros:

Dp Dc + KNp Nc , 1 + KP (s)C(s).

Define G(s) = 1 + KP(s)C(s); then the no-zeros condition on the characteristic polynomial is equivalent to the condition that G(s) has no zeros in Re s ≥ 0. So we are going to apply the principle of the argument to get a test for G(s) to have no RHP zeros. Note that G(s) and P(s)C(s) have the same poles in Re s ≥ 0, so G(s) has precisely n there. Since D indents around poles of G(s) on the imaginary axis and since G(s) is proper, G(s) has no poles on D. Thus by Paragraph 6 in the preceding section, the feedback system is stable iff the Nyquist plot of G(s) doesn't pass through 0 and encircles it exactly n times CCW. Since P(s)C(s) = (1/K)G(s) − 1/K, this latter condition is equivalent to: the Nyquist plot of P(s)C(s) doesn't pass through −1/K and encircles it exactly n times CCW.

4. Recap. The logic of the proof, in brief:

• The feedback loop is stable iff the characteristic polynomial has no right half-plane (RHP)

zeros.

• Because of the assumption about no RHP pole-zero cancellations, this is equivalent to

the condition that G = 1 + KP C has no RHP zeros.

• The principle of the argument says G encircles the origin n − m times CCW.

• Therefore m = 0, i.e., G has no RHP zeros, iff G encircles the origin n times CCW.

• Therefore feedback stability holds iff the Nyquist plot of P C encircles the point −1/K a

total of n times CCW.

5. Aside. There is a subtle point that may have occurred to you concerning the Nyquist criterion.

Consider the plant transfer function P (s). If it has an unstable pole, then the ROC of the

Laplace transform does not include the imaginary axis. And yet the Nyquist plot involves

the function P (jω), which is a Laplace transform evaluated on the imaginary axis, which is

outside the ROC. This may seem to be a contradiction, but it is not. In mathematical terms

we have employed analytic continuation to extend the function P (s) to be defined outside the

region of convergence of the Laplace integral.

Figure 4.18: The contour D (left) and the Nyquist plot of PC (right); corresponding points are labelled A, B, C.

In this section you’ll learn how to draw Nyquist plots and how to apply the Nyquist criterion.

1. Example. The first example is

PC(s) = 1/(s + 1)².

Figure 4.18 shows the curve D and the Nyquist diagram. The curve D is divided into segments

whose ends are the points A, B, C. We map D one segment at a time. The points in the right-

hand plot are also labelled A, B, C, so don’t be confused: the left-hand A is mapped by P C

into the right-hand A. The first segment is from A to B, that is, the positive imaginary axis.

On this segment, s = jω and you can derive from

PC(jω) = 1/(jω + 1)²

that

Re PC(jω) = (1 − ω²)/((1 − ω²)² + (2ω)²),  Im PC(jω) = −2ω/((1 − ω²)² + (2ω)²).

As s goes from A to B, PC(s) traces out the curve in the lower half-plane. You can see this by

noting that the imaginary part of P C(jω) remains negative, while the real part changes sign

once, from positive to negative. This segment of the Nyquist plot starts at the point 1, since

PC(0) = 1. As s approaches B, that is, as ω goes to +∞, PC(jω) becomes approximately

1/(jω)² = −1/ω²,

which is a negative real number going to 0. This is also consistent with the fact that P C is

strictly proper. Next, the semicircle from B to C has infinite radius and hence is mapped

by P C to the origin. Finally, the line segment from C to A in the left-hand graph is the


complex conjugate of the already-drawn segment from A to B. So the same is true in the right-hand graph. (Why? Because PC has real coefficients, and therefore PC(s̄) is the complex conjugate of PC(s).)

In conclusion, we’ve arrived at the closed path in the right-hand graph—the Nyquist plot. In

this example, the curve has no self-intersections.
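These qualitative features can be checked numerically. The sketch below (plain Python, using the Re/Im formulas derived above) confirms that the image of the positive imaginary axis stays in the lower half-plane and that the real part changes sign once, at ω = 1.

```python
# Re and Im of PC(jw) = 1/(jw+1)^2, as derived above
def re_pc(w):
    return (1 - w**2) / ((1 - w**2)**2 + (2 * w)**2)

def im_pc(w):
    return -2 * w / ((1 - w**2)**2 + (2 * w)**2)

ws = [0.01 * k for k in range(1, 10001)]       # w = 0.01 .. 100 rad/s
assert all(im_pc(w) < 0 for w in ws)           # lower half-plane for w > 0
assert re_pc(0.5) > 0 and re_pc(2.0) < 0       # Re changes sign once, at w = 1
print(re_pc(0.01), im_pc(100.0))               # starts near 1, ends near 0
```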

Now we are ready to apply the Nyquist criterion and determine for what range of K the

feedback system is stable. The transfer function P C has no poles inside D and therefore

n = 0. So the feedback system is stable iff the Nyquist plot encircles −1/K exactly 0 times

CCW; that is, it does not encircle it. Thus the conditions for stability are −1/K < 0 or −1/K > 1; that is, K > 0 or −1 < K < 0; that is, K > −1, K ≠ 0. The condition K ≠ 0 is ruled out by our initial assumption (which we made only because we were going to divide

by K). But now, at the end of the analysis, we can check directly that the feedback system

actually is stable for K = 0. So finally the condition for stability is K > −1. You can

readily confirm this by applying Routh-Hurwitz to the closed-loop characteristic polynomial,

(s + 1)² + K.
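The Routh-Hurwitz confirmation can also be done by brute force: the closed-loop poles are the roots −1 ± √(−K) of (s + 1)² + K, and a quick Python check shows they are in the open left half-plane exactly when K > −1.

```python
import cmath

def stable(K):
    # closed-loop poles: roots of (s+1)^2 + K = 0
    r = cmath.sqrt(complex(-K))
    roots = [-1 + r, -1 - r]
    return all(z.real < 0 for z in roots)

assert all(stable(K) for K in (-0.99, -0.5, 0.0, 1.0, 100.0))
assert not any(stable(K) for K in (-1.0, -1.01, -2.0))
print("stable exactly for K > -1")
```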

2. Example. The second example is

PC(s) = (s + 1)/(s(s − 1)),

for which

Re PC(jω) = −2/(ω² + 1),  Im PC(jω) = (1 − ω²)/(ω(ω² + 1)).

The plots are shown in Figure 4.19. Since there is a pole at s = 0 we have to indent around it. As stated before, we will always indent to the right. Let us look at A, the point jε, ε small and positive. We have

Re PC(jε) = −2/(ε² + 1) ≈ −2,  Im PC(jε) = (1 − ε²)/(ε(ε² + 1)) ≈ +∞.

Figure 4.19: The indented contour D (left) and the Nyquist plot of (s + 1)/(s(s − 1)) (right).

This point is shown on the right-hand graph. The mapping of the curve segment ABCD

follows routinely. Finally, the segment DA. On this semicircle, s = εe^{jθ} and θ goes from −π/2 to +π/2; that is, the direction is CCW. We have

PC(εe^{jθ}) ≈ −1/(εe^{jθ}) = −(1/ε)e^{−jθ} = (1/ε)e^{j(π−θ)}.

Since θ is increasing from D to A, the angle of P C, namely π − θ decreases, and in fact

undergoes a net change of −π. Thus on the right-hand graph, the curve from D to A is a

semicircle of infinite radius and the direction is CW. There is another, quite nifty, way to see

this. Imagine a particle making one round trip on the D contour in the left-hand graph. As

the particle goes from C to D, at D it makes a right turn. Exactly the same thing happens

in the right-hand graph: Moving from C to D, the particle makes a right turn at D. Again,

on the left-hand plot, the segment DA is a half-revolution turn about a pole of multiplicity

one, and consequently, the right-hand segment DA is a half-revolution too, though in the opposite direction.

Now to apply the Nyquist criterion, n = 1. Therefore we need exactly 1 CCW encirclement of the point −1/K. Thus feedback stability holds iff −1 < −1/K < 0; equivalently, K > 1.
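The range K > 1 agrees with Routh-Hurwitz: the characteristic polynomial is s(s − 1) + K(s + 1) = s² + (K − 1)s + K, stable iff K − 1 > 0 and K > 0. A quadratic-root check in Python:

```python
import cmath

def stable(K):
    # characteristic polynomial s(s-1) + K(s+1) = s^2 + b s + c
    b, c = K - 1.0, float(K)
    d = cmath.sqrt(complex(b * b - 4 * c))
    roots = ((-b + d) / 2, (-b - d) / 2)
    return all(z.real < 0 for z in roots)

assert all(stable(K) for K in (1.01, 2, 10, 100))
assert not any(stable(K) for K in (-1, 0, 0.5, 0.99, 1.0))
print("stable exactly for K > 1")
```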

3. Example. The third example is

PC(s) = 1/((s + 1)(s² + 1)),

for which

Re PC(jω) = 1/(1 − ω⁴),  Im PC(jω) = −ω/(1 − ω⁴).

You should be able to draw the graphs based on the preceding two examples. See Figure 4.20.

To apply the criterion, we have n = 0. Feedback stability holds iff −1/K > 1; equivalently,

−1 < K < 0.

Figure 4.20: Nyquist plot of 1/((s + 1)(s² + 1)).

4. Rationale. You might wonder why you have to learn to draw Nyquist plots instead of letting

MATLAB draw them. The reason is simple: MATLAB cannot draw them. More specifically,

it cannot draw them near an open-loop pole, and there frequently is an open-loop pole on the

imaginary axis, for example with integral control.

In this section we see how insightful Nyquist plots can be.

1. Accuracy. First we clarify that the Nyquist plots in the preceding section are not accurate—

they were drawn for visual clarity. For example, Figure 4.21 is the accurate Nyquist plot of

1/(s + 1)2 shown inaccurately in Figure 4.18. Figure 4.21 was generated using MATLAB and

the code

s=tf(’s’);

G=1/(s+1)^2;

nyquist(G)

Looking at Figure 4.21 alone, can we tell that the feedback system with unity gain (K = 1) is

stable? No. By the Nyquist criterion, we must know how many encirclements there must be

of the critical point −1, that is, we must know the number of open-loop unstable poles there

are. In Figure 4.21 there are no encirclements of −1; therefore there must be no unstable

open-loop poles for us to conclude feedback stability.

2. MATLAB’s limitations. We also clarify that, if there are imaginary axis open-loop poles,

MATLAB will not do the indentation around them, and therefore the Nyquist plot computed


by MATLAB will not be a closed contour. Consequently, you yourself must close the contour

to be able to count encirclements.

3. Continued. Let us continue with Figure 4.21, the Nyquist plot of

P(s)C(s) = 1/(s + 1)².

The feedback loop is stable for all positive K. This P(s)C(s) has no right half-plane zeros and we say it is therefore minimum phase, for a reason to be given now. Suppose that we multiply P(s)C(s) by the factor (1 − s)/(1 + s). Then

P(s)C(s) = [1/(s + 1)²] · [(1 − s)/(1 + s)].

The Nyquist plot of this P (s)C(s) is Figure 4.22. Look at the effect of the factor (1−s)/(1+s).

The Nyquist plot now intersects the negative real axis at −1/2, and therefore, starting at

K = 1, if K is increased the feedback loop will become unstable at K = 2. Thus the extra

factor (1 − s)/(1 + s) in P (s)C(s) has reduced the amount of beneficial high gain we may

employ. The factor

1−s

1+s

is called an all-pass transfer function: Its magnitude Bode plot equals 1 at all frequencies,

meaning it will pass a sinusoid of any frequency without attenuating its amplitude. An all-

pass transfer function has pole-zero symmetry with respect to the imaginary axis: If there is

a zero at s = z, then there is a pole at its reflection across the imaginary axis. A transfer

function with at least one right half-plane zero is said to be non-minimum phase, because

the right half-plane zero has made the phase more negative than it would be without the right

half-plane zero. The conclusion is that the achievable performance for a non-minimum-phase

plant is generally less than for the analogous minimum-phase plant.
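Both claims made here, the unit magnitude of the all-pass factor and the new negative-real-axis crossing at −1/2, are easy to verify numerically:

```python
allpass = lambda s: (1 - s) / (1 + s)
pc = lambda s: allpass(s) / (s + 1) ** 2      # the modified P(s)C(s)

# the all-pass factor has unit magnitude at every frequency
assert all(abs(abs(allpass(1j * w)) - 1) < 1e-12 for w in (0.1, 1.0, 7.0, 30.0))

# the Nyquist plot crosses the negative real axis at w = 1, at the point -1/2
v = pc(1j)
assert abs(v.imag) < 1e-12 and abs(v.real + 0.5) < 1e-12
print(v)  # so instability sets in at -1/K = -1/2, i.e. at K = 2
```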

Figure 4.22: Nyquist plot of [1/(s + 1)²] · [(1 − s)/(1 + s)].

4. Time delay. A more familiar illustration of the preceding point is offered by the effect of

sensor-actuation non-collocation. Imagine a shower stall where the valve to adjust the ratio of

hot-to-cold water is some distance before the shower head. The valve is the actuator and your

skin contains the temperature sensors. Suppose you are standing in the shower with the water

running, and the temperature of the water at your skin is too cold. You adjust the valve, but

it takes some time to feel the change in water temperature at your skin. It is clearly harder

to control the water temperature with this delay. Mathematically, it is harder to control the

plant P (s)e−sT than it is to control the plant P (s). The time-delay term e−sT is an example

of a non-rational all-pass transfer function.

5. Distance from instability. If a feedback loop is stable, how stable is it? In other words, how

far is it from being unstable? This depends entirely on our plant model, how we got it, and

what uncertainty there is about the model. In the frequency domain context, uncertainty is

naturally measured in terms of magnitude and phase as functions of frequency. There are

three measures of stability margin.

6. Phase margin. The first measure is called the phase margin. Consider

C(s) = 2,  P(s) = 1/(s + 1)².

The Nyquist plot of P (s)C(s) is shown in Figure 4.23. Since the gain of 2 is incorporated into

the Nyquist plot, the critical point is −1. There are no encirclements of the critical point, so

the feedback system is stable. The phase margin is related to the distance from the critical

point −1 to the point where the Nyquist plot crosses the unit circle. Precisely, draw the unit

circle and draw the straight radial line from the origin through the point where the circle

intersects the Nyquist plot. See Figure 4.24. The phase margin is the angle shown as PM.

In the present example, this equals 90 degrees. So mathematically,

PM = 180 + (the phase of P(jω)C(jω) at the ω where |P(jω)C(jω)| = 1).

Figure 4.24: The unit circle and the phase margin PM on the Nyquist plot of 2/(s + 1)².

The value of ω where |P (jω)C(jω)| = 1 is called the gain crossover frequency, denoted ωgc .

MATLAB has a function to compute PM. Phase margin is widely used to predict “ringing”,

meaning, oscillatory response. But it must be used with caution. It does not measure how

close the feedback loop is to being unstable. More precisely, if the phase margin is small (say

5 or 10 degrees), then you are close to instability and there may be ringing, but if the phase

margin is large (say 60 degrees), then you can’t conclude anything—the Nyquist plot could

still be dangerously close to the critical point.
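For this example the phase margin can be computed without MATLAB. The sketch below finds the gain crossover of 2/(s + 1)² by bisection on |PC(jω)| = 1 and evaluates the phase there:

```python
import math

mag = lambda w: 2 / (w * w + 1)                    # |2/(jw+1)^2|
phase_deg = lambda w: -2 * math.degrees(math.atan(w))

lo, hi = 0.0, 100.0                                # bisect |PC(jw)| = 1
for _ in range(100):
    mid = (lo + hi) / 2
    if mag(mid) > 1:
        lo = mid
    else:
        hi = mid
wgc = (lo + hi) / 2
pm = 180 + phase_deg(wgc)
print(wgc, pm)   # gain crossover about 1 rad/s, phase margin about 90 degrees
```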

7. Gain margin. The second measure of stability margin is called the gain margin. It is a measure

of the distance from the critical point to where the Nyquist plot intersects the negative real

axis. Consider Figure 4.22. The critical point is −1 and therefore the nominal gain is K = 1. The gain can be increased until −1/K = −1/2, that is, K = 2. Gain margin is measured in dB, so

GM = 20 log₁₀ 2 = 6 dB.

Mathematically, GM = −20 log₁₀ |P(jω_pc)C(jω_pc)|, where ω_pc, the phase crossover frequency, is the frequency at which the Nyquist plot crosses the negative real axis.

8. Stability margin. The third measure of stability margin is called the stability margin. It

equals the actual distance from the critical point to the closest point on the Nyquist plot.

This is the best of the three. One can construct a pernicious example where the phase and

gain margins are acceptable and yet the Nyquist plot comes dangerously close to the critical

point.


Figure 4.25: Nyquist plot of 2(s + 1)/(s(s − 1)) via MATLAB.


9. Be careful. We emphasize that the stability margins are defined only when the feedback loop

is stable—the system has to be stable before it has a margin of stability.

10. A final example. Take

C(s) = 2,  P(s) = (s + 1)/(s(s − 1)).

The Nyquist plot of P (s)C(s) as computed by MATLAB is shown in Figure 4.25. MATLAB

cannot close the curve, so we must. See Figure 4.26. The critical point is −1 and we need

1 CCW encirclement, so the feedback system is stable. The phase margin computed by

MATLAB is 37 degrees. We see from Figure 4.26 that, starting from K = 1, we can increase

K without limit without instability occurring, but we can decrease K only to 0.5 at which point

the feedback loop is unstable. MATLAB computes the gain margin to be −6 dB, meaning

the gain can be reduced by a factor of 2 until instability occurs. The stability margin, as read

from the Nyquist plot, is about 1/2. The precise value is 0.57 and is computed by plotting

the Bode plot of 1/(1 + P C), reading off the peak magnitude, and taking the reciprocal.


Figure 4.27: Bode plot of G(s) = T s + 1, T = 10, piecewise-linear approximation in dashed line.

1. Limitation of Nyquist. Suppose you have tentatively selected a controller but the performance

under simulation is inadequate and you therefore want to improve the controller by altering it.

The Nyquist plot tells all we need to know about stability of a feedback loop. Its limitation is

that it is not very convenient for designing a controller. The Bode plot, based on the fact that

the logarithm of a product equals the sum of the logarithms, was invented for the purpose of

design, where a plant P (s) is multiplied by a controller C(s). Therefore we need to translate

the Nyquist criterion into a condition on Bode plots.

2. Review of Bode plots. We now review Bode plots. The Bode plot of a transfer function G(s)

consists of two diagrams: the magnitude of G(jω) versus ω and the angle of G(jω) versus ω.

The axes are as follows: ω is in radians/second on a logarithmic scale; |G(jω)| is converted

to decibels and then the scale is linear in dB; ∠G(jω) is in degrees on a linear scale. You are

encouraged to be able to draw simple Bode plots by hand—we will see why this is a useful

skill. For example, take G(s) = sⁿ, n a positive integer. The magnitude is |G(jω)| = ωⁿ. On a log-log scale the graph of ωⁿ versus ω is a straight line of slope n. With the vertical axis in dB, the graph is a straight line of slope 20n dB per decade. That is, if ω is multiplied by 10, 20n dB is added to the magnitude. On the other hand, the graph of the phase of G(s) is a horizontal line at 90n degrees.

3. Continued. Terms like G(s) = T s + 1, T positive, come up frequently. For T = 10 the Bode

plot and its piecewise-linear approximation are shown in Figure 4.27. At low frequency the

magnitude is approximately 1, or 0 dB. At high frequency the magnitude is approximately

T ω, the graph of which is a straight line of slope 20 dB/dec. These two straight lines meet at

the point where ω = 1/T and the magnitude is 0 dB. So the piecewise-linear approximation

of the magnitude plot is as shown. The piecewise-linear approximation of the phase is the

dashed line shown. The two corners of the phase approximation are at frequencies 1/(10T) and 10/T.
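A quick numerical check of the corner-frequency rule: at ω = 1/T the true magnitude of Tjω + 1 is √2, about 3 dB above the 0 dB asymptote, and a decade-to-decade comparison well above the corner exhibits the 20 dB/decade slope.

```python
import math

T = 10.0
db = lambda x: 20 * math.log10(x)
mag = lambda w: abs(complex(1, T * w))       # |1 + jTw|

print(db(mag(1 / T)))                        # about 3.01 dB at the corner w = 1/T
slope = db(mag(100 / T)) - db(mag(10 / T))   # over one decade, well above the corner
print(slope)                                 # about 20 dB/decade
```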

4. Example. Consider

G(s) = 40s²(s − 2)/((s − 5)(s² + 4s + 100)).

Its Bode plot is drawn in Figure 4.28. This was produced by MATLAB. Notice that the

magnitude axis is in decibels, the phase axis in degrees, and the frequency axis in radians/s.

This G(s) has a right half-plane zero and a right half-plane pole; also two zeros at the origin;

and lightly damped resonant complex poles. For s = jω and small ω,

G(jω) ≈ 40(jω)²(−2)/((−5)(100)) = 0.16(jω)²,

which has magnitude equal to zero at DC, increasing with slope 40 dB/decade, and phase of 180 degrees. At high frequency G(jω) ≈ 40, which has zero phase.

5. Stability margins. Let us now look at stability margins in terms of Bode plots. Consider

P (s)C(s) = 2/(s + 1)2 and assume there are no right half-plane pole-zero cancellations. The

Nyquist plot is in Figure 4.24. Since there are no unstable open-loop poles, and since there is

no encirclement of the critical point, the feedback loop is stable. As we see from the Nyquist

plot, the gain margin is infinite since the Nyquist plot does not cross the negative real axis,

while the phase margin is 90 degrees. The MATLAB commands

Figure 4.29: Bode plot of 2/(s + 1)² with margins: Gm = Inf dB (at Inf rad/s), Pm = 90 deg (at 1 rad/s).

s=tf(’s’);

PC=2/(s+1)^2;

margin(PC)

produce Figure 4.29, which is the Bode plot together with the calculation of the gain and

phase margins. The phase margin is shown on the vertical line on the phase plot. In detail, at

the frequency where the magnitude of P C equals 1 (the gain crossover frequency), the phase

is 90 degrees above where it needs to be for stability. That is, at the gain crossover frequency,

90 degrees of additional phase lag can be accommodated before the feedback loop becomes

unstable.

6. Example. As a second example, let us take P (s)C(s) = 2/(s + 1)3 . The Nyquist plot is shown

in Figure 4.30. The closed loop satisfies the Nyquist criterion and therefore the feedback

loop is stable. We see that the gain margin is finite this time. The phase and gain margins

are shown explicitly in Figure 4.31. The gain margin is shown as the vertical line on the

magnitude plot, and the phase margin is shown as the vertical line on the phase plot. At the

phase crossover frequency, the gain can be increased by 12 dB before instability results.
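The 12 dB figure can be checked by hand: the phase of 2/(jω + 1)³ is −3 arctan ω, which reaches −180° at ω = √3 ≈ 1.73, where the magnitude is 2/(ω² + 1)^{3/2} = 1/4. In Python:

```python
import math

wpc = math.sqrt(3)                     # -3*atan(w) = -180 deg at w = sqrt(3)
gain = 2 / (wpc * wpc + 1) ** 1.5      # |2/(jw+1)^3| at the phase crossover
gm_db = -20 * math.log10(gain)
print(wpc, gain, gm_db)                # about 1.73 rad/s, 0.25, 12 dB
```

These match the MATLAB-computed values shown in Figure 4.31.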

7. Example. As a third example let us take P (s)C(s) = 2(s + 1)/(s(s − 1)). The Nyquist plot

Figure 4.30: Nyquist plot of 2/(s + 1)³.

Figure 4.31: Bode plot of 2/(s + 1)³ with margins: Gm = 12 dB (at 1.73 rad/s), Pm = 67.6 deg (at 0.766 rad/s).

Figure 4.32: Bode plot of 2(s + 1)/(s(s − 1)) with margins: Gm = −6.02 dB (at 1 rad/s), Pm = 36.9 deg (at 2 rad/s).

is in Figure 4.26. We see that the Nyquist test holds—one CCW encirclement of the critical

point. The Bode plot is in Figure 4.32. The gain margin equals −6 dB, meaning the gain can

be reduced by half before instability results, and the phase margin equals 37 degrees.

8. Stability margin. Finally, the stability margin, SM. It is defined to be the distance from the critical point −1 to the closest point on the Nyquist plot:

SM = min_ω |−1 − P(jω)C(jω)| = min_ω |1 + P(jω)C(jω)|.

Thus

1/SM = max_ω 1/|1 + P(jω)C(jω)|.

Recall that

S(s) = 1/(1 + P(s)C(s))

is the transfer function from r to e in the standard feedback loop. We conclude that 1/SM equals the maximum height of the Bode magnitude plot of the transfer function S = 1/(1 + PC).
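For the final example, PC = 2(s + 1)/(s(s − 1)), the stability margin can be estimated by a direct grid search on min_ω |1 + P(jω)C(jω)|. A fine grid (pure Python, sketch only) gives a value of about 0.56, consistent with the roughly 0.57 quoted earlier:

```python
pc = lambda s: 2 * (s + 1) / (s * (s - 1))

# grid over w = 0.01 .. 1000 rad/s; SM = min distance from -1 to the Nyquist plot
ws = [0.01 * k for k in range(1, 100001)]
sm = min(abs(1 + pc(1j * w)) for w in ws)
print(sm)   # about 0.56
```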

4.9 Problems

1. Consider the block diagram in Figure 4.8. Find the transfer function from d to y.

Solution In matrix form the loop equations are

[1  P; −C  1] [E; U] = [R; D].

Setting R = 0 and solving for U by inverting the matrix,

U = [0  1] [1  P; −C  1]⁻¹ [0; D].

This gives

U = (1/(1 + PC)) D.

Since Y = PU,

Y = (P/(1 + PC)) D,

and we conclude that the transfer function from D to Y equals P/(1 + PC).

2. Consider the cart-pendulum system of Figure 4.2 but take the output y to be the pendulum

angle x2 instead of the cart position. Carry through the development in Section 4.1, Para-

graph 1 and see what happens. For example, try to see if you can stabilize the feedback

loop.

Solution Now the transfer function from u to y equals

−s²/(s²(s² − 29.4)),

which, after cancelling the common factor s², equals

−1/(s² − 29.4).

This is the apparent plant from u to y, the pendulum angle. The feedback loop cannot

be stabilized because, for any feedback controller, the closed-loop matrix Acl will have two

eigenvalues at 0.

3. Consider the standard feedback loop where P (s) = 1/(s + 1) and C(s) = K.

(a) Find the minimum K > 0 such that the steady-state absolute error |e(t)| is less than or

equal to 0.01 when r is the unit step and d = 0.

(b) Find the minimum K > 0 such that the steady-state absolute error |e(t)| is less than or

equal to 0.01 for all inputs of the form

with d = 0.

Solution For the first part,

E(s) = [1/(1 + P(s)C(s))] R(s) = [1/(1 + K/(s + 1))] (1/s) = (s + 1)/(s(s + 1 + K)).

The final value of e(t) equals 1/(1 + K). Thus the minimum K satisfies

1/(1 + K) = 0.01,

i.e., K = 99.

For the second part, the steady-state e(t) is a sinusoid of amplitude

|(jω + 1)/(jω + 1 + K)|.

Setting the amplitude at ω = 4 equal to 0.01,

|(4j + 1)/(4j + 1 + K)| = 0.01,

i.e.,

K = √(17 × 10⁴ − 16) − 1 = 411.3.
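Both numbers check out when evaluated directly:

```python
import math

K1 = 99
assert abs(1 / (1 + K1) - 0.01) < 1e-15          # step input: final error is 0.01

K2 = math.sqrt(17e4 - 16) - 1                    # about 411.3
amp = abs((1 + 4j) / (1 + 4j + K2))
print(K2, amp)                                   # amplitude comes out 0.01
```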

4. Consider

P(s) = 1/(s² − 1),  C(s) = (s − 1)/(s + 1).

Is the feedback loop stable?

Solution No. There's a right half-plane pole-zero cancellation.

5. Consider

P(s) = 5/(s + 1),  C(s) = K1 + K2/s.

It is desired to find constants K1 and K2 so that (i) the closed-loop poles (i.e., roots of the

characteristic polynomial) lie in the half-plane Re s < −4 (this is for a desired speed of

transient response), and (ii) when r(t) is the ramp of slope 1 and d = 0, the final value of the

absolute error |e(t)| is less than or equal to 0.05. Draw the region in the (K1 , K2 )-plane for

which these two specs are satisfied.

Solution The characteristic polynomial is

s² + (1 + 5K1)s + 5K2.

Define r = s + 4, so that the real part of s is less than −4 if and only if the real part of r is negative. The polynomial in terms of r is

(r − 4)² + (1 + 5K1)(r − 4) + 5K2 = r² + (5K1 − 7)r + (12 − 20K1 + 5K2).

By Routh-Hurwitz, the zeros of the r-polynomial are in the open left half-plane if and only if the coefficients are positive:

5K1 − 7 > 0,  12 − 20K1 + 5K2 > 0.

For the ramp input,

E(s) = [s(s + 1)/(s(s + 1) + 5(K1 s + K2))] (1/s²).

The final value of e(t) equals 1/(5K2), and thus the spec is

1/(5K2) ≤ 0.05.

The two specs together are therefore defined by the inequalities 5K1 − 7 > 0, 12 − 20K1 + 5K2 > 0, and K2 ≥ 4.

6. Standard feedback loop. Suppose P (s) = 5/(s + 1), r(t) = 0, and d(t) = sin(10t). Design a

proper C(s) so that the feedback system is stable and e(t) converges to 0.

Solution The transfer function from d to e equals

−P/(1 + PC).

The Laplace transform D(s) has poles at s = ±10j. So for e(t) to converge to 0, the transfer

function P/(1 + P C) must have zeros at s = ±10j. For this to happen, C(s) must have poles

at s = ±10j. So let's try the controller

C(s) = (as + b)/(s² + 100).

We need to choose a, b, if possible, to stabilize the feedback loop. The characteristic polynomial is

(s + 1)(s² + 100) + 5(as + b) = s³ + s² + (100 + 5a)s + (100 + 5b).

By Routh-Hurwitz, this cubic is stable iff 100 + 5a > 0, 100 + 5b > 0, and 1 · (100 + 5a) > 100 + 5b, that is, a > b. For example, a = 1, b = 0 works.
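Whether trial values of a and b stabilize the loop can be checked with the Routh-Hurwitz test for a cubic (s³ + c₂s² + c₁s + c₀ is stable iff all coefficients are positive and c₂c₁ > c₀). The values a = 1, b = 0 below are one workable choice, not the only one:

```python
def stable_cubic(c2, c1, c0):
    # Routh-Hurwitz conditions for s^3 + c2 s^2 + c1 s + c0
    return c2 > 0 and c1 > 0 and c0 > 0 and c2 * c1 > c0

def closed_loop_stable(a, b):
    # characteristic polynomial (s+1)(s^2+100) + 5(as+b)
    return stable_cubic(1.0, 100 + 5 * a, 100 + 5 * b)

print(closed_loop_stable(1, 0))   # True
print(closed_loop_stable(0, 1))   # False: 1*100 > 105 fails
```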

7. Consider the standard feedback loop with controller KC(s) and plant P(s), where

P(s) = (s + 2)/(s² + 2),  C(s) = 1/s.

Sketch the Nyquist plot of P C. How many encirclements are required of the critical point for

feedback stability? Determine the range of real gains K for stability of the feedback system.

8. Repeat with

P(s) = (4s² + 1)/(s(s − 1)²),  C(s) = 1.

9. Repeat with

P(s) = (s² + 1)/((s + 1)(s² + s + 1)),  C(s) = 1.

10. Sketch the Nyquist plot of

G(s) = s(4s² + 5s + 4)/(s² + 1)².

How many times does it encircle the point (1, 0)? What does this say about the transfer

function G(s) − 1?

11. Consider

P(s) = 10/((5s + 1)(s + 2)),  C(s) = K/s.

Find the range of gains K for which the feedback loop is stable.

12. Repeat with

P(s) = (−0.1s + 1)/(s(s − 1)(s² − 1)).

13. The Bode plot of P (s)C(s) is given below (magnitude in absolute units, not dB; phase in

degrees). The phase starts at −180◦ and ends at −270◦ . Sketch the Nyquist plot of P (s)C(s).

Assuming the feedback system is stable, what are the gain and phase margins?

(Bode plot for Problem 13: magnitude falling from about 10 to 10⁻², phase going from −180° to −270°, for ω from 10⁻² to 10 rad/s.)

14. Consider

C(s) = K,  P(s) = (s² + 1)/((s² + 2)(s − 2)).

(a) Sketch the Nyquist plot of P (s). Include correct asymptotes to infinity and to the origin.

(b) Indicate the number of encirclements (and their orientations) of all possible critical points.

(c) For what range of K is the feedback system stable?

15. Consider

P(s) = (−0.1s + 1)/((s + 1)(s − 1)(s² + s + 4)).

What are the phases of P(jω) near DC and at high frequency? Using MATLAB, compute the Bode plot of P, thus checking your answer.

Solution For s small, P (s) ≈ −1/4, whose phase is π or −π, that is, ±180 degrees. When s

is large, P(s) ≈ −0.1/s³ and so

P(jω) ≈ −0.1/(jω)³ = 0.1/(jω³).

The angle of 1/j is −90 degrees, so this is the high-frequency phase. Thus the Nyquist plot

approaches the origin tangent to the negative imaginary axis.


16. Consider the plants

P1(s) = 1/(s − 1),  P2(s) = 1/(10s − 1).

Do you think one is harder to stabilize than the other?

Solution The transfer functions have equal DC gain and are unstable. One has a pole at s = 1

and the other at s = 0.1. One way to answer the question is to see how much gain is required to stabilize. For P(s) = 1/(s − 1) and C(s) = K, K > 1 is required, while for P(s) = 1/(10s − 1) and C(s) = K, K > 1 is still required. From this viewpoint, there's no difference.

17. Consider the standard feedback system with C(s) = K. The Bode plot of P (s) is given in

Figure 4.33 (magnitude in absolute units, not dB; phase in degrees). The phase starts at

−180◦ and ends at −270◦ . You are also given that P (s) has exactly one pole in the right

half-plane. For what range of gains K is the feedback system stable?

18. Consider

C(s) = 16,  P(s) = e^{−s}/((s + 1)(30s + 1)(s²/9 + s/3 + 1)).

This plant has a time delay, making the transfer function irrational. It is common to use a


Padé approximation of the time delay. The second-order Padé approximation is

e^{−s} ≈ (s² − 6s + 12)/(s² + 6s + 12),

which is a rational allpass function. Using this approximation in P (s), graph using MATLAB

the Bode plot of P (s)C(s). Can you tell from the Bode plot that the feedback system is

stable? Use MATLAB to get the gain and phase margins. Finally, what is the stability

margin (distance from the critical point to the Nyquist plot)?
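The stated all-pass property of the Padé factor, and the quality of the approximation at low frequency, can be checked directly (the phase of e^{−jω} is exactly −ω):

```python
import cmath

pade = lambda s: (s * s - 6 * s + 12) / (s * s + 6 * s + 12)

# all-pass: unit magnitude everywhere on the imaginary axis
assert all(abs(abs(pade(1j * w)) - 1) < 1e-12 for w in (0.1, 1.0, 5.0, 50.0))

# phase matches that of e^{-jw}, namely -w, at low frequency
for w in (0.2, 1.0, 3.0):
    print(w, cmath.phase(pade(1j * w)), -w)   # agreement degrades as w grows
```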

19. Consider

P(s) = 10/(s² + 0.3s + 1),  C(s) = 5.

The Bode plot of the transfer function S is shown in Figure 4.34.

(a) Show that the feedback system is stable by looking at the closed-loop characteristic

polynomial.

(b) What is the distance from the critical point to the Nyquist plot of P C?

(c) If r(t) = cos(t), what is the steady-state amplitude of the tracking error e(t)?

Solution

(a) The characteristic polynomial is s² + 0.3s + 51. All the coefficients are positive; by Routh-Hurwitz, the feedback loop is stable.


(b) From the Bode plot, the peak magnitude of S equals about 25 dB, or 10^{25/20} = 17.78. Thus the stability margin equals 1/17.78 = 0.0562.

(c) The sinusoid r has magnitude 1 and is a sinusoid of frequency 1 rad/s. The magnitude

of S(jω) at ω = 1 equals 0.006. This equals the steady-state amplitude of e(t).
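These answers can be reproduced by evaluating S(jω) = 1/(1 + 50/((jω)² + 0.3jω + 1)) directly. A grid search (sketch below) recovers the ω = 1 value; the peak it finds comes out somewhat above the approximately 25 dB read off the plot, since a plot reading is only rough:

```python
S = lambda s: 1 / (1 + 50 / (s * s + 0.3 * s + 1))   # S = 1/(1 + PC)

print(abs(S(1j)))                                    # about 0.006: answer to (c)

ws = [0.001 * k for k in range(1, 30001)]            # w = 0.001 .. 30 rad/s
peak = max(abs(S(1j * w)) for w in ws)
print(peak, 1 / peak)                                # peak of |S| and SM = 1/peak
```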

20. Consider the complex block diagram in Figure 4.9. What would be a good definition for the

characteristic polynomial of this system?

Solution The idea is to generalize what we did in Section 2.5, namely, build up a state model

for the overall system from state models for the components, and then take the characteristic

polynomial of the overall system to be the characteristic polynomial of the overall A-matrix.

For simplicity we will assume each transfer function is strictly proper. In the block diagram

erase x1 and x2 (because we need to use them for states) and set r = 0 (because we need

only the closed-loop A-matrix and not B or C). For each i = 1, . . . , 5, label ui as the input

and yi as the output of Pi (s); also, let xi denote the state. Then we have state models of the

following form:

ẋ1 = A1 x1 + B1 u1

..

.

ẋ5 = A5 x5 + B5 u5 .

Combine these equations by defining

x = (x1, . . . , x5),  u = (u1, . . . , u5)

and the block-diagonal matrices

A = diag(A1, A2, A3, A4, A5),  B = diag(B1, B2, B3, B4, B5),

so that we have ẋ = Ax + Bu. Now write the output equation for each block,

y1 = C1 x1

..

.

y5 = C5 x5 ,

and assemble them as y = Cx, where C = diag(C1, . . . , C5). Finally, write the input equations taking into account the

summing junctions

u1 = −y5

u2 = y1 − y4

u3 = −y5

u4 = y2

u5 = y2


These equations have the form u = Ky for a matrix K whose entries are 0s and ±1s, so ẋ = (A + BKC)x. The characteristic polynomial of the system equals the characteristic polynomial of the matrix A + BKC. In particular, the system is stable iff all the eigenvalues of A + BKC have negative real parts.
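The assembly can be sketched with hypothetical scalar blocks (Ai = −i and Bi = Ci = 1, chosen only for illustration, so that A + BKC reduces to A + K); the matrix K encodes the summing junctions:

```python
n = 5
# hypothetical 1x1 blocks: Ai = -i on the diagonal
A = [[-(i+1) if i == j else 0 for j in range(n)] for i in range(n)]
# interconnection u = K y from: u1=-y5, u2=y1-y4, u3=-y5, u4=y2, u5=y2
K = [[0, 0, 0, 0, -1],
     [1, 0, 0, -1, 0],
     [0, 0, 0, 0, -1],
     [0, 1, 0, 0, 0],
     [0, 1, 0, 0, 0]]
# with B = C = identity, the closed-loop matrix is A + K
M = [[A[i][j] + K[i][j] for j in range(n)] for i in range(n)]
assert M[0][4] == -1 and M[1][0] == 1 and M[1][3] == -1
assert all(M[i][i] == -(i+1) for i in range(n))
```

With genuine matrix blocks the same pattern holds, with each entry of K replaced by a zero or ±identity block of the right size.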

Chapter 5

Design

This chapter is an introduction to classical control design, which is for stable plants, or, at worst,

plants with poles at s = 0. We introduce phase lag and phase lead controllers, also called compen-

sators. These very simple, first-order controllers are used traditionally to increase the phase margin,

with the goal of reducing oscillations in the response to a step change in the reference input. The phase lag controller

does have a negative phase Bode plot, which would not be useful in itself because it would typically

reduce the phase margin and therefore move the feedback loop closer to instability. The phase lag

controller is actually used to move the gain crossover frequency to a lower frequency where the

phase margin is adequate. The phase lead controller, on the other hand, has positive phase and can

therefore be used to increase the phase margin.

PID (proportional-integral-derivative) controllers are limiting cases of lag and lead controllers.

In particular, the integral controller has a phase lag of 90 degrees at every frequency, and a

proportional-derivative controller has a phase lead at every frequency.

Design using phase lag and phase lead controllers is an example of loopshaping, meaning the

Bode plot or Nyquist plot of the open-loop transfer function P (s)C(s) is altered or shaped to

have desirable properties. Loopshaping is the basic design technique in classical control. Classical

control, having been developed for electric machines, cannot handle open-loop unstable plants. For

example, it cannot be used for the magnetic levitation system.

In the final section of the chapter we look at limitations to achievable performance. These

observations are due to Bode in his profound 1945 book Network Analysis and Feedback Amplifier

Design.

5.1 Loopshaping

1. Design problem. Consider the unity feedback system in Figure 5.1. The design problem is this:

Given P (s), the nominal plant transfer function, maybe some uncertainty bounds, and some

performance specs, design an implementable C(s). The performance specs would include,

as a bare minimum, stability of the feedback system. The simplest situation is where the

performance can be specified in terms of the transfer function

S := 1/(1 + P C),

called the sensitivity function.

CHAPTER 5. INTRODUCTION TO CLASSICAL CONTROL DESIGN 116

[Figure 5.1: unity feedback loop; blocks C and P, signals r, e, u, y.]

2. Terminology. Here’s the reason for this name. Denote by T the transfer function from r to y,

namely,

T = P C/(1 + P C).

Of relevance is the relative perturbation in T due to a relative perturbation in P :

∆T /T ∆T P

lim = lim

∆P →0 ∆P/P ∆P →0 ∆P T

dT P

= ·

dP T

d PC 1 + PC

= ·P ·

dP 1 + PC PC

= S.

plant transfer function.
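The sensitivity interpretation can be illustrated numerically with static gains (the values P = 10, C = 1 here are hypothetical, chosen only for the check):

```python
# static gains for illustration only
P, C = 10.0, 1.0
T = P*C/(1 + P*C)      # closed-loop gain
S = 1/(1 + P*C)        # sensitivity

dP = 0.001*P           # a 0.1% plant perturbation
T2 = (P + dP)*C/(1 + (P + dP)*C)
# relative change in T per relative change in P
ratio = ((T2 - T)/T) / (dP/P)
assert abs(ratio - S) < 1e-3
```

So a 1% plant error produces only about S × 1% error in T; high loop gain (small S) desensitizes the closed loop to plant variations.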

3. Specs in terms of S. For us, S is important for two reasons: First, S is the transfer function

from r to e. Thus we want |S(jω)| to be small over the range of frequencies of r. Secondly, the

peak magnitude of S is the reciprocal of the stability margin. Thus a typical desired magnitude

plot of S is as in Figure 5.2. Here ω1 is the maximum frequency of r, ε is the maximum

permitted relative tracking error, ε < 1, and M is the maximum value of |S|, M > 1. If |S|

has this shape and the feedback system is stable, then for the input r(t) = cos ωt, ω ≤ ω1, we have |e(t)| ≤ ε in steady state, and the stability margin is at least 1/M. A typical value for M is 2 or 3.

In these terms, the design problem can be stated as follows: Given P, M, ε, ω1 ; design C so

that the feedback system is stable and |S| satisfies |S(jω)| ≤ ε for ω ≤ ω1 and |S(jω)| ≤ M

for all ω.

4. Example. Take

P(s) = 10/(0.2s + 1).

This is a typical transfer function of a DC motor. Let's take a PI controller:

C(s) = K1 + K2/s.

[Figure 5.2: desired magnitude plot of S: |S(jω)| at most ε up to ω1, at most M at all frequencies.]

S(s) = 1 / [1 + 10(K1 s + K2)/(s(0.2s + 1))]
     = s(0.2s + 1) / [0.2s² + (1 + 10K1)s + 10K2].

Choose K1 and K2 to place a double closed-loop pole at s = −K3, that is, 1 + 10K1 = 0.4K3 and 10K2 = 0.2K3²; then

S(s) = 5s(0.2s + 1)/(s + K3)².

But K3 is still freely designable. Sketch the Bode plot of S and confirm that any M > 1, ε < 1, ω1 can be achieved.
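A quick numerical sketch of this claim (K3 = 100 is an illustrative value of my own): with a large K3 the sensitivity is tiny at low frequency, while |S| still approaches 1 at high frequency, consistent with the desired shape in Figure 5.2.

```python
def S(s, K3):
    # S(s) = 5 s (0.2s + 1)/(s + K3)^2 after placing a double pole at -K3
    return 5*s*(0.2*s + 1)/(s + K3)**2

K3 = 100.0   # illustrative; larger K3 pushes |S| down over a wider band
assert abs(S(1j*1.0, K3)) < 1e-3              # good tracking at 1 rad/s
assert abs(abs(S(1j*1e6, K3)) - 1.0) < 1e-2   # |S| -> 1 at high frequency
```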

An alternative is to specify performance in terms of the loop transfer function L := P C instead of S = 1/(1 + L). Notice that if |L| is much greater than 1, then |S| ≈ 1/|L|. A typical desired plot is shown in Figure 5.3. In shaping |L|, we don't

have a direct handle on the stability margin, unfortunately. However, we do have control

over the gain and phase margins, as we’ll see. In the next two sections we present two simple

loopshaping controllers.

5.2 Phase lag control

1. Cruise control. We tackle the example problem of cruise control of a hypothetical toy cart.

To get a plant model, we argue as follows: If we were to apply a constant force to the cart, it

would go at a constant speed. If we were to increase the force, the cart would gradually speed


[Figure 5.3: desired plot of |L| = |P C|: gain at least 1/ε up to ω1, then rolling off like |P |.]

[Figure 5.4: unity feedback loop; blocks C and P, signals r, e, u, v.]

up, but its speed would not oscillate. Thus a plausible model is first order:

v̇ = u − (1/2)v.

Here v is the velocity of the cart, u the force, and −v/2 a drag term. We would like the cart

velocity to follow our command, r. In particular, if we select an increase in speed, the cart

should respond briskly and accelerate.
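As a sanity check on this model, a forward-Euler simulation (a sketch; the step size and horizon are my own choices) confirms the open-loop step response settles at 2, consistent with a DC gain of 2 and time constant 2 s:

```python
# forward-Euler simulation of v' = u - v/2 with a unit step force u = 1
dt, v = 0.001, 0.0
for _ in range(int(20/dt)):    # simulate 20 s, ten time constants
    v += dt*(1.0 - 0.5*v)
assert abs(v - 2.0) < 1e-3     # steady-state speed approaches 2
```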


2. Specs. In control engineering it is rarely appropriate to write down precise specifications that

we may or may not be able to achieve. In industry it is common to write the specs after the

design is finished and tested (when successful achievement of specs is guaranteed). It is more

sensible to rely on previous designs and work up incrementally, starting with the simplest

controller that is likely to work well.

3. Block diagram. The block diagram is as shown in Figure 5.4: u is the force, v the velocity,

and r the velocity command. The plant transfer function is

P(s) = 2/(2s + 1).

The step response of P (s) is shown in Figure 5.5. The plot confirms that the DC gain (i.e.,

P (0)) equals 2 and the time constant (the value of T when P (s) is written as c/(T s + 1)) is

about 2 seconds. We want the DC gain from r to v to be 1; that is, we want the cart to go

at exactly the commanded speed in steady state. That requires integral control. So we take

[Figure 5.5: open-loop step response of P(s); amplitude rises to 2; time axis 0 to 12 seconds.]

C(s) = K/s. What value of K to take? Let us simply simulate for a few values of K to get

a sense of what response is feasible, recognizing that the larger the value of K, the more the

controller costs. The transfer function from r to v is

P(s)C(s)/(1 + P(s)C(s)) = 2K/(2s² + s + 2K).

Figure 5.6 shows closed-loop step responses for two values of K. The final values are correct

in both cases because of the integrator in C(s), but for K = 0.1 the response is too slow,

although there’s no overshoot, and for K = 0.8 the overshoot is too large.
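The tradeoff can also be seen analytically: the closed loop is the standard second-order form with ωn = √K and ζ = 0.25/√K, so larger K means less damping. A sketch of the standard overshoot formula (my own check of the two cases):

```python
import math

def overshoot(K):
    # closed loop 2K/(2s^2 + s + 2K), i.e. wn = sqrt(K), zeta = 0.25/sqrt(K)
    zeta = 0.25/math.sqrt(K)
    if zeta >= 1:
        return 0.0   # overdamped: no overshoot
    return math.exp(-math.pi*zeta/math.sqrt(1 - zeta*zeta))

assert overshoot(0.8) > 0.35    # roughly 40% overshoot: too large
assert overshoot(0.1) < 0.03    # essentially no overshoot, but slow
```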

4. Bode plot with first controller. Let us look at the controller C(s) = K/s, K = 0.8, in the

frequency domain. Figure 5.7 shows the Bode plot of P (s)C(s). The phase margin is 31

degrees. In general, if the PM is very small, then the stability margin SM is small too (the

converse is not necessarily true). A small PM indicates that there are closed-loop poles near

the imaginary axis. This is the cause of an oscillatory step response. If we could increase

the PM, we might reasonably expect to reduce the oscillations in the step response. Now 31

degrees is not a very small PM, but nevertheless let us see the effect of increasing it. So far

we have the controller 0.8/s, which shall be denoted by C1 (s). Let us adjoin this to P (s) and

define a modified plant

P1(s) = C1(s)P(s) = (0.8 × 2)/(s(2s + 1)) = 1.6/(s(2s + 1)).

At the end, we will put C1 (s) back into the controller.

5. Phase lag methodology. Look again at the Bode plot of P1 (s), Figure 5.7. Our goal is to

increase the phase margin, to, say, 60 degrees, and we observe that there is 60 degrees of

phase lag, but it is at 0.313 radians/second, which is not the gain crossover frequency—the

gain equals 12.5 dB at this frequency. This suggests looking for a controller with a Bode plot

like that in Figure 5.8. At the frequency 0.313 and higher, the gain is −12.5 dB, while the

phase is nearly zero.
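The two numbers read off the Bode plot can be verified directly from P1(s) = 1.6/(s(2s + 1)); a sketch (tolerances mine, allowing for graphical rounding):

```python
import cmath, math

def P1(s):
    # P1 = C1 * P = (0.8/s) * (2/(2s + 1)) = 1.6/(s(2s + 1))
    return 1.6/(s*(2*s + 1))

w = 0.313
mag_db = 20*math.log10(abs(P1(1j*w)))
phase_deg = math.degrees(cmath.phase(P1(1j*w)))
assert abs(mag_db - 12.5) < 0.5   # about 12.5 dB, as stated
assert abs(phase_deg + 120) < 3   # about -120 deg: 60 deg from -180
```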


Figure 5.6: Closed-loop step responses v(t) vs. t for the controller C(s) = K/s; K = 0.1 (blue),

K = 0.8 (green).

[Bode plot readout: Gm = Inf dB (at Inf rad/s), Pm = 31.1 deg (at 0.827 rad/s).]

Figure 5.7: Bode plot of P (s)C1 (s), C1 (s) = 0.8/s. The phase margin is 31 degrees.

[Figure 5.8: Bode plot of the desired lag compensation: magnitude 0 dB at low frequency falling to about −12.5 dB; phase dips to about −40° and returns toward 0; frequency 10⁻⁴ to 10¹ rad/s.]

6. Controller. The transfer function having a Bode plot of the form in Figure 5.8 is

C2(s) = (αT s + 1)/(T s + 1),

where α, T are positive real numbers, with α < 1. This is the phase lag controller. Its

characteristics are these:

(a) DC gain of 1.

(b) High-frequency gain of α.

(c) Phase lag concentrated in the frequency range from 0.1/T to 10/αT .

This last point is based on our analysis of the Bode plot of the term T s + 1 (and likewise

αT s + 1); see Section 4.8, Paragraph 3.

The lag design steps are these:

(a) Choose the desired phase margin; in this example, 60 degrees.

(b) Go to the frequency where the phase of P1(s) gives the desired phase margin. In this example it is 0.313 rad/s.

(c) Read off the magnitude of P1 (s). It is 12.5 dB in this example.

(d) Set α to be the reciprocal of this magnitude, namely −12.5 dB; that is, α = 10^(−12.5/20) = 0.2371.

(e) Equate the corner frequency 10/(αT) and the new gain crossover frequency, and solve for T: 10/(αT) = 0.313 ⟹ T = 134.2.
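Steps (d) and (e) are easy to reproduce numerically (a sketch; the small difference from the quoted T = 134.2 is rounding):

```python
import math

mag_db = 12.5             # |P1| at 0.313 rad/s, read from the Bode plot
alpha = 10**(-mag_db/20)  # step (d): gain reduction needed
T = 10/(alpha*0.313)      # step (e): corner 10/(alpha T) at 0.313 rad/s
assert abs(alpha - 0.2371) < 1e-3
assert 133 < T < 136      # text quotes 134.2; rounding differs slightly
```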


Figure 5.9: Closed-loop step responses: controller 0.8/s (blue); lag controller (green).

7. Putting C1 back, the final controller is

C(s) = C1(s)C2(s) = (0.8/s) × (31.95s + 1)/(134.2s + 1).

Figure 5.9 shows the step responses for the controller C1 (s) and the controller C(s). The

stability margin (recall that this is the distance from −1 to the Nyquist plot of P (s)C(s)) is

0.70, a large value.

8. Summary. Let us summarize this example. The plant is first-order, stable, and has non-

unity DC gain. We wanted the output to be able to track a step with zero steady-state

error, to respond as quickly as possible, and preferably not have any overshoot. To get zero

steady-state error we first chose an integral controller. We adjusted the gain K by simulating

and compromising between speed of response and amount of overshoot. The overshoot was, however, appreciable. We tried a phase lag controller to increase the phase margin and thereby

decrease the overshoot. The overshoot did reduce, but at the cost of slower response. The

resulting stability margin was large, a consequence of the fact that this is a very easy plant

to control.

5.3 Phase lead control

1. The phase lead transfer function. The phase lead controller has exactly the same form as the phase lag controller,

C(s) = (αT s + 1)/(T s + 1),    (5.1)

except that α > 1 instead of α < 1. Look again at the Bode plot of the phase lag controller,

Figure 5.8. The Bode plot of a phase lead controller is shown in Figure 5.10. The piecewise-

linear approximation of the magnitude is shown too. There are two corner frequencies: the


Figure 5.10: A phase lead controller; piecewise-linear approximation of magnitude also shown.

corner frequency 1/(αT), at which the numerator magnitude |αT jω + 1| starts to increase, and 1/T, at which the denominator magnitude |T jω + 1| starts to increase. The high-frequency gain is α. Also labelled on the figure are ωmax , the frequency where the phase angle is maximum,

and ϕmax , the maximum phase angle. The phase lead controller is used to increase the phase

angle at the frequency ωmax . However, this is complicated by the fact that the magnitude

is not 0 dB at the frequency ωmax . So when we try to increase the phase margin, the gain

crossover frequency will increase too.

2. Formulas. We'll need three formulas valid for the transfer function (5.1):

ωmax = 1/(T √α)    (5.2)

|C(jωmax )| = √α    (5.3)

ϕmax = sin⁻¹[(α − 1)/(α + 1)]    (5.4)

3. Proofs. The frequency ωmax is the midpoint between the corners 1/(αT) and 1/T on the logarithmically scaled frequency axis. Thus

log10 ωmax = (1/2)[log10 (1/(αT)) + log10 (1/T)] = (1/2) log10 (1/(αT²)) = log10 (1/(T √α)).

This proves (5.2). The magnitude of C(jω) at ωmax is the midpoint between 1 and α on the logarithmically scaled magnitude axis:

log10 |C(jωmax )| = (1/2)(log10 1 + log10 α) = log10 √α.

[Figure: triangle with angles θ and ϕmax and side lengths 1/√α and √α, used in the proof of (5.4) below.]

This proves (5.3). Finally, the angle ϕmax is the angle of C(jωmax ), which equals the angle of the complex number

(αT jω + 1)/(T jω + 1), at ω = ωmax , namely (1 + √α j)/(1 + (1/√α) j).

By the sine law applied to the triangle in the figure,

sin ϕmax /(√α − 1/√α) = sin θ/√(1 + 1/α),

and since sin θ = 1/√(1 + α),

sin ϕmax = (√α − 1/√α) · (1/√(1 + α)) · (1/√(1 + 1/α)) = (α − 1)/(α + 1).

This proves (5.4).
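The three formulas can be verified numerically. This sketch uses illustrative values α = 4, T = 1 (my own choice), checking that the gain at ωmax is √α and the phase there is sin⁻¹((α − 1)/(α + 1)):

```python
import cmath, math

alpha, T = 4.0, 1.0   # illustrative values, alpha > 1

def C(s):
    return (alpha*T*s + 1)/(T*s + 1)

wmax = 1/(T*math.sqrt(alpha))                            # (5.2)
assert abs(abs(C(1j*wmax)) - math.sqrt(alpha)) < 1e-12   # (5.3)
phimax = math.asin((alpha - 1)/(alpha + 1))              # (5.4)
assert abs(cmath.phase(C(1j*wmax)) - phimax) < 1e-12
```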

4. Phase lead properties. Let us summarize the properties of the phase lead controller:

(a) DC gain of 1.

(b) High-frequency gain of α.

(c) Phase lead of ϕmax at the frequency ωmax . Gain of √α at this frequency.

5. Design procedure. Now we return to the design example of Section 5.2. The plant is

P(s) = 2/(2s + 1).

CHAPTER 5. INTRODUCTION TO CLASSICAL CONTROL DESIGN 125

For the reasons given in that section, we first apply the controller C1 (s) = 0.8/s. We shall

design a lead controller C2 (s) for

P1(s) = P(s)C1(s) = 1.6/(s(2s + 1)).

The steps are these:

(a) Choose the desired phase margin; as before, 60 degrees.

(b) Decide how much phase lead is needed. The phase margin for P1(s) is 31 degrees. To get to 60 degrees we would need to add 29 degrees. But the gain crossover frequency is going to increase somewhat, to a frequency where the existing phase margin is less than 31 degrees. Let's guess that adding 35 degrees will be adequate.

(c) Set ϕmax = 35 degrees and solve (5.4) for α: α = 3.69.

(d) From formula (5.3) we will be adding a gain of √α = 1.92, or 5.67 dB, at the new gain crossover frequency. The magnitude of P1 equals −5.67 dB at ω = 1.37 rad/s. Therefore 1.37 will be the new gain crossover frequency, and thus 1.37 is the value of ωmax .

(e) Solve (5.2) for T : T = 0.38.

C2(s) = (1.4s + 1)/(0.38s + 1),

whose Bode plot has already been drawn: Figure 5.10. The final controller is

C(s) = (0.8/s) × (1.4s + 1)/(0.38s + 1).
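The design numbers follow directly from the formulas; a sketch checking steps (c)-(e) (tolerances mine):

```python
import math

phi = math.radians(35)                             # chosen phase lead, step (c)
alpha = (1 + math.sin(phi))/(1 - math.sin(phi))    # inverting (5.4)
T = 1/(1.37*math.sqrt(alpha))                      # (5.2) with wmax = 1.37 rad/s
assert abs(alpha - 3.69) < 0.01
assert abs(T - 0.38) < 0.005
assert abs(alpha*T - 1.4) < 0.02   # numerator coefficient of C2(s)
```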

The resulting stability margin is 0.76, slightly larger than for the lag controller. Figure 5.12

shows the closed-loop step response and compares it with that of the phase lag controller and

also just the integral controller 0.8/s. To test the robustness, i.e., tolerance to parameter

variations, of this controller, the drag term in the plant model was changed from −v/2 to

−v/3. For the same controller the step response is shown in Figure 5.13. The response

degrades (the overshoot is higher), but is still probably acceptable.

6. Finally, Figure 5.14 shows Bode plots of the sensitivity function S for the lag and lead con-

trollers. The lead controller is better from this point of view too, because |S| is smaller over

a wider low frequency range (better tracking capability) and its peak value is smaller (better

stability margin).

7. Summary. With the phase lead controller the overshoot did reduce very nicely and the re-

sponse speeded up. If no other factors are relevant, the phase lead controller is the best of

the three: It is the fastest, gives the least overshoot, and has a good stability margin.

5.4 Limitations

In this section we look at some limitations to classical control design, and also some limitations to

achievable performance. These will help us decide if a plant is easy or hard to control.


Figure 5.12: Closed-loop step responses: controller 0.8/s (red), lag controller (blue), and lead

controller (green).


Figure 5.13: Closed-loop step responses of the lead controller: original plant (blue) and drag term

perturbed to −v/3 (green).


Figure 5.14: Bode plots of S: lag controller (blue); lead controller (green).

1. High gain is desirable but there’s a limit. As we saw, good tracking in a feedback loop requires

high gain at low frequency (in order that |S| be small at low frequency). However, one might

argue that instability will result if the loop gain is high enough. The reasoning here is that

“everything is infinite dimensional at high enough frequency.” That is, lumped models (finite

dimensional models of circuits and mechanical systems) become distributed. Therefore you

might argue that all physical things roll off with higher and higher phase lag as frequency

increases, and therefore the Nyquist diagram crosses the negative real axis at some frequency.

And therefore, the logic would demand, there is a finite gain margin. The problem with this

argument is that the linear model becomes less and less accurate as frequency increases. So

it becomes more and more of a stretch to talk about magnitude, phase, and Nyquist plots.

2. Unstable plants. Phase lag and phase lead controllers are sometimes useful, as shown in

the preceding two sections. However, for them to be applicable, the plant has to be quite

easy to control. The most they can do is improve a feedback loop that already works quite

well. Let us look closely at a more challenging plant and see if phase lag or phase lead can

help us. The maglev system of Section 3.6, Paragraph 12 is challenging to control (although

not as challenging as the cart-pendulum). The plant transfer function from voltage input to

ball-position output is

P(s) = −19.8/[(s + 30)(s² − 1940)].

Recall that this is the transfer function of the nonlinear system linearized at a desired ball

position. The control objective is to levitate the ball, that is, to stabilize the linear feedback

loop. Can we stabilize P (s) using phase lag or phase lead controllers? Let us see. Let

C(s) be any phase lag or phase lead controller. The plant P(s) has a right half-plane pole, namely, at s = √1940. Thus for feedback stability the Nyquist plot of P(s)C(s) must have

exactly one CCW encirclement of the point −1. The Nyquist plot of P (s) itself is shown in

Figure 5.15. Notice the scaling on the axes: The plot lives in the small rectangle with corners

[Figure 5.15: Nyquist plot of P(s); both axes scaled by 10⁻⁴.]

at (0, 0), (0.00035, 0), (0.00035, 0.0002), (0, 0.0002). Being small is not the problem—we could

easily use a controller to scale the Nyquist diagram up in size. The problem is that the Nyquist

plot has a CW orientation and this cannot be changed by a phase lag or phase lead controller.

Indeed, the loopshaping of classical control cannot help us in the stabilization problem.

3. Bode’s phase formula. We don’t need Bode’s precise formula. What is more instructive

is an approximate relationship that results from it. And this relationship actually requires

a rather strong assumption. For a plant P (s) and controller C(s), assume the loop gain

L(s) = P (s)C(s) is stable, minimum phase, and of positive DC gain. If the slope of |L(jω)|

near gain crossover (that is, near the frequency where the gain equals 1) is −n, then arg(L(jω)) at gain crossover is approximately −nπ/2. What we learn from this observation is that in

transforming |P | to |P C| via, say, lag or lead compensation, we should not attempt to roll

off |P C| too sharply near gain crossover. If we do, arg(P C) will be too large near crossover,

resulting in a negative phase margin and hence instability.

4. Example.
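A minimal numerical illustration of the slope-phase rule (my own, using L(s) = 1/s and 1/s² as stand-ins for loop gains with slopes −1 and −2 at crossover):

```python
import cmath, math

# slope -1 (e.g. L = 1/s): phase is -pi/2, leaving 90 deg of phase margin
assert abs(cmath.phase(1/(1j*1.0)) + math.pi/2) < 1e-12

# slope -2 (e.g. L = 1/s^2): phase is -pi, so rolling off at -40 dB/decade
# through crossover leaves no phase margin at all
L2 = lambda s: 1/(s*s)
assert abs(abs(cmath.phase(L2(1j*1.0))) - math.pi) < 1e-12
```

This is why loopshaping designs aim for roughly a −20 dB/decade slope through gain crossover.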

5. The waterbed effect. Turn to Figure 5.16. This concerns the ability to achieve the following

spec on the sensitivity function S: Let us suppose M > 1 and ω1 > 0 are fixed. Can we make

ε arbitrarily small? That is, can we get arbitrarily good tracking over a finite frequency range,

while maintaining a given stability margin (1/M )? Or is there a positive lower bound for ε?

The answer is that arbitrarily good performance in this sense is achievable if and only if P (s)

is minimum phase. Thus, non-minimum phase plants have bounds on achievable performance:

As |S(jω)| is pushed down on one frequency range, it pops up somewhere else, like a waterbed.

The precise result is this: Suppose P (s) has a zero at s = z with Re z > 0. Let A(s) denote

the allpass factor of S(s). Then there are positive constants c1 , c2 , depending only on ω1 and

z, such that

c1 log ε + c2 log M ≥ log |A(z)⁻¹|.

[Figure 5.16: the spec on |S(jω)|: at most ε for ω ≤ ω1 and at most M at all frequencies.]

Example. Consider

P(s) = (1 − s)/[(s + 1)(s − p)]

and assume p > 0 and p 6= 1, that is, the pole p is unstable and is not cancelled by a zero.

Let C(s) be a stabilizing controller. Then

S = 1/(1 + P C) ⟹ S(p) = 0.

Thus (s − p)/(s + p) is an allpass factor of S. There may be other allpass factors, so what we can say is that A(s) has the form

A(s) = [(s − p)/(s + p)] A1(s),

where A1(s) is some allpass TF (may be 1). In this example, the RHP zero of P(s) is s = 1. Thus

|A(1)| = |(1 − p)/(1 + p)| |A1(1)|.

Since |A1(1)| ≤ 1 (an allpass function has magnitude at most 1 in the right half-plane),

|A(1)| ≤ |(1 − p)/(1 + p)|

and hence

|A(1)⁻¹| ≥ |(1 + p)/(1 − p)|.

[Figure 5.17: resistive voltage divider: source Vs , load Rl , output vo .]

[Figure 5.18: switched circuit: source Vs , switch S, load Rl , output vo .]

c1 log ε + c2 log M ≥ log |(1 + p)/(1 − p)|.

Thus, if M > 1 is fixed, log ε cannot be arbitrarily negative, and hence ε cannot be arbitrarily

small. In fact the situation is much worse if p ≈ 1, that is, if the RHP plant pole and zero are

close.

5.5 Voltage regulation

Many consumer electronic appliances, such as laptops, have electronic components that require

regulated voltages. The power may come from a battery. For example, a regulated value of 2 V

may be needed from a battery rated at 12 V. This raises the problem of stepping down a source

voltage Vs to a regulated value at a load. In this section we study the simplest such voltage regulator.

1. Let’s first model the load as a fixed resistor, Rl (subscript “l” for “load”). Then of course

a voltage dividing resistive circuit suggests itself¹: See Figure 5.17. This solution is unsatisfactory for several obvious reasons. First, it is inefficient because some current is uselessly

heating up the non-load resistor. A second reason is that the battery will drain and therefore

vo will decrease too.

2. Switched mode power supply. As a second attempt at a solution, let us try a switching circuit

as in Figure 5.18. The switch S, typically a MOSFET, opens and closes periodically according

to a duty cycle, as follows. Let T be the period in seconds. The time axis is divided into

intervals of width T :

¹ The output voltage, vo , is written lower case because eventually it's not going to be perfectly constant.


Figure 5.19: Square-wave voltage.

Figure 5.20: Switched regulator with a filter.

Over the interval [0, T ), the switch is closed for the subinterval [0, DT ) and then open for

[DT, T ), where D, the duty cycle, is a number between 0 and 1. Likewise, for every other

interval. The duty cycle will have to be adjusted for each interval, but for now let’s suppose

it is constant. The idea in the circuit is to choose D to get the desired regulated value of

vo . For example, if we want vo to be half the value of Vs , we choose D = 1/2. In this case

vo (t) would look as shown in Figure 5.19. Clearly, the average value of vo (t) is correct, but

vo (t) is far from being constant. How about efficiency? Over the interval [0, DT ), S is closed

and the current flowing is Vs /Rl . The input and output powers are equal. Over the interval

[DT, T ), S is open and the current flowing is 0. The input and output powers are again equal.

Therefore the efficiency is 100%. However we have not accounted for the power required to

activate the switches.

3. Inclusion of a filter. Having such large variations in vo is of course unacceptable. And this

suggests we need a circuit to filter out the variations. Let us try adding an inductor as in

Figure 5.20. This circuit is equivalent to the one in Figure 5.21 where the input is as shown

in Figure 5.22. A square wave into a circuit can be studied using Fourier series, which we

proceed to do.

Figure 5.21: Equivalent circuit driven by the square-wave input voltage v1 .

[Figure 5.22: graph of the input v1 (t): equal to Vs on [0, DT ) and 0 on [DT, T ).]

4. Our goal now is to see quantitatively how the filter affects the ripple in the output voltage.

Since v1 (t) is periodic, so is vo (t) in steady state. The transfer function from v1 to vo , by the

voltage divider rule for impedances, equals

H(s) = Rl /(Ls + Rl ).

The DC gain equals 1, i.e., H(0) = 1. Thus the average of vo (t) equals the average of v1 (t),

namely, DVs . The sinusoidal basis functions of period T are

1

wk (t) = √ ej2πkt/T , k = 0, ±1, ±2, . . . .

T

The scalar factor 1/√T is to normalize the sinusoid. Then the Fourier series of the input is

v1 (t) = Σk ak wk (t),

where

ak = (1/√T ) ∫₀ᵀ v1 (t) e^(−j2πkt/T ) dt = (1/√T ) ∫₀^(DT) Vs e^(−j2πkt/T ) dt.

In particular,

a0 = √T Vs D.

The k = 0 (DC) term is a0 w0 (t) = Vs D. Next,

a1 = √T Vs (1 − e^(−j2πD))/(j2π),

and so the k = 1 term is

a1 w1 (t) = Vs [(1 − e^(−j2πD))/(j2π)] e^(j2πt/T ),

whose magnitude is

|Vs [(1 − e^(−j2πD))/(j2π)] e^(j2πt/T )| = (Vs /π) sin(πD).
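The first-harmonic magnitude can be checked by approximating the Fourier integral with a Riemann sum. This sketch uses the sample values Vs = 8 and D = 0.45 (taken from the parameter table later in the chapter) with T normalized to 1:

```python
import cmath, math

Vs, D, N = 8.0, 0.45, 100000
# midpoint-rule approximation of (1/T) * integral of v1(t) e^{-j2πt/T} dt
# over one period (T = 1), which is the magnitude of the k = 1 term
s = sum(Vs*cmath.exp(-2j*math.pi*(i + 0.5)/N) for i in range(int(D*N)))/N
assert abs(abs(s) - (Vs/math.pi)*math.sin(math.pi*D)) < 1e-3
```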

Figure 5.23: Buck step-down converter.

This is maximum when D = 1/2, for which the magnitude is Vs /π. Passing this Fourier series

through the circuit gives

vo (t) = Σk bk wk (t),  bk = H(j2πk/T ) ak .

The DC term is b0 w0 (t) = Vs D, the first-harmonic coefficient is

b1 = H(j2π/T ) a1 = [Rl T /(j2πL + Rl T )] √T Vs (1 − e^(−j2πD))/(j2π),

and then it turns out that

|b1 w1 (t)| = |Rl T /(j2πL + Rl T )| (Vs /π) sin(πD) ≤ Vs Rl T /(2π² L).

Clearly, this upper bound can be made arbitrarily small either by making T small enough or

by making the time constant L/Rl large enough.

5. In conclusion, the average output voltage vo (t) equals DVs , duty cycle × input voltage, and

there’s a ripple whose first harmonic can be made arbitrarily small by suitable design of the

switching frequency or the circuit time constant.

6. In practice, the circuit also has a capacitor, as in Figure 5.23. This is called a DC-DC buck

converter.

7. Left out of the discussion so far is that in reality Vs is not constant—the battery drains—and

Rl is not a fully realistic model for a load. In practice a controller is designed in a feedback

loop from vo to switch S. A battery drains fairly slowly, so it is reasonable to assume Vs is

constant and let the controller make adjustments accordingly. As for the load, a more realistic

model includes a current source, reflecting the fact that the load draws current.

8. In practice, the controller can be either analog or digital. Consider analog control. The block

diagram is Figure 5.24. The controller’s job is to regulate the voltage vo , ideally making

vo = Vref , by adjusting the duty cycle; the load current il is a disturbance.

Figure 5.24: Analog controller for voltage regulation.

9. Let v1 denote the square-wave input voltage to the circuit. Also, let iL denote the inductor

current. Then Kirchhoff’s current law at the upper node gives

iL = il + vo /Rl + C dvo /dt

and Kirchhoff's voltage law gives

L diL /dt + vo = v1 .

Define the state x = (vo , iL ). Then the preceding two equations can be written in state-space

form as

ẋ = Ax + B1 il + B2 v1 (5.5)

vo = Cx, (5.6)

where

A = [ −1/(Rl C)  1/C ; −1/L  0 ],  B1 = [ −1/C ; 0 ],  B2 = [ 0 ; 1/L ],  C = [ 1  0 ].

Thus the transfer function from il to vo is

C(sI − A)⁻¹ B1 = −(s/C) / (s² + s/(Rl C) + 1/(LC))

and from v1 to vo is

C(sI − A)⁻¹ B2 = (1/(LC)) / (s² + s/(Rl C) + 1/(LC)).
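These transfer functions can be checked at a test point by inverting sI − A explicitly (a sketch using the component values from the table below; the test frequency is my own choice):

```python
# verify C (sI - A)^{-1} B2 = (1/LC)/(s^2 + s/(Rl C) + 1/(LC)) at one point
L, Cap, Rl = 2e-6, 10e-6, 3.3
s = 1j*1e5

# sI - A as a 2x2 complex matrix, with state x = (vo, iL)
a11, a12 = s + 1/(Rl*Cap), -1/Cap
a21, a22 = 1/L, s
det = a11*a22 - a12*a21
# first row of (sI - A)^{-1} suffices, since C = [1 0]
inv_row1 = (a22/det, -a12/det)
tf_v1 = inv_row1[1]*(1/L)   # multiply by B2 = [0; 1/L]
tf_formula = (1/(L*Cap))/(s*s + s/(Rl*Cap) + 1/(L*Cap))
assert abs(tf_v1 - tf_formula) < 1e-9*abs(tf_formula)
```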

Figure 5.25: Pulse-width modulator.

Figure 5.26: Analog form of PWM.

10. Let us emphasize that the system with inputs (v1 , il ) and output vo is linear time-invariant.

But v1 itself depends on the duty cycle. We now allow the duty cycle to vary from period

to period. The controller is made of two parts: a linear time-invariant (LTI) circuit and

a nonlinear pulse-width modulator (PWM); see Figure 5.25. The PWM works like this:

The continuous-time signal d(t), which is the output of the LTI controller, is compared to a

sawtooth waveform of period T , producing v1 (t) as shown in Figure 5.26. It is assumed that

d(t) stays within the range [0, 1], and the slope of the sawtooth is 1/T . We have in this way

arrived at the block diagram of the control system: The plant block and notation refer to

the state-space equations (5.5), (5.6). Every block here is linear time-invariant except for the

PWM.

11. Linearization. The task now is to linearize the plant. But the system does not have an

equilibrium, that is, a state where all signals are constant. This is because by its very nature

v1 (t) cannot be constant because it is generated by a switch. What we should do is linearize

Figure 5.27: Block diagram of analog control system.


12. Current practice is not to linearize exactly, but to approximate and linearize in an ad hoc manner. Consider Figure 5.27, the block diagram of the analog control system. To develop a small-signal model, assume the controller signal d(t) equals a constant, d(t) = D, the nominal duty cycle about which we shall linearize; assume the load current il(t) equals zero; and assume the feedback system is in steady state (of course, for this to be legitimate the feedback system has to be stable). Then v1(t) is periodic, and its average value equals DVs. The DC gain from v1 to vo equals 1, and so the average value of vo(t) equals DVs too. Assume D is chosen so that the average regulation error equals 0, i.e., DVs = Vref. Now let d(t) be perturbed away from the value D:

d(t) = D + ∆d(t).

Assume the switching frequency is sufficiently high that ∆d(t) can be approximated by a constant over any switching period [kT, (k + 1)T). Looking at Figure 5.26, we see that, over the period [kT, (k + 1)T), the average of v1(t) equals Vs d(t). Thus, without a very strong justification, we approximate v1(t) by its average:

v1(t) ≈ Vs d(t) = DVs + Vs ∆d(t).

Define v̄1 = DVs and ∆v1 (t) = Vs ∆d(t). Likewise we write x, il , and vo as constants plus

perturbations:

x = x̄ + ∆x

il = 0 + ∆il

vo = v̄o + ∆vo .

With these approximations, the steady-state and small-signal equations of the plant become

v̄o = DVs
∆ẋ = A∆x + B1 ∆il + B2 Vs ∆d
∆vo = C∆x.

The block diagram of this small-signal linear system is shown in Figure 5.28.
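The claim above that the DC gain from v1 to vo equals 1 can be checked numerically. This Python sketch assumes the parameter values listed in item 13 and a state ordering x = (vo, iL); the A matrix is the standard averaged buck-converter model consistent with the B1, B2, C of the MATLAB listing, not a matrix given explicitly in the text.

```python
import numpy as np

# parameter values from item 13; A is the standard averaged buck model
L, Cap, Rl, Vs, D = 2e-6, 10e-6, 3.3, 8.0, 0.45
A = np.array([[-1 / (Rl * Cap), 1 / Cap],
              [-1 / L, 0.0]])
B2 = np.array([[0.0], [1 / L]])
C = np.array([[1.0, 0.0]])

# equilibrium of x' = A x + B2 * vbar1 with average input vbar1 = D * Vs
xbar = -np.linalg.solve(A, B2 * D * Vs)
vbar_o = (C @ xbar).item()
print(vbar_o)  # 3.6 = D * Vs, i.e., unity DC gain from v1 to vo
```

The second state component, x̄2 ≈ 1.09 A, is the nominal inductor (load) current, consistent with the value computed in item 14.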

13. Design. The design of a buck converter is quite involved, but engineering firms have worked out recipes in equation form for the sizing of circuit components. We are concerned only with the feedback controller design, so we start with given parameter values. The following are typical:


Figure 5.28: Block diagram of small-signal system.

Figure 5.29: Bode plot (magnitude in dB and phase in degrees versus frequency, 10^4 to 10^7 rad/s).

Vs = 8 V
L = 2 µH
C = 10 µF
Rl = 3.3 Ω
D = 0.45
1/T = 1 MHz

The transfer function from ∆d to ∆vo is second order, with the Bode plot shown in Figure 5.29. If the loop were closed with a controller transfer function of 1, the phase margin would be only 11◦.
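The 11◦ figure can be reproduced by hand. The sketch below assumes the standard second-order model P(s) = ωn²/(s² + 2ζωn s + ωn²) with ωn² = 1/(LC) and 2ζωn = 1/(Rl C), and (as in the MATLAB listing at the end of the section) no factor of Vs in the loop gain:

```python
import math

L, Cap, Rl = 2e-6, 10e-6, 3.3
wn = 1 / math.sqrt(L * Cap)        # natural frequency, rad/s
zeta = 1 / (2 * Rl * Cap * wn)     # damping ratio

# gain crossover: |P(jw)| = 1  =>  w = wn * sqrt(2 - 4*zeta^2)
wc = wn * math.sqrt(2 - 4 * zeta**2)
phase_lag = math.degrees(math.atan2(2 * zeta * wn * wc, wn**2 - wc**2))
pm = 180 - phase_lag               # phase margin with unity controller
print(round(pm))  # 11
```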

14. Typical closed-loop specifications involve the recovery from a step demand in load current. For a step disturbance in ∆il, the output voltage ∆vo will droop. The droop should be limited in amplitude and duration, while the duty cycle remains within the range from 0 to 1. The nominal value of the load voltage is VsD = 3.6 V; therefore the nominal load current is VsD/Rl = 1.09 A. For our tests, we used a step of magnitude 20% of this nominal value. Shown in Figure 5.30 is the test response of ∆vo(t) for the unity controller. The response is very



Figure 5.30: Step response from ∆iL to ∆vo for unity controller.

oscillatory, with a long settling time. The problem with this plant is that it is very lightly damped: the damping ratio is only 0.07. The sensible and simple solution is to increase the damping by using a derivative controller. The controller K(s) = 7 × 10^-6 s increases the damping to 0.8 without changing the natural frequency. Such a controller causes difficulty in computer-aided design because it does not have a state-space model. The obvious fix is to include a fast pole:

K(s) = (7 × 10^-6 s)/(10^-10 s + 1).
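The two damping figures quoted above can be checked directly. Assuming again the second-order model with ωn² = 1/(LC) and 2ζωn = 1/(Rl C), and with the derivative gain k = 7 × 10^-6 applied around the loop (fast pole neglected, and, as in the MATLAB listing, no Vs factor in the loop gain), the closed-loop characteristic polynomial is s² + (1/(Rl C) + k ωn²)s + ωn²:

```python
import math

L, Cap, Rl = 2e-6, 10e-6, 3.3
wn = 1 / math.sqrt(L * Cap)
zeta = 1 / (2 * Rl * Cap * wn)
print(round(zeta, 2))  # 0.07: open-loop damping ratio

k = 7e-6  # derivative gain
zeta_cl = (1 / (Rl * Cap) + k * wn**2) / (2 * wn)
print(round(zeta_cl, 2))  # about 0.85, in line with the 0.8 quoted above
```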

The resulting test response in output voltage is in Figure 5.31. Moreover, the duty cycle stayed within the required range. Finally, we emphasize that the design must be tested by a full-scale Simulink model. The MATLAB program for this example is given below.

% Circuit values
clear
V_s=8;
L=2*10^(-6);
C=10^(-5);
R_l=3.3;
D=.45;
T=10^(-6);
% The original listing omits A; the matrix below is the standard averaged
% buck-converter model with state x = (v_o, i_L), consistent with B1, B2, CC.
A=[-1/(R_l*C) 1/C; -1/L 0];
B1=[-1/C; 0];


Figure 5.31: Step response from ∆iL to ∆vo for designed controller (volts versus seconds; the plot annotation indicates recovery within about 10 switching cycles).

B2=[0;1/L];

CC=[1 0];

s=tf(’s’);

[num,den]=ss2tf(A,B2,CC,0);

P=tf(num,den);

[num,den]=ss2tf(A,B1,CC,0);

P1=tf(num,den);

clf

% bode(P)

% Specs:

% 1. Nominal v_o is V_s*D. Thus nominal current is V_s*D/R_l.
%    Say the step in Delta i_l is 20% of this.
% 2. Number of cycles to recover.
% 3. d must remain between 0 and 1.

% C is used for the capacitor.

% The problem is P is underdamped: zeta = 0.07.

% Therefore add derivative control to make zeta 0.8.


K=7*10^(-6)*s/(10^(-10)*s+1);
PK=P*K;
Q1=P1/(1+PK); % transfer function from load current to output voltage
Q1=minreal(Q1); % pole-zero cancellation
% The original listing omits Q2; assuming the feedback law d = D - K*Delta v_o,
% the transfer function from load current to duty-cycle perturbation is
Q2=-K*Q1;
Q2=minreal(Q2);
clf
t=0:10^(-7):5*10^(-5);
v=0.2*(V_s*D/R_l)*step(Q1,t); % output voltage perturbation
figure(1), plot(t,v)
d=D+0.2*(V_s*D/R_l)*step(Q2,t); % duty cycle
figure(2), plot(t,d)
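For readers without MATLAB, the closed-loop load-step computation can be reproduced in Python with NumPy/SciPy; the pole-zero cancellation that minreal performs is carried out by hand in the polynomial algebra below (this is an illustrative re-derivation, not the author's code):

```python
import numpy as np
from scipy import signal

V_s, L, Cap, R_l, D = 8.0, 2e-6, 1e-5, 3.3, 0.45
wn2 = 1 / (L * Cap)

# P: Delta v_1 -> Delta v_o;  P1: Delta i_l -> Delta v_o (same denominator)
Pd = [1.0, 1 / (R_l * Cap), wn2]
Pn = [wn2]
P1n = [-1 / Cap, 0.0]

# controller K(s) = 7e-6 s / (1e-10 s + 1)
Kn = [7e-6, 0.0]
Kd = [1e-10, 1.0]

# Q1 = P1/(1 + P*K); the common factor Pd cancels, leaving
# Q1 = P1n*Kd / (Pd*Kd + Pn*Kn)
Q1n = np.polymul(P1n, Kd)
Q1d = np.polyadd(np.polymul(Pd, Kd), np.polymul(Pn, Kn))

t = np.linspace(0.0, 5e-5, 500)
t, y = signal.lti(Q1n, Q1d).step(T=t)
v = 0.2 * (V_s * D / R_l) * y   # 20% load step, as in the text
print(v.min())  # droop of a few tens of millivolts, recovering to zero
```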

5.6 Problems

1. Take P(s) = 0.1/(s^2 + 0.7s + 1). Design a lag compensator C(s) to reduce the DC gain from r

to e and to increase the phase margin. Include all Bode plots, together with closed-loop step

responses.

2. For the plant P(s) = 1/s^2 design a lead compensator to get a gain crossover frequency of

10 rad/s. Make the phase margin as large as possible. Include all Bode plots, together with

closed-loop step responses.

3. For the plant P (s) = 1/(s + 1) design a PID compensator to get a gain crossover frequency

of 2 rad/s and a phase margin as large as possible. Include all Bode plots, together with

closed-loop step responses.

4. The plant transfer function is P(s) = 1/(s(s + 1)) and the controller transfer function C(s) is the product KC1(s). Design a gain K and a lag compensator C1(s) to achieve a phase margin of 40◦ and a steady-state tracking error of 5% for a unit ramp input r.

5. Consider the plant P(s) = 1/(τs + 1), a simple first-order lag with time constant τ and unity DC gain. Let us say that τ = 10 and that this represents a large time constant, i.e., this plant is sluggish. Mathematically, there is no obstacle in speeding this plant up without limit. For example, take

C(s) = K(τs + 1)/s.


Then

P(s)C(s)/(1 + P(s)C(s)) = 1/((1/K)s + 1).

This also has unity DC gain but its time constant is 1/K, which can be arbitrarily small.

What’s the catch, or can we really make a Boeing 747 fly like a hummingbird?
