Preface
1. Introduction
2. Modeling Dynamic Systems
3. Analysis Methods for Dynamic Systems
4. Analog Control System Performance
5. Analog Control System Design
6. Analog Control System Components
7. Digital Control Systems
8. Digital Control System Performance
9. Digital Control System Design
10. Digital Control System Components
11. Advanced Design Techniques and Controllers
12. Applied Control Methods for Fluid Power Systems
ISBN: 0-8247-0661-7
Headquarters
Marcel Dekker, Inc.
270 Madison Avenue, New York, NY 10016
tel: 212-696-9000; fax: 212-685-4540
The publisher offers discounts on this book when ordered in bulk quantities. For more
information, write to Special Sales/Professional Marketing at the headquarters address
above.
Neither this book nor any part may be reproduced or transmitted in any form or by any
means, electronic or mechanical, including photocopying, microfilming, and recording, or
by any information storage and retrieval system, without permission in writing from the
publisher.
This book introduces the theory, design, and implementation of control systems. Its
primary goal is to teach students and engineers how to effectively design and imple-
ment control systems for a variety of dynamic systems.
The book is geared mainly toward senior engineering students who have had
courses in differential equations and dynamic systems modeling and an introduction
to matrix methods. The ideal background would also include some programming
courses, experience with complex variables and frequency domain techniques, and
knowledge of circuits. For students new to or out of practice with these concepts and
techniques, sufficient introductory material is presented to facilitate an understanding of how they work. In addition, because of the book's thorough treatment of the practical aspects of programming and implementing various controllers, many engineers (from various backgrounds) should find it a valuable resource for designing
and implementing control systems while in the workplace.
The material herein has been chosen for, developed for, and tested on senior-
level undergraduate students, graduate students, and practicing engineers attending
continuing-education seminars. Since many engineers take only one or, at most, two
courses in control systems as undergraduate students, an effort has been made to
summarize the field's most important ideas, skills, tools, and methods, many of
which could be covered in one semester or two quarters. Accordingly, chapters on
digital controllers are included and can be used in undergraduate courses, providing
students with the skills to effectively design and interact with microprocessor-based
systems. These systems see widespread use in industry, and related skills are becom-
ing increasingly important for all engineers. Students who hope to pursue graduate
studies should find that the text provides sufficient background theory to allow an
easy transition into graduate-level courses.
Throughout the text, an effort is made to use various computational tools and
explain how they relate to the design of dynamic control systems. Many of the
problems presented are designed around the various computer methods required
to solve them. For example, various numerical integration algorithms are presented
in the discussion of first-order state equations. For many problems, Matlab is used
to show a comparative computer-based solution, often after the solution has been
developed manually. Appendix C provides a listing of common Matlab commands.
CHAPTER SUMMARIES
Each chapter begins with a list of its major concepts and goals. This list should prove
useful in highlighting key material for both the student and the instructor. An
introduction briefly describes the topics, giving the reader a big picture view of
the chapter. After these concepts are elaborated in the body of the chapter, a pro-
blem section provides an opportunity for reinforcing or testing important concepts.
Chapter 1 is an introduction to automatic control systems. The historical high-
lights of the field are summarized, mainly through a look at the pioneers who have
had the greatest impact on control system theory. The beginning of the chapter also
details the advancement of modern control theories (which resulted mainly from the
development of feasible microprocessors) and leads directly into the next two sec-
tions, which compare analog with digital and classical with modern systems. Chapter
1 concludes with an examination of various applications of control systems; these
have been chosen to show the diverse products that use controllers and the various
controllers used to make a design become a reality.
Chapter 2 summarizes modeling techniques, relating common techniques to the
representations used in designing control systems. The approach used here is differ-
ent from that of most textbooks, in that the elements common to all dynamic
systems are emphasized throughout the chapter. The general progression of this
summary is from differential equations to block diagrams to state-space equations,
which prefigures the order used in the discussion of techniques and tools for design-
ing control systems. Also included here is a subsection illustrating the limitations and
proper use of linearization. Newtonian, energy, and power flow modeling methods
are then summarized as means for obtaining the dynamic models of various systems.
Inductive, capacitive, and resistive elements are used for all the systems, with empha-
sis placed on overcoming the commonly held idea that modeling an electrical circuit
is vastly different from modeling a mechanical system. Finally, the chapter presents
bond graphs as an alternative method offering many advantages for modeling large
systems that encompass several smaller systems. Aside from its conceptual impor-
tance here, a bond graph is also an excellent tool for understanding how many
higher-level computer modeling programs are developed and thus for avoiding fun-
damental modeling mistakes when using these programs.
Chapter 3 develops some of the concepts of Chapter 2 by presenting the tech-
niques and tools required to analyze the resulting models. These tools include differ-
ential equations in the time domain; step responses for first- and second-order
systems; Laplace transforms, which are used to enter the s-domain and construct
block diagrams; and the basic block diagram blocks, which are used for developing
Bode plots. The chapter concludes by comparing state-space methods with the
analysis methods used earlier in the text.
Chapter 4 introduces the reader to closed-loop feedback control systems and
develops the common criteria by which they can be evaluated. Topics include open-
loop vs. closed-loop characteristics, effects of disturbance inputs, steady-state errors,
transient response characteristics, and stability analysis techniques. The goal of the
chapter is to introduce the tools and terms commonly used when designing and
evaluating a controller's performance.
Chapter 5 examines the common methods used to design analog control sys-
tems. In each section, root locus and frequency domain techniques are used to design
the controllers being studied. Basic controller types, such as proportional-integral-derivative (PID), phase-lag, and phase-lead, are described in terms of characteristics, guidelines, and applications. Included is a description of on-site tuning methods
for PID controllers. Pole placement techniques (including gain matrices) are then
introduced as an approach to designing state-space controllers.
Chapter 6 completes the development of the analog control section by describ-
ing common components and how they are used in constructing real control systems.
Basic op-amp circuits, transducers, actuators, and amplifiers are described, with
examples (including linear and rotary types) for each category. The focus here is
not only on solving text problems but also on ensuring that the controller in
question can be successfully implemented.
Chapter 7 brings the reader into the domain of digital control systems. A
cataloging of the various examples of digital controllers serves to demonstrate
their prevalence and the growing importance of digital control theory. Common
configurations and components of the controllers noted in these examples are then
summarized. Next, the common design methods for analog and digital controllers
are compared. If a student with a background in analog controls begins the text here,
it should help to bridge the gap between the two types of controllers. The chapter
concludes by examining the effects of sampling and introducing the z-transform as a
tool for designing digital control systems.
Chapter 8 is similar to Chapter 4, but it applies performance characteristics to
digital control systems. Open- and closed-loop characteristics, disturbance effects,
steady-state errors, and stability are again examined, but this time taking into
account sample time and discrete signal effects.
Chapter 9, like Chapter 5, focuses on PID, phase-lag, and phase-lead control-
lers; in addition, it presents direct design methods applicable to digital controllers.
Controller design methods include developing the appropriate difference equations
needed to enter into the implementation stage. Also included is a discussion of the
effects of sample time on system stability.
Chapter 10 concludes the digital section by presenting the common compo-
nents used in implementing digital controllers. Computers, microcontrollers, and
programmable logic controllers (PLCs) are presented as alternatives. Methods for
programming each type are also discussed, and a connection is drawn between the
algorithms developed in the previous chapter and various hardware and software
packages. Digital transducers, actuators, and amplifiers are examined relative to
their role in implementing the controllers designed in the previous chapter. The
chapter concludes with a discussion of pulse-width modulation, its advantages and
disadvantages, and its common applications.
Chapter 11 is an introduction to advanced control strategies. It includes a short
section illustrating the main characteristics and uses of various controllers, including
feedforward, multivariable, adaptive, and nonlinear types. For each controller, suf-
ficient description is provided to convey the basic concepts and motivate further
study; some are described in greater detail, enabling the reader to implement them
as advanced controllers.
ACKNOWLEDGMENTS
It is perilous to begin listing the individuals and organizations that have influenced
this work, since I will undoubtedly overlook many valuable contributors. Several,
however, cannot go without mention. From the start, my parents taught me the
values of God, family, friendship, honesty, and hard work, which have remained
with me to this day. I am indebted to my mother, especially for her unending
devotion to family, and to my father, for living out the old adage "An honest day's work for an honest day's pay" while demonstrating an uncanny ability to
keep machines running well past their useful life.
Numerous teachers, from elementary to graduate levels, have had a part in instilling in me the joy of teaching and helping others; if only I had recognized it at the time! Coaches Brian Diemer and Al Hoekstra taught me the values of setting goals,
hard work, friendship, and teamwork. Professors Beachley, Fronczak, and Lorenz,
along with my fellow grad students at the University of Wisconsin–Madison, were
instrumental in a variety of ways, both academic and personal. The faculty and staff
at the Milwaukee School of Engineering have been a joy to work with and have spent
many hours helping and encouraging me. In particular, Professors Brauer, Ficken,
Labus, and Tran, and the staff at the Fluid Power Institute have been important to
me on a personal level. The staff at Marcel Dekker, Inc., has been very helpful in
leading me through the authorial process for the first time.
Finally, saving the most deserving until the end, I would like to express my
gratitude to my wife, Kim, and my children, Rebekah, Matthew, and Rachel. Being
married to an engineer (especially one writing a book) is not the easiest task, and
Kim provides the balance, perspective, and strength necessary to make our house a
home. Finally, I am thankful for the pure joy I feel when I open the door after a long
day and the children yell out, "Daddy, you're home!" Thank you, Lord.
1.1 OBJECTIVES
Provide motivation for developing skills as a controls engineer.
Develop an appreciation of the previous work and history of automatic
controls.
Introduce terminology associated with the design of control systems.
Introduce common controller configurations and components.
Present several examples of controllers available for common applications.
1.2 INTRODUCTION
Automatic control systems are implemented virtually everywhere, from work to
play, from homes to vehicles, from serious applications to frivolous applications.
Engineers having the necessary skills to design and implement automatic controllers
will create new and enhanced products, changing the way people live. Controllers are
finding their way into every aspect of our lives. From toasting our bread and driving
to work to riding the train and traveling to the moon, control theory has been
applied in an effort to improve the quality of life. Control engineers may properly be termed "system engineers," since it is a system that must be controlled. These
systems may be a hard disk read head on your computer, your CD player's laser
position, your vehicle (many systems), a factory production process, inventory con-
trol, or even the economy. Good engineers, therefore, must understand the modeling
of systems. Modeling might include aeronautical, chemical, mechanical, environ-
mental, civil, electrical, business, societal, biological, and political systems, or pos-
sibly a combination of these. It is an exciting field filled with many opportunities. For
maximum effectiveness, control engineers should understand the similarities (laws of
physics, etc.) inherent in all physical systems. This text seeks to provide a cohesive
approach to modeling many different dynamic systems.
Almost all control systems share a common configuration of basic compo-
nents. A closed-loop single-input, single-output (SISO) system, as shown in Figure
1, is an example of the basic components commonly required when designing control
systems. This may be modified to include items like disturbance inputs, external
inputs (i.e., wind, load, supply pressure), and intermediate physical system variables.
The concept of a control system is quite simple: to make the output of the system
equal to the input (command) to the system. In many products we find servo- as a
prefix describing a particular system (servomechanism, servomotor, servovalve, etc.).
The prefix servo- is derived from the Latin word "servus," meaning slave or servant. The output in this case is a slave and follows the input.
The command may be electrical or mechanical. For electrical signals, op-amps
or microprocessors are commonly used to determine the error (or perform as the
summing junction in terms of the block diagram). In a mechanical system, a lever
might be used to determine the error input to the controller. As we will see, the
controller itself may take many forms. Although electronics are becoming the primary components, physical components can also be used to develop proportional-integral-derivative (PID) controllers. An example is sometimes seen in pneumatic
systems where bellows can be used as proportional and integral actions and flow
valves as derivative actions.
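The three control actions just described can be sketched in software as well as in pneumatic hardware. The following is a minimal, illustrative discrete PID update in Python; the gains, sample time, and signal values are hypothetical, not taken from the text:

```python
class PID:
    """Minimal discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement                      # summing junction
        self.integral += error * self.dt                    # integral action
        derivative = (error - self.prev_error) / self.dt    # derivative action
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Proportional-only example: output is simply proportional to the error
controller = PID(kp=2.0, ki=0.0, kd=0.0, dt=0.01)
u = controller.update(setpoint=1.0, measurement=0.75)   # error = 0.25, u = 0.5
```

The error computation plays the role of the summing junction in the block diagram, and each term contributes one of the three actions.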
The advantages of electronics, however, are numerous. Electronic controllers
are cheaper, more flexible, and capable of discontinuous and adapting algorithms.
Today's microprocessors are capable of running multiple controllers and have algorithms that can be updated simply by reprogramming the chip. The amplifier and
actuator are critical components to select properly. They are prone to saturation and
failure if not sized properly. The physical system may be a mathematical model
during the design phase, but ultimately the actuator must be capable of producing
some input into the physical device that has a direct effect on the desired output.
Other than simple devices with many inherent assumptions, the physical system can
seldom be represented by one simple block or equation. Finally, a sensor must be
available that is capable of measuring the desired output. It is difficult to control a
variable that cannot be measured. Indirect control through observers adds complexity and limits performance. Sensor development is in many ways the primary technology enabling advanced controllers. Additionally, the sensor must be capable of enduring the environment in which it is placed.
To translate this into something we are all familiar with, let's modify the
general block diagram to represent a cruise control system found on almost all
automobiles. This is shown in Figure 2. When you decide to activate your cruise
control, you accelerate the vehicle to its desired operating speed and press the set
switch, which in turn signals to the controller that the current voltage is the level at
which you wish to operate. The controller begins determining the error by compar-
ing the set point voltage with the feedback voltage from the speed transducer. The
providing a constant flow through an outlet (fixed orifice). This constant flow was
accumulated in a second tank as a measure of time. These water clocks were used
until mechanical clocks arrived in the fourteenth century. Although commonly clas-
sified as control systems, designs during this period were intuitively based and math-
ematical/analytical techniques had yet to be applied to solving more complex
problems.
Two things happened late in the eighteenth century that would turn out to be of critical significance when combined in the next century. First, in 1788 James Watt (1736–1819) designed the centrifugal fly ball governor for the speed control of a steam engine. The relatively simple but very effective device used centrifugal forces to move rotating masses outward, thereby causing the steam valve to close, resulting in a constant engine speed. Although earlier speed and pressure regulators were developed [windmills in Britain, 1745; flow of grain in mills, sixteenth century; temperature control of furnaces by Cornelis J. Drebbel (1572–1634) of Holland, seventeenth century; and pressure regulators for steam engines, 1707], Watt's governor was externally visible, and it became well known throughout Europe, especially in the engineering discipline. Earlier steam engines were regulated by hand and difficult to use in the developing industries, and the start of the Industrial Revolution is commonly attributed to Watt's fly ball governor. Second, in and near the eighteenth century, the mathematical tools required for analyzing control systems were developed. Building on the earlier development of differential equations by Isaac Newton (1642–1727) and Gottfried Leibniz (1646–1716) in the late seventeenth and early eighteenth centuries, Joseph Lagrange (1736–1813) began to use differential equations to model and analyze dynamic systems during the time that Watt developed his fly ball governor. Lagrange's work was further developed by Sir William Hamilton (1805–1865) in the nineteenth century.
The significant combination of these two events came in the nineteenth century when George Airy (1801–1892), professor at Cambridge and Royal Astronomer at Greenwich Observatory, built a speed control unit for a telescope to compensate for the rotation of the earth. Airy documented the possibility of unstable motion when using feedback in his paper "On the Regulator of the Clock-work for Effecting Uniform Movement of Equatorials" (1840). After Airy, James Maxwell (1831–1879) systematically analyzed the stability of a governor resembling Watt's governor. He published a mathematical treatment, "On Governors," in the Proceedings of the Royal Society (1868) in which he linearized the differential equations of motion, found the characteristic equation, and demonstrated that the system is stable if the roots of the characteristic equation have a negative real component (see Sec. 3.4.3.1). This is commonly regarded as the founding work in the field of control theory.
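Maxwell's criterion translates directly into a numerical check: compute the roots of the characteristic equation and inspect their real parts. A sketch using NumPy (the example polynomials are illustrative, not from the text):

```python
import numpy as np

def is_stable(coeffs):
    """Maxwell's criterion: stable iff every root of the characteristic
    polynomial has a negative real part. Coefficients are highest power first."""
    roots = np.roots(coeffs)
    return bool(np.all(roots.real < 0))

# s^2 + 3s + 2 = (s + 1)(s + 2): roots -1 and -2, both negative -> stable
print(is_stable([1, 3, 2]))    # True
# s^2 - s + 2: roots 0.5 +/- j*1.32..., positive real part -> unstable
print(is_stable([1, -1, 2]))   # False
```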
From here the mathematical theory of feedback was developed by names still associated with the field today. Once Maxwell described the characteristic equation, Edward Routh (1831–1907) developed a numerical technique for determining system stability using the characteristic equation. Interestingly, Routh and Maxwell overlapped at Cambridge; both began at Peterhouse, but shortly after Routh's arrival Maxwell was advised to transfer to Trinity because Routh was his equal in mathematics. Routh was Senior Wrangler (highest academic marks), whereas Maxwell was Second Wrangler (second highest academic marks). At approximately the same time in Germany, and unaware of Routh's work, Adolf Hurwitz (1859–1919), upon a request from Aurel Stodola (1859–1952), also solved and published the method by which system stability could be determined without solving the differential equations. Today this method is commonly called the Routh–Hurwitz stability criterion (see Sec. 4.4.1). Finally, Aleksandr Lyapunov (1857–1918) presented Lyapunov's methods in 1899 as a means of determining the stability of ordinary differential equations. Relative to the control of dynamic systems, nonlinear systems in particular, the importance of his work on differential equations, potential theory, stability of systems, and probability theory is only now being realized.
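The Routh–Hurwitz method determines stability from the polynomial coefficients alone, without solving for the roots. A minimal sketch of the Routh array in Python, assuming for simplicity that no zero appears in the first column (the zero-entry special cases need extra handling):

```python
def routh_stable(coeffs):
    """Routh-Hurwitz test: the system is stable iff the first column of the
    Routh array has no sign changes. Coefficients are highest power first.
    Assumes no zero arises in the first column (special cases not handled)."""
    n = len(coeffs)
    width = (n + 1) // 2
    # First two rows interleave the coefficients, padded with zeros.
    rows = [list(coeffs[0::2]) + [0.0] * (width - len(coeffs[0::2])),
            list(coeffs[1::2]) + [0.0] * (width - len(coeffs[1::2]))]
    for _ in range(n - 2):
        above, top = rows[-1], rows[-2]
        # Each new entry is a 2x2 determinant divided by the pivot above it.
        new = [(above[0] * top[j + 1] - top[0] * above[j + 1]) / above[0]
               for j in range(width - 1)] + [0.0]
        rows.append(new)
    first_col = [r[0] for r in rows]
    # Stable iff every first-column entry shares the sign of the leading one.
    return all(x * first_col[0] > 0 for x in first_col)

# s^3 + 2s^2 + 3s + 4: first column has no sign change -> stable
print(routh_stable([1, 2, 3, 4]))   # True
# s^3 + s^2 + 2s + 8: first column changes sign -> unstable
print(routh_stable([1, 1, 2, 8]))   # False
```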
With the foundation formed, the twentieth century has seen the most explosive growth in the application and further development of feedback control systems. Three factors helped to fuel this growth: the development of the telephone, World War II, and microprocessors. Around 1922, the Russian Nicholas Minorsky (1885–1970) analyzed and developed three-mode controllers for automatic ship steering systems. From his work the foundation for the common PID controller was laid. Near the same time, and driven largely by the development of the telephone, Harold Black (1898–1983) invented the electronic feedback amplifier and demonstrated the usefulness of negative feedback in amplifying the voice signal as required for traveling the long distances over wire. Along with Harold Hazen's (1901–1980) paper on the theory of servomechanisms, this period marked a major increase in the interest and study of automatic control theory. Black's work was further built on by two pioneers of the field, Hendrik Bode (1905–1982) and Harry Nyquist (1889–1976). In 1932, working at Bell Laboratories, Nyquist developed his stability criterion based on the polar plot of a complex function. Shortly thereafter, in 1938, Bode used magnitude and phase frequency response plots and introduced the idea of gain and phase stability margins (see Sec. 4.4.3). The impact of their work is evident in the commonplace use of Nyquist and Bode plots when designing and analyzing automatic control systems in the frequency domain.
The first large-scale application of control theory was during World War II, in
which feedback amplifier theory and PID control actions were combined to deal with
the new complexity of aircraft and radar systems. Although much of the work did
not surface until after the war, great advances were made in the control of industrial
processes and complex machines (airplanes, radar systems, artillery guidance). Soon
after the war, W. Evans (1920–1999) published his paper "Graphical Analysis of Control Systems" (1948), which presented the techniques and rules for graphically tracing the migrations of the roots of the characteristic equation. The root locus method remains an important tool in control system design (see Sec. 4.4.2). At this point in history, the root locus and frequency response techniques were incorporated into the general engineering curriculum, textbooks were written, and the general class of techniques came to be known as classical control theory.
While classical control theory was maturing, work accomplished in the late 1800s (time domain differential equation techniques) was being revisited with the advent of the computer. Lyapunov's work, combined with the capabilities of the computer, led to his contribution becoming more fully realized. The incentive arose from the need to effectively control nonlinear multiple-input, multiple-output (MIMO) systems. While classical techniques are very effective for linear time-invariant (LTI) SISO systems, the complexity increases rapidly when the attempt is made to apply these techniques to nonlinear, time-variant, and/or MIMO systems. The computationally intensive but simple-to-program steps used in the time domain are well adapted to these complex systems when coupled with microprocessors.
we talk about setting the thermostat (controller) at a specific temperature with the
idea that the temperature inside the house will match the thermostat setting, even
though this is not the case since our furnace is either on or off. Since temperature is a
continuous (analog) signal, this is the intuitive approach. Even with digital control-
lers (or in the case of older thermostats a nonlinear electromechanical device) we
generally discuss the performance in terms of continuous signals and/or measure-
ments. True analog controllers have a continuous signal input and continuous signal
output. Any desired output level, at least in theory, is achievable. Many mechanical
devices (Watt's steam engine governor, for example) and electrical devices (operational-amplifier feedback devices) fall into this category. Analog controllers are found in
many applications, and for LTI and SISO systems they have many advantages. They
are simple, reliable devices and, in the case of purely mechanical feedback systems, do
not require additional support components (i.e., regulated voltage supply, etc.). The
common PID controller is easily constructed using analog devices (operational
amplifiers), and many control problems are satisfactorily solved using these control-
lers. The majority of controllers in use today are of the PID type (both analog and
digital). Perhaps as important, for those who wish to pursue a career in control systems, is the fact that a secure grasp of analog control theory makes many of the advanced nonlinear and/or digital control schemes intuitive.
A digital controller, on the other hand, generally involves processing an analog
signal to enable the computer to effectively use it (digitization), and when the com-
puter is finished, it must in many cases convert the signal from its digital form back
to its native analog form. Each time this conversion takes place, another source of
error, another delay, and a loss of information occurs. As the speed of processors
and the resolution of converters increase, these issues are minimized. Some tech-
niques lend themselves quite readily to digital controllers as one or more of their
signals are digital from the beginning. For example, any device or actuator capable
of only on or off operation can simply use a single output port whose state is either
high or low (although amplifiers, protection devices, etc. are usually required). As
computers continue to get more powerful while prices still decline, digital controllers
will continue to make major inroads into all areas of life.
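The conversion losses mentioned above can be illustrated with an ideal converter model: each A/D–D/A round trip snaps the signal to the nearest of a finite set of levels. The resolution and full-scale range below are arbitrary illustrations, not taken from any particular device:

```python
def quantize(value, full_scale=5.0, bits=10):
    """Model an ideal ADC/DAC round trip: an analog value in [0, full_scale]
    is mapped to the nearest of 2**bits levels, losing information each pass."""
    levels = 2 ** bits
    step = full_scale / (levels - 1)
    clamped = min(max(value, 0.0), full_scale)
    code = round(clamped / step)        # A/D conversion to an integer code
    return code * step                  # D/A reconstruction of the signal

v = 3.14159
print(abs(quantize(v) - v))             # small but nonzero quantization error
print(abs(quantize(v, bits=16) - v))    # more bits -> smaller error
```

As the text notes, increasing converter resolution (more bits) shrinks the quantization error but never removes it entirely.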
Digital controllers, while having some inherent disadvantages as mentioned
above, have many advantages. It is easy to perform complex nonlinear control
algorithms, a digital signal does not drift, advanced control techniques (fuzzy
logic, neural nets) can be implemented, economical systems are capable of many
inputs and outputs, friendly user interfaces are easily implemented, they have data
logging and remote troubleshooting capabilities, and since the program can be
changed, it is possible to update or enhance a controller without making any physical adjustments. As this book endeavors to teach, building a good foundation of mathematical modeling and intuitive classical tools will enable control system designers to
move confidently to later sections and apply themselves to digital and advanced
control system design.
used in analyzing different control strategies. System representations like state space
are presented alongside transfer functions in this text to develop the skills leading
to modern control theory techniques. State space, while useful for LTI SISO systems,
lends itself more readily to topics included in modern control theory. Modern con-
trol theory is a time-based approach more applicable to linear and nonlinear,
MIMO, time-invariant, or time-varying systems.
When we look at classical control theory and recognize its roots in feedback
amplifier design for telephones, it comes as no surprise to find the method based in
the frequency domain using complex variables. The names of those playing a pivotal
role have remained, and terms like Bode plots, Nichols plots, Nyquist plots, and
Laplace transforms are common. Classical control techniques have several advan-
tages. They are much more intuitive to understand and even allow for many of the
important calculations to be done graphically by hand. Once the basic terminology
and concepts are mastered, the jump to designing effective, robust, and achievable
designs is quite easy. Dealing primarily with transfer functions as the method to
describe physical system behavior, both open-loop and closed-loop systems are easily
analyzed. Systems are easily connected using block diagrams, and only the input/
output relationships of each system are important. It is also relatively easy to take
experimental data and accurately model the data using a transfer function. Once
transfer functions are developed, all of the tools like frequency plots and root locus
plots are straightforward and intuitive. The price for this convenience is reflected in the accompanying limitations. With some exceptions, classical techniques are best suited to LTI SISO systems. The process rapidly becomes more trial-and-error and less intuitive when nonlinear, time-varying, or MIMO systems are considered. Even
so, techniques have been developed that allow these systems to be analyzed. Based
on its strengths and weaknesses, it remains an effective, and quite common, means of
introducing and developing the concept of automatic controller design.
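The block diagram algebra described here reduces to polynomial arithmetic on transfer functions. As an illustrative sketch (the plant below is an arbitrary first-order example, not from the text), the closed-loop transfer function T = G/(1 + GH) can be formed directly from the numerator and denominator polynomials:

```python
import numpy as np

def closed_loop(num_g, den_g, num_h, den_h):
    """Closed-loop transfer function T = G / (1 + G*H) from block-diagram
    algebra. Each polynomial is a coefficient list, highest power first:
    T = (num_g * den_h) / (den_g * den_h + num_g * num_h)."""
    num = np.polymul(num_g, den_h)
    den = np.polyadd(np.polymul(den_g, den_h), np.polymul(num_g, num_h))
    return num, den

# Plant G(s) = 1 / (s + 1) with unity feedback H(s) = 1:
num, den = closed_loop([1], [1, 1], [1], [1])
print(num, den)   # [1] and [1 2]: T(s) = 1 / (s + 2)
```

Note how feedback moved the pole from s = -1 to s = -2, a first taste of how closing the loop reshapes system dynamics.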
Modern control theory has developed quickly with the advent of the micro-
processor. Whereas classical techniques can graphically be done by hand, modern
techniques require the processing capabilities of a computer for optimal results. As
systems become more complex, the advantages of modern control theory become
more evident. Because the approach is based in the time domain and, when linearized, in matrix form, implementing modern control theories is as easy for MIMO systems as for SISO systems. In terms of matrix algebra the operations are the same. As is true in other
matrix operations, programming effort remains almost the same even as system size
increases. The opposite effect is evident when doing matrix operations by hand.
Additional benefits are the adaptability to nonlinear systems using Lyapunov
theories and in determining the optimal control of the system.
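The claim that matrix-based methods scale from SISO to MIMO systems can be illustrated with a few lines of simulation: the state equations x' = Ax + Bu are integrated the same way regardless of dimension. The matrices below are an arbitrary stable example chosen for illustration:

```python
import numpy as np

def simulate(A, B, u, x0, dt, steps):
    """Euler integration of x' = A x + B u(t). The same few lines handle a
    SISO or a MIMO system; only the matrix dimensions change."""
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        x = x + dt * (A @ x + B @ u(k * dt))
    return x

# Two-input, two-state example: a stable diagonal system with constant inputs
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.eye(2)
u = lambda t: np.array([1.0, 2.0])
x = simulate(A, B, u, x0=[0.0, 0.0], dt=0.001, steps=10000)
print(x)   # approaches the equilibrium x = -inv(A) @ B @ u = [1, 1]
```

Doubling the number of states or inputs changes only the matrix sizes, not the program, which is exactly the scaling advantage claimed for modern techniques.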
Why not start then with modern control theory? First, the intuitive feel evident
in classical techniques is diminished in modern techniques. Instead of input/output
relationships, sets of matrices or first-order differential equations are used to describe
the system. Although the skills can be taught, the understanding and ramification of
different designs are less evident. It becomes more math and less design. Also,
although it is simple in theory to extend it to larger systems, in actual systems,
complete with modeling errors, noise, and disturbances, the performance may be
much less than expected. Classical techniques are inherently more robust. In using
matrices (preferred for computer programming) the system must generally be line-
arized and we end up back with the same problem inherent in classical techniques.
Finally, the simplest designs often require the knowledge of each state (parameters
describing current system behavior, i.e., position and velocity of a mass), which for
larger systems is quite often not feasible, either because the necessary sensors do not
exist or the cost is prohibitive. In this case, observers are often required, the complex-
ity increases, and the modeling accuracy once again is very important.
The approach in this book, and most others, is to develop the classical tech-
niques and then move into modern techniques and digital controls. Modern control
theories and digital computers are a natural match, each requiring the other for
maximum effectiveness. To maintain a broad base of skills, both classical and modern control theories are extended into the digital controller realm: classical techniques because that is where many of the current digital controllers have migrated from, and modern techniques because that is where many are beginning to come from.
only, no proportionality), and provide voltage (0 to 5 Vdc) or current (4 to 20 mA)
commands to any compatible actuator. As such, when designing a control system
using a system like this, you must still provide the amplifiers and actuators (i.e.,
electronic valve driver card if an electrohydraulic valve is used). Advantages are as
follows: dedicated computer and data acquisition system not required, faster and
more consistent speeds due to a dedicated microprocessor, optically isolated inputs
for protection, supplied programming interfaces, and flexibility. Many controllers
are packaged with software and cables, allowing easy programming. Disadvantages
are cost, discrete signals that require more knowledge when troubleshooting, and
complexity when compared to other systems. With the progress being made with
microprocessors and supporting electronic components, this type of controller will
continue to expand its range.
For embedded high-volume controllers, it becomes likely that similar systems
(microprocessor, signal conditioning, interface devices, and memory) will be
designed into the main system board and integrated with other functions. In this
case, more extensive engineering is required to properly design, build, and use such
controllers. An example of such applications is the automobile, where vehicle elec-
tronic control modules are designed to perform many functions specific to one (or
several) vehicle production line. The volume is large enough and the application
specific enough to justify the extra development cost. It is still common for these
applications to be developed by third-party providers (original equipment manufac-
turers).
1.6.1.2 Application-Specific Controller Examples
For common applications it may be possible to have a choice of several different off-
the-shelf controllers. These may require as little as installation and tuning before
being operational. For small production runs, proof-of-concept testing, or when time is short, application-specific controllers have several advantages. Primarily, the development work for the application has already been done (by others). Actuators, signal conditioners, fault indicators, safety devices, and packaging concerns have already been
addressed. Perhaps the largest disadvantage, besides cost, is the loss of flexibility.
Unless our application closely mirrors the intended operation, and sometimes even
then, it may become more trouble to adapt and/or modify the controller to get
satisfactory performance. In other words, each application, even within the same
type (i.e., engine speed control), is a little different, and it is impossible for the
original designer to know all the applications in advance.
For examples of this controller type, let us look at two internal combustion
(IC) engine speed controllers. First, to illustrate how specific a controller might be,
let us examine the automatic engine controller shown in Figure 4. It does not include
the actual actuators and consists only of the electronics.
Looking at its abilities will illustrate how specific it is to one task, controlling
engines. This controller is capable of starting an engine and monitoring (and
shutting down if required) oil pressure, temperature, and engine speed. It also
can be used to preheat with glow plugs (diesels), close an air gate during overspeed conditions, and provide choke during cold starts. Remember that a sensor is
required for an action on any variable to occur. The module may accept engine
speed signals based on a magnetic pickup or an alternator output. The functions
included with the controller are common ones and save the time required to develop similar functions. The packaging is such that it can be mounted in a variety
of places. The main disadvantages are limited flexibility if additional functions or packaging options are required, cost, and the requirement that all sensors be compatible with the unit.
Second, let's consider an example of a specific off-the-shelf controller that
includes an actuator(s). The control system shown in Figure 5 includes the speed
pickups (magnetic pickup), electronic controller, and actuator (proportional sole-
noid) as required for a typical installation.
Using this type of controller is as simple as providing a compatible voltage
signal proportional to the engine speed, connecting the linear proportional solenoid
to the throttle plate linkage, and tuning the controller. The controller may be pur-
chased as a PI or a PID. Which one is desired? As we will soon see, depending on
conditions, either one might be appropriate. Understanding the design and opera-
tion of control systems is important even in choosing and installing black box
systems. The advantages and disadvantages are essentially the same as the first
example in this section.
1.6.1.3 Programmable Logic Controllers (Modern)
PLCs may take many forms and no longer represent only a sequential controller
programmed using ladder logic. Modules may be purchased which might include
advanced microprocessor-based controllers such as adaptive algorithms, standard
PID controllers, or simple relays. Historically, PLCs handled large numbers of
digital I/Os to control a variety of processes, largely through relays, leading to
using the word logic in their description. Today, PLCs are usually distinguished
from other controllers through the following characteristics: rugged construction
to withstand vibrations and extreme temperatures, inclusion of most interfacing
components, and an easy programming method. Being modular, a PLC might
include several digital I/Os driving relays along with modules with an embedded
microcontroller, complete with analog I/O, current drivers, and pulse width mod-
ulation (PWM) or stepper motor drivers. An example micro-PLC, capable of more
than just digital I/O, is shown in Figure 6.
This type of PLC can be used in a variety of ways. It comes complete with
stepper motor drivers, PWM outputs, counter inputs, analog outputs, analog inputs,
256 internal relays, a serial port for programming, and the ability to accept both ladder logic and BASIC programs. Modern PLCs may be configured for large
numbers of inputs, outputs, signal conditioners, programming languages, and
speeds. They remain very common throughout a wide variety of applications.
1.6.1.4 Specific Product Example (Electrohydraulics)
Even more specific than a particular application are controllers designed for a
specific product. A common example of this is in the field of electrohydraulics.
Most electrically driven valves have several options for the type of controller available. When the valve is purchased, a choice must be made about the type of
controller, if any, desired. While it is not difficult to design and build an aftermarket controller, it is fairly difficult to get the performance and features commonly found on controllers designed for particular products. The disadvantages
are quite obvious in that the manufacturer determines all the types of mounting
styles, packaging styles, features, and speeds. In general, however, these disad-
vantages are minimized since each industry attempts to follow standards for
mounts, fittings, and power supply voltages, etc., and there is a good chance
that the support components are readily available. A distinct advantage is the
integration and range of features commonly found in these controllers. The elec-
trohydraulic example below illustrates this more fully. In addition, the manufac-
turer generally has an advantage in that the internal product specifications are
fully known.
The example valve driver/controller shown in Figure 7 is designed to interface
with several electrohydraulic valves. More common in controllers designed for spe-
cific products, the interfacing hardware, signal conditioners, amplifiers, and control-
ler algorithms are all integrally mounted on a single board. In addition, many
features are added which are specific only to the product it was designed for. In the
example shown, deadband compensation, ramp functions, linear variable displace-
ment transducer [LVDT] valve position feedback, dual solenoid PWM drivers, and
testing functions are all included on the single board. The driver card also includes
external and internal command signal interfaces, gain potentiometers for the on-
board PID controller, troubleshooting indicators (e.g., light emitting diodes [LEDs]),
and access to many extra internal signals.
Depending on the valve chosen, a specific card with the proper signals must be
chosen. The system is very specific to using one class of valves, as illustrated by the
dual solenoid drivers and LVDT signal conditioning for determining valve spool
position.
PROBLEMS
1.1 Label and describe the blocks and lines for a general controller block diagram
model, as given in Figure 8.
1.2 Describe the importance of sensors relative to the process of designing a con-
trol system.
1.3 Describe a common problem that may occur with amplifiers and actuators
when improperly selected. For the problem described, list a possible cause followed
by a possible solution.
1.4 For an automobile cruise control system, list the possible disturbances that the
control system may encounter while driving.
1.5 Choose one prominent person who played an important role in the history of
automatic controls. Find two additional sources and write a brief paragraph describ-
ing the most interesting results of your research.
1.6 Finish the phrases using either analog or digital.
a. Most physical signals are _____________ .
b. Earliest computers were _____________ .
c. For rejecting electrical noise, the preferred signal type is _____________ .
d. Signals exhibiting finite intermediate values are _____________ .
1.7 List two advantages and two disadvantages of classical control design
techniques.
1.8 List two advantages and two disadvantages of modern control design
techniques.
1.9 In several sentences, describe the significance of the microprocessor relative to
control systems presently in use.
2.1 OBJECTIVES
Present the common mathematical methods of representing physical
systems.
Develop the skills to use Newtonian physics to model common physical
systems.
Understand the use of energy concepts in developing physical system
models.
Introduce bond graphs as a capable tool for modeling complex dynamic
systems.
2.2 INTRODUCTION
Although some advanced controller methods attempt to overcome limited models,
there is no question that a good model is extremely beneficial when designing control
systems. The following methods are presented as different, but related, methods for
developing plant/component/system models. Accurate models are beneficial in simu-
lating and designing control systems, analyzing the effects of disturbances and para-
meter changes, and incorporating algorithms such as feed-forward control loops.
Adaptive controllers can be much more effective when the important model para-
meters are known.
As a minimum, the following sections should illustrate the commonality
between various engineering systems. Although units and constants may vary, elec-
trical, mechanical, thermal, liquid, hydraulic, and pneumatic systems all require the
same approach with respect to modeling. Certain components may be nonlinear in
one system and linear in another, but the equation formulation is identical. As a
result of this phenomenon, control system theory is very useful and capable of
controlling a wide variety of physical systems.
Most systems may be modeled by applying the following laws:
Conservation of mass;
Conservation of energy;
Conservation of charge;
Newton's laws of motion.
x' = dx/dt = ẋ   and   x'' = d²x/dt² = ẍ
When a slash to the right of or a dot over a variable is given, time is assumed to
be the independent variable. The number of slashes or dots represents order of the
differential. Differential equations are generally obtained from physical laws describ-
ing the system process and may be classified according to several categories, as
illustrated in Table 1.
Another consideration depends on the number of unknowns that are involved.
If only a single function is to be found, then one equation is sufficient. If there are
ratio of the output to the input, or what is called a transfer function. Each line
representing a signal is unidirectional and designated by an arrow. Since each line
represents a variable in the system, usually a physical variable with associated units,
the blocks must contain the appropriate units relative to the input and output. For
example, a pressure input to a block representing an area upon which the pressure acts requires the value of the block representing that area to be expressed in units where the block output is expressed as a force (force = pressure × area). This
relationship where the block is the ratio of the output variable to the input variable
is shown in Figure 1.
Also shown in Figure 1 is a basic summing junction, or comparator. A sum-
ming junction either adds or subtracts the input variables to determine the value of
the output variable. As is true in basic addition and subtraction operations, the units
of the variable must be the same for all inputs and output of a summing junction. In
other words, it does not make sense to add voltages and currents or pressures and
forces. Any number of inputs may be used to determine the single output. Each input should also be designated as an addition or a subtraction using + or − symbols near each line or inside the summing junction itself.
These two items allow us to construct and analyze almost every control system
that we are likely to encounter. The operations illustrated in the remaining sections
use the block and summing representations to graphically manipulate algebraic
equations. Each section will give the corresponding algebraic equations, although
in practice this is seldom done once we become familiar with the graphical opera-
tions.
2.3.2.2 Blocks in Series
A common simplification in block diagrams is combining blocks in series and repre-
senting them as a single block, as shown in Figure 2. Any number of blocks in series
may be combined as long as no branches are contained between any of the pairs of
blocks. A later section illustrates how to move branches and allow for the blocks to
be combined. Remembering that each individual block must correctly relate the units
of the input variable to the output variable, the new block formed by multiplying the
individual blocks will then relate the initial input to the final output. The simplified
block's units are obtained by multiplying the units from each individual block.
2.3.2.3 Blocks in Parallel
It is also common to find block diagrams where several blocks are constructed in
parallel. Many controllers are first formed in this way to illustrate the effect of each
part of the controller. For example, the control actions for a proportional, integral,
derivative controller can be shown using three parallel blocks where the paths repre-
sent the proportional, integral, and derivative control effects. A simple system using
two blocks is shown in Figure 3. When combining two or more blocks in parallel, the
signs associated with each input variable must be accounted for. It is possible to have
several blocks subtracting and several blocks adding when forming the new simpli-
fied block.
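As a concrete sketch of this parallel structure, a discrete-time PID controller can be written as three parallel paths acting on the error signal and summed at a single junction; the gains and time step below are illustrative assumptions, not values from the text.

```python
# A discrete-time PID controller built as three parallel paths (sketch only).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0   # running integral of the error
        self.prev_err = 0.0   # previous error, for the derivative path

    def update(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        # the three parallel paths are summed at the output junction
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=2.0, ki=1.0, kd=0.5, dt=0.1)   # assumed gains
u = pid.update(1.0)                          # one step with unit error
```

Each term in the return statement corresponds to one parallel path in the block diagram.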
2.3.2.4 Feedback Loops in Block Diagrams
Perhaps the most common block diagram operation when analyzing closed loop
control systems is the reduction of blocks in loops. Whereas the previous steps
resulted in new blocks formed by addition, subtraction, multiplication, and division,
when feedback loops are formed the structure of the system is changed. Later on
we will see how the denominator changes when the loop is closed, resulting in us
modifying the dynamics of the system. The basic steps used to simplify loops in
block diagrams are illustrated in Figure 4. Thus, we see that a new denominator is
formed that is a function of both the forward (left to right signals) path blocks and
the feedback (right to left signals) path blocks. When we design a controller we insert
a block that we can change, or tune, and thereby modify the behavior of our
physical system.
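For pure-gain blocks, the reductions described above can be checked with simple arithmetic; the snippet below is a sketch with assumed gain values, using the loop-closing rule G/(1 + G·H) for a negative-feedback loop.

```python
# Checking block-diagram reductions with scalar (pure-gain) blocks.
G1, G2, H = 4.0, 2.5, 0.1                   # assumed block gains

series = G1 * G2                            # blocks in series multiply
parallel = G1 + G2                          # parallel blocks (both '+') add
closed_loop = series / (1 + series * H)     # closing a negative-feedback loop
```

The denominator 1 + G·H is where the feedback path reshapes the system dynamics, which is exactly what a controller block placed in the loop exploits.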
Even with multiple blocks in the forward and feedback loops, we can use the
general loop-closing rule to simplify the block diagram. The exception to the rule is
when the loop has any branches and/or summing junctions within the loop itself. The
goal is to then first rearrange the blocks in equivalent forms that will enable the loop
to be closed. Several helpful operations are given in the remaining sections.
2.3.2.5 Moving a Summing Junction in a Block Diagram
When loops contain summing junctions that prevent the loop from being simplified,
it is often possible to move the summing junction outside of the loop, as shown in
Figure 5. In general, summing junctions can be moved either ahead or back in the
block diagram, depending on the desired result. By writing the algebraic equations
for the block diagrams shown in Figure 5, we can verify that indeed the two block
diagrams are equivalent. In fact, whenever in doubt, it is helpful to write the equa-
tions as a means of checking the final result.
2.3.2.6 Moving a Pickoff Point in a Block Diagram
Finally, it may be useful to move pickoff points (branch points) to different locations
when attempting to simplify various block diagrams. This operation is shown in
Figure 6. Once again the included algebraic equations confirm that the block dia-
grams are equivalent, and in fact the algebraic operation and graphical operation are
mirror images of each other.
Using the operations described above and realizing that the block diagram is a
picture of an equation will allow us to reduce any block diagram with the basic
steps. By now the block diagram introduced at the beginning should be clearer. What
we will progress to in Section 2.6 is how to develop the actual contents of each block
that will allow us to design and simulate control systems.
In all these cases (and in those not listed) once two variables (along with their
first derivatives) are chosen, the remaining positions, velocities, and accelerations can
be defined relative to the chosen states. The examples listed above all include mea-
surable variables, although this is not a requirement, as stated above. There may be
advantages to choosing some sets of states as it is often possible to decouple inputs
and outputs, thus making control system design easier. This will be discussed more in
later sections. Since more states than needed are generally available (i.e., states meeting the requirements described above), a state space system of equations is not unique. All representations, however, will result in the same system response.
The goal is to develop n first order differential equations where n is the order of
the total system. The second order differential equation describing the common mass-
spring-damper system would become two first-order differential equations represent-
ing position and velocity of the mass. What about the acceleration? Having only the
position and velocity as states will suffice since the acceleration of the mass can be
determined from the current position and velocity of the mass (along with any imposed
inputs on the system). The position allows us to determine the net spring force and the
velocity to determine the net friction force. In general, each state variable must be
independent. Since the acceleration can be found as a function of the position and
velocity, it is dependent and does not meet the general criteria for a state variable.
The general form for a linear system of state equations is using vector and
matrix notation to define the constants of the states and inputs. The standard nota-
tion looks like the following:
dx/dt = A·x + B·u
and
y = C·x + D·u
x is the vector containing the state variables and u is the vector containing the inputs
to the system. A is an n × n matrix containing the constants (linear systems) of the state equations and B is a matrix containing the constants multiplying the inputs.
The matrix C determines the desired outputs of the system based on the state vari-
ables. D is usually zero unless there are inputs directly connected to the outputs,
found in some feedforward control algorithms. The general size relationships
between the vectors and matrices are listed below.
Define
n as the number of states defined for the system (the system order);
m as the number of outputs for the system;
r as the number of inputs acting on the system.
Then
x is the state vector and has dimensions n × 1 (as does dx/dt).
u is the input vector and has dimensions r × 1.
y is the output vector and has dimensions m × 1.
A is the system matrix and has dimensions n × n.
B is the input matrix and has dimensions n × r.
C is the output matrix and has dimensions m × n.
D is the feedforward matrix and has dimensions m × r.
EXAMPLE 2.1
Mass-spring-damper system:
m·ẍ + b·ẋ + k·x = F
Since this is a second-order system, we would expect to need two states, resulting in a 2 × 2 system matrix, A. First, we need to define our state variables. This is
easily accomplished by choosing acceleration and velocity as the first state deriva-
tives, since they are both integrally related to position. Therefore, let x1 , the first
state variable, equal x, the position, and x2 , the second state variable, equal velocity,
or dx/dt. Using x as the dependent variable in the differential equation is a little
misleading since x1 in this case is equal to x but is not required to be so. Now we can
set up the following identities and equations:
x1 = x (position)
ẋ1 = x2 (velocity)
u = F (input)
ẋ2 = ẍ = −(b/m)·x2 − (k/m)·x1 + (1/m)·u (acceleration)
We have met the goal of having two first-order differential equations as func-
tions of the state variables and constants. Since this is linear we can also represent it
in matrix form.
[ẋ1]   [   0      1  ] [x1]   [ 0 ]
[ẋ2] = [ −k/m   −b/m ] [x2] + [1/m] u
If the position of the mass is the desired output, then the C and D matrices are
y = [1  0] [x1; x2] + [0]·u = C·x + D·u
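A minimal numeric sketch of Example 2.1 in matrix form; the parameter values m = 1, b = 0.5, k = 2 and the initial state are illustrative assumptions, not values from the text.

```python
import numpy as np

# State-space matrices for the mass-spring-damper of Example 2.1.
m, b, k = 1.0, 0.5, 2.0            # assumed parameter values
A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])   # system matrix (n x n)
B = np.array([[0.0], [1.0 / m]])   # input matrix (n x r)
C = np.array([[1.0, 0.0]])         # output matrix: position is the output
D = np.array([[0.0]])              # feedforward matrix

x = np.array([[0.1], [0.0]])       # state: position 0.1, velocity 0
u = np.array([[0.0]])              # no applied force
dxdt = A @ x + B @ u               # dx/dt = A x + B u
y = C @ x + D @ u                  # y = C x + D u
```

With no applied force, the state derivative comes entirely from the spring and damper terms acting on the current state.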
EXAMPLE 2.2
Applying the same procedure to the pendulum equation:
θ̈ − (g/l)·sin θ = 0
x1 = θ (angular position)
u = 0 (input)
ẋ1 = x2 (angular velocity)
ẋ2 = θ̈ = (g/l)·sin(x1) (angular acceleration)
With nonlinear equations this is the final form. If the matrix form is desired, the
equations must first be linearized as shown in the following section. Whether or not
they are written in matrix form, sets of first-order differential equations (regardless of
the number) are easily integrated numerically (i.e., Runge-Kutta) to simulate the
dynamic response of the systems. This is further explored in Chapter 3.
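Such an integration can be sketched with a classic fixed-step fourth-order Runge-Kutta scheme; taking g/l = 1 and the hanging-pendulum sign convention are illustrative assumptions.

```python
import math

# RK4 integration of the pendulum state equations x1' = x2, x2' = -(g/l) sin(x1).
G_OVER_L = 1.0                      # assumed ratio g/l

def f(x):
    return [x[1], -G_OVER_L * math.sin(x[0])]

def rk4_step(x, h):
    k1 = f(x)
    k2 = f([x[i] + 0.5 * h * k1[i] for i in range(2)])
    k3 = f([x[i] + 0.5 * h * k2[i] for i in range(2)])
    k4 = f([x[i] + h * k3[i] for i in range(2)])
    return [x[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6 for i in range(2)]

x = [0.3, 0.0]                      # small initial angle, at rest
for _ in range(1000):               # 10 s of simulated time
    x = rk4_step(x, 0.01)
```

A quick sanity check is that the total energy ½·x2² + (g/l)·(1 − cos x1) stays essentially constant over the run.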
The linearized output, ŷ, is found by first determining the desired operating
point, evaluating the system output at that point, and determining the linear varia-
tions around that point for each variable affecting the system. It is important to
choose an operating point around where the system is actually expected to operate
since the linearized system model, and thus the resulting design also, can vary greatly
depending on what point is chosen. Once the operating point is chosen, the operating offset, f(x1o, x2o, …, xno), is calculated. For some design procedures only the variations around the operating point are examined since they affect the system stability
and dynamic response. This will be explained further in the next example. Next, each
slope, or the linearized effect of each individual variable, is found by taking the
partial derivative with respect to each variable. Each partial derivative is evaluated at
the operating point and becomes a numerical constant multiplied by the deviation
away from the operating point. When all constants are collected we are left with one
constant followed by a constant multiplied by each individual variable, thus resulting
in a linear equation of n variables. The actual procedure is therefore quite simple,
and most computer simulation packages include linearization routines to perform
this step for the end user. It is still up to the user, however, to linearize about the
correct point and to recognize the valid and usable range of the resulting linearized
equations.
Now let us examine the idea behind the linearization process a little more
thoroughly by beginning with a function (y) of one variable (x). If we were to plot
the function, we might end up with some curve as given in Figure 8. When the
derivative (partial derivative for functions of several variables) is taken of the
curve, the result is another function. We evaluate the derivative of the function at
the operating point to give us the slope of the line representing our original nonlinear
function. Depending on the shape of the original curve, the usable linear range will
vary. At some point the original nonlinear curve is quite different than the linear
estimated value as shown in the plot.
It is clear in Figure 8 that a very small range of the actual function can be
approximated with a linear function (line). In fact, for the example in Figure 8, the
slope may even be positive or negative depending on the operating point chosen. Many times
the actual functions are not as problematic and may be modeled linearly over a much
larger range. In all cases, it pays to understand where the linearized model is valid.
Let us now linearize the inverted pendulum state equation and determine a
usable range for the linear function.
EXAMPLE 2.3
Since sin(θ) is our nonlinear function and θ = 0 is the operating point (pendulum is vertical), a plot around this point will illustrate the linearization concept.
Performing the math first, we begin with the nonlinear state equation derived earlier:
ẋ2 = θ̈ = f(x1) = (g/l)·sin(x1)
Vertical offset, f(0):
f(0) = (g/l)·sin(0) = 0
Partial derivative:
∂y/∂x = ∂f(x1)/∂x1 = (g/l)·cos(x1)
Slope at x1 = 0 (cos(x1) → 1):
∂y/∂x = ∂f(0)/∂x1 = g/l
Resulting linear state equation:
ẋ2 = θ̈ ≈ (g/l)·x1 = (g/l)·θ
Since the state equation is now linear, the system can be written using matrices and vectors, resulting in the following state space linear matrices:
[ẋ1]   [  0    1 ] [x1]   [0]
[ẋ2] = [ g/l   0 ] [x2] + [0] u
y = [1  0] [x1; x2] + [0]·u = C·x + D·u
Common design techniques presented in future sections may now be used to
design and simulate the system. As a caution flag, many designs appear to work
well when simulated but fail when actually constructed because the linearization region is exceeded. Developing good models with the proper assumptions is key to designing well-performing, robust control systems. To conclude this example, let
us examine the valid region for the linear inverted pendulum model. To do so,
let us plot the linear and actual function (for simplicity, let g=l 1). These plots
are given in Figure 9.
From the graph it is clear that within approximately ±0.8 radians (±45 degrees) the linear model is very close to the actual function. The greater the deviation beyond
these boundaries, the greater will be the modeling error. The linearization of the sine
function, as demonstrated above, is also commonly referred to as the small angle
approximation.
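The usable range can also be checked numerically; the sketch below computes the relative error of the small-angle approximation (the specific error figures are computed here, not taken from the text).

```python
import math

# Relative error of the small-angle approximation sin(theta) ~ theta.
def rel_error(theta):
    return abs(theta - math.sin(theta)) / abs(math.sin(theta))

e_small = rel_error(0.1)   # well inside the linear range
e_edge = rel_error(0.8)    # near the +/-0.8 rad boundary
```

The error is well under 1% at 0.1 rad but grows to roughly 12% near 0.8 rad, consistent with the boundary discussed above.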
EXAMPLE 2.4
It is also possible to visualize the linearization process for functions of two variables
by using surface plots. For this example the following function is examined:
y(x1, x2) = 500·x1 − 6·x1² + 500·x2 − x2³
Although seldom done in practice, it is helpful for the purpose of understand-
ing what the linearization process does to plot the function in both the original
nonlinear and linearized forms. This was already done for the pendulum example
with the function only dependent on one variable. In this case, using a function
dependent on two variables, a surface is created when plotting the function. This
is shown in Figure 10, where the function is plotted for 0 < x1 < 25 and 0 < x2 < 25 around an operating point chosen to be at (x1, x2) = (10, 10).
To linearize the function, the offset value and the two partial derivatives must be calculated and evaluated at the operating point of (x1, x2) = (10, 10).
Offset value:
y(x1, x2) = y(10, 10) = 5000 − 600 + 5000 − 1000 = 8400
Slope in x1 direction:
∂y/∂x1 |(10,10) = 500 − 12·x1 = 380
Slope in x2 direction:
∂y/∂x2 |(10,10) = 500 − 3·x2² = 200
Linearized equation:
ŷ = 8400 + 380·(x1 − 10) + 200·(x2 − 10)
The linearized equation is now the equation for a plane in three-dimensional
space and is the plane formed by the intersection of the two lines drawn in Figure 10.
The error between the nonlinear original equation and the linearized equation is the
difference between the plane formed by the two lines and the actual surface plot, as
illustrated in Figure 11. Although visualization becomes more difficult, the lineariza-
tion procedure remains straightforward for functions of more than two variables.
Each partial derivative simply relates the output to the change of the variable to
which the partial is taken.
It is common to only include the variations about the operating point when
linearizing system models since it facilitates easy incorporation into block diagrams
and other common system representations. In this case the constant values are
dropped and the system is examined for the amount of variation away from the
operating point. In the case of then simulating the system, the input and output
values are not absolute but only relative distances (or whatever the units may be)
from the operating point. This simplifies the equation studied in the previous exam-
ple to the following:
ŷ = 380·Δx1 + 200·Δx2 (about the point (10, 10))
When this is done, the system characteristics (i.e., stability, transient response, etc.)
are not changed and the simulation only varies by the offset. Of course, if component
saturation models or similar characteristics are included, then the saturation values
must also reflect the change to variations around the operating point.
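The linearization procedure itself is easy to sketch numerically with central differences; the function below follows Example 2.4 (with the x2 term taken as cubic, matching the offset calculation), and the step size h is an illustrative choice.

```python
# Numerically linearizing a two-variable function about an operating point.
def y(x1, x2):
    return 500 * x1 - 6 * x1 ** 2 + 500 * x2 - x2 ** 3

def linearize(f, x1o, x2o, h=1e-5):
    f0 = f(x1o, x2o)                                    # offset value
    d1 = (f(x1o + h, x2o) - f(x1o - h, x2o)) / (2 * h)  # slope in x1 direction
    d2 = (f(x1o, x2o + h) - f(x1o, x2o - h)) / (2 * h)  # slope in x2 direction
    return f0, d1, d2

f0, d1, d2 = linearize(y, 10.0, 10.0)
```

Most simulation packages perform this step automatically, but the numeric check makes it easy to confirm a hand linearization.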
EXAMPLE 2.5
To conclude this section, let us linearize the common hydraulic valve orifice equation
to illustrate the procedure as more commonly implemented in practice. For a more
detailed analysis of the hydraulic valve, please see Section 12.3 where the model is
developed and used in greater detail. The general valve orifice equation may be
defined as follows:
Q(x, PL) = KV · x · √(PS − PL − PT)
where Q is the flow through the valve (volume/time); KV is the valve coefficient; x is the percent of valve opening (−1 < x < 1); PS is the supply pressure; PL is the pressure dropped across the load; and PT is the return line (tank) pressure.
For this example let us define the operating point at x = 0.5 and PL = 500 psi.
The output Q is in gallons per minute (gpm) and the constants are defined as

KV = 0.5 gpm/√psi,
PS = 1500 psi, and
PT = 50 psi
If only variations around the operating point are considered, the equation
becomes

QL = Kx x + KP PL

or

QL = 15.4 x − 0.004 PL
As illustrated in the example, the linearized equation must be consistent with
physical units, and the user must be informed about what units are required during
implementation.
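Since every number in this example is given, the two linearization gains can be checked numerically. The sketch below (not from the text) estimates Kx = dQ/dx and KP = dQ/dPL with central finite differences; it should reproduce the 15.4 value and the 0.004 magnitude quoted above (the pressure slope itself comes out negative).

```python
import math

def Q(x, PL, Kv=0.5, Ps=1500.0, Pt=50.0):
    """Valve orifice flow in gpm: Q = Kv * x * sqrt(Ps - PL - Pt)."""
    return Kv * x * math.sqrt(Ps - PL - Pt)

x0, PL0 = 0.5, 500.0          # operating point from the example
h = 1e-6
Kx = (Q(x0 + h, PL0) - Q(x0 - h, PL0)) / (2 * h)   # ~15.4 gpm per unit x
Kp = (Q(x0, PL0 + h) - Q(x0, PL0 - h)) / (2 * h)   # ~ -0.004 gpm/psi
```

The analytic partial derivatives give the same result, which is a useful cross-check before committing a linearized model to a block diagram.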
ΣF's = F + mg − Fk − Fb = F + mg − k y − b y' = m y''
Table 2  Physical System Relationships

Mechanical-translational — Energy/power: E = (1/2) m v^2 (kinetic), E = (1/2) k x^2
(potential), P = b v^2 (dissipated); momentum: m v = ∫F dt; power delivered: P = F v.

Mechanical-rotational — Variables: inertia J (N-m-s^2; lb-in-s^2), spring k (N-m/rad;
lb-in/rad), damper b (N-m-s; lb-in-s), torque T (N-m; lbf-in), velocity ω (rad/s).
Equations: T = J dω/dt; T = k θ; T = b ω; θ = ∫ω dt. Momentum: J ω = ∫T dt.
Energy/power: E = (1/2) J ω^2, E = (1/2) k θ^2, P = b ω^2, P = T ω.

Electrical — Variables: inductance L (henries, H), capacitance C (farads, F),
resistance R (ohms), voltage V (volts), current i (amps, A).
Equations: V = L di/dt; V = (1/C)∫i dt; V = R i. Momentum (flux linkage): λ = ∫V dt;
charge: q = ∫i dt. Energy/power: E = (1/2) L i^2, E = (1/2) C V^2, P = (1/R) V^2,
P = V i.

Hydraulic/pneumatic — Variables: fluid inertia I (N-s^2/m^5; lbf-s^2/in^5),
capacitance C (m^3/Pa; in^3/psi, linear), orifice coefficient Kv ((m^3/s)/(N/m^2)^(1/2);
(in^3/s)/(psi)^(1/2)), pressure p (Pa; psi, lb/in^2), flow rate Q (m^3/s; in^3/s).
Equations: p = I dQ/dt; p = (1/C)∫Q dt; Q = Kv √Δp; volume: q = ∫Q dt.
Momentum: I Q = ∫p dt. Energy/power: E = (1/2) I Q^2, E = (1/2) C p^2, P = p Q.

Thermal — Variables: inertia N/A, capacitance C (J/K; lb-in/°R), resistance Rf (K/W;
°R/Btu), temperature T (kelvin, K; rankine, °R), heat flow rate q (W; Btu/s).
Equations: T = (1/C)∫q dt; q = (1/Rf) T; momentum not used. Energy/power: heat
energy E = C T, P = (1/Rf) T.
If we examine the motion about the equilibrium point (where kx = mg), then the
constant force mg can be dropped out since the spring force due to the equilibrium
deflection always balances it. Separating the inputs on the right and the outputs on
the left then results in

m y'' + b y' + k y = F

or expressed as

m d^2y/dt^2 + b dy/dt + k y = F
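The step response of this equation can be integrated numerically as a quick sanity check. The parameter values below are hypothetical, chosen only for illustration; for a constant force the deflection should settle at F/k.

```python
# Hypothetical parameters (not from the text): m = 1 kg, b = 2 N*s/m, k = 10 N/m.
m, b, k = 1.0, 2.0, 10.0
F = 5.0                  # constant applied force, N
y, v = 0.0, 0.0          # initial position and velocity
dt = 0.001
for _ in range(20000):   # 20 s of simulated time
    a = (F - b * v - k * y) / m   # from m*y'' + b*y' + k*y = F
    v += a * dt
    y += v * dt
# steady state: y -> F/k = 0.5, v -> 0
```

Simple explicit Euler is adequate here because the system is lightly stiff; a smaller step or a better integrator would be needed for stiffer parameter sets.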
This should be a review for anyone who has taken a class in differential equations,
vibrations, systems modeling, or controls. Continuing on, we finish the mechanical-
translational section with an example incorporating two interconnected masses, as
shown in Figure 13.
mb is the mass of the vehicle body.
mt is the mass of the tire.
ks is the spring constant of the suspension.
b is the damping from the shock absorber.
ΣT's = T − K y − b y' = J y''

and rearranging

J d^2y/dt^2 + b dy/dt + K y = T
When compared with the second-order differential equation developed for the
simple mass-spring-damper model, the similarities are clear. Each model has inertia,
damping, and stiffness terms where the effects on the system are the same (natural
frequency and damping ratio) and only the units are different. One common case
with the rotational system is when the spring is removed and only damping is present
on the system. The second-order differential equation can now be reduced to a first-
order differential equation with the output of the system being rotary velocity instead
of rotary position. This model is common for systems where velocity, rather than
position, is the controlled variable.
L d^2q/dt^2 + R dq/dt + (1/C) q = Vin

or, in terms of the capacitor voltage,

LC d^2VC/dt^2 + RC dVC/dt + VC = Vin
Constructing the circuit and measuring the response easily verifies the resulting
equation. As already noted for other domains, the number of energy storage ele-
ments corresponds to the order of the system. The exception to this is when two or
more energy storage elements are acting together and can thus be combined and
represented as a single component. An example of this is electrical inductors in series;
the mechanical analogy of this would be two masses rigidly connected. Inductive and
capacitive elements both act as energy storage devices in Table 2.
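The analogy can be made concrete: mapping L = m, R = b, and C = 1/k should yield the same natural frequency and damping ratio in both domains. The numbers below are hypothetical, used only to exercise the mapping.

```python
import math

# Mechanical: m*y'' + b*y' + k*y = F   -> wn = sqrt(k/m), zeta = b/(2*sqrt(k*m))
# Electrical: L*q'' + R*q' + (1/C)*q = Vin -> wn = sqrt(1/(L*C)), zeta = (R/2)*sqrt(C/L)
m, b, k = 2.0, 1.5, 50.0                 # hypothetical mechanical values
L, R, Cap = m, b, 1.0 / k                # analogous electrical values: L=m, R=b, C=1/k

wn_mech = math.sqrt(k / m)
zeta_mech = b / (2.0 * math.sqrt(k * m))
wn_elec = math.sqrt(1.0 / (L * Cap))
zeta_elec = (R / 2.0) * math.sqrt(Cap / L)
# the two pairs agree, so the circuit and the mass-spring-damper share one response
```

This is exactly why building the circuit and measuring its response verifies the mechanical equation: the two systems are the same second-order model in different units.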
about the desired operating point and relates flow to load pressure and valve posi-
tion. Let us begin by laying out the basic equations.
Valve flow:
Q = (dQ/dx) x + (dQ/dP) P = Kx x − KP P
Piston flow:
Q = A dy/dt
Force balance:
ΣF = m d^2y/dt^2 = PA − b dy/dt
Q is the flow into the cylinder, P is the cylinder pressure, dQ/dx is the slope of the
valve flow metering curve at the operating point, dQ/dP is the slope of the valve
pressure-flow (PQ) curve at the operating point (this slope is negative, so KP is taken
as its magnitude), A is the area of the piston (minus rod area), and b is the damping
from the mass friction and cylinder seal friction.
Solving the valve flow equation for P:
P = (Kx/KP) x − Q/KP
Substitute into the force balance:
ΣF = m d^2y/dt^2 = ((Kx/KP) x − Q/KP) A − b dy/dt
Eliminating Q using the piston flow:
m d^2y/dt^2 = (A Kx/KP) x − (A^2/KP) dy/dt − b dy/dt
Finally, combining inputs and outputs results in
m d^2y/dt^2 + (A^2/KP + b) dy/dt = (A Kx/KP) x
This equation is worthy of several observations. Hopefully, many will become
clear as we progress. First, notice that y in its nonderivative form does not appear.
As Laplace transforms are discussed we will see why we call this a type 1 system with
one integrator. The net effect at this point is to understand that a positive input x will
continue to produce movement in y regardless of whether or not x continues to
increase. That is, y, the output, integrates the position of x, the input. Also, many
assumptions were made in developing this simple model. Beginning with the valve, it
is linearized about some operating point. For small ranges of operation this is a
common procedure. Additionally, the mass of the valve spool was ignored and x is
considered to be moving the spool by that amount. In most cases involving control
systems, a solenoid would provide a force on the valve spool that would cause the
spool to accelerate and move. A separate force balance is then required on the spool
and the equations become more complex. Finally, the fluid in the system was
assumed to be without mass and incompressible. At certain input frequencies,
these become important items to consider. Without making these assumptions, the
result would have been a sixth-order nonlinear differential equation. This task is
probably not enjoyable to most people. What does it take to accurately model such
systems? In a later section, bond graphs are used to develop higher level models.
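The type 1 (integrating) behavior can be demonstrated by integrating the final servo equation with a constant valve input. All numerical values below are hypothetical, and Kx, KP are treated as positive slope magnitudes; for constant x the velocity settles to a constant while y keeps ramping.

```python
# Hypothetical constants for m*y'' + (A**2/Kp + b)*y' = (A*Kx/Kp)*x
m, b, A, Kx, Kp = 1.0, 0.5, 2.0, 15.4, 0.004
x = 0.01                  # constant valve opening (input)
y, v = 0.0, 0.0
dt = 1e-5
for _ in range(200000):   # 2 s of simulated time
    a = (A * Kx / Kp * x - (A * A / Kp + b) * v) / m
    v += a * dt
    y += v * dt
# with x held constant, velocity approaches a constant:
v_ss = (A * Kx / Kp * x) / (A * A / Kp + b)
# ...while y grows without bound -- the output integrates the input
```

No matter how long the simulation runs, y never settles for a nonzero x, which is the signature of the single integrator discussed above.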
In conclusion, we have reviewed the basic idea of using Newton's laws to
develop models of dynamic systems. Although only simple models were presented,
the procedure illustrated is the basic building block for developing complex models.
Most errors seem to be made in properly applying basic laws of physics when
complex models are developed. Soon the differential equations developed above
will be examined further and implemented in control system design.
Heat flow = (Temperature)/R     Heat stored = C (Temperature)
If we assume that the water temperature inside the water heater is constant,
then the necessary equilibrium condition is that the heat added minus the heat
removed equals the heat stored. This is very similar to the liquid level systems
examined in the next section. The water heater system can be simplified as shown
in Figure 17 where cold water flows into the tank, a heater is embedded in the tank,
and hot water exits (hopefully).
low. Finally, instead of dealing with pressure as our effort variable, elevation head is
used as the effort variable. These are related through the weight density where:
Pressure = (weight density) × (elevation head): P = γ h
This leads to the governing equilibrium equation (law of continuity):

Flow in − flow out = rate of change in stored volume

or

qin − qout = C dh/dt
If h is the height of liquid in the tank, then C simply represents the cross-sectional area
of the tank (which may or may not be constant as the level changes). Of course in
reviewing previous examples and examining Table 2, this is simply an alternative
form of the electrical capacitor equation (i = C dV/dt) or the thermal capacitor
equation from the previous section. To illustrate the concepts in an example, let
us consider the system of two tanks represented in Figure 18 and develop the differ-
ential equations for the system.
qi = liquid flow rate into the system
qo = liquid flow rate leaving the system from water exiting
qb = liquid flow rate from tank 1 into tank 2
h1 = liquid level (height) in tank 1
h2 = liquid level (height) in tank 2
C1 = capacitance of water in tank 1 (cross-sectional area)
C2 = capacitance of water in tank 2 (cross-sectional area)
R1 = resistance to flow between tank 1 and tank 2 (valve)
R2 = resistance to flow between tank 2 and discharge port (valve)
Governing equation for tank 1:

qi − qb = C1 dh1/dt

Governing equation for tank 2:

qb − qo = C2 dh2/dt

Relationships between variables:

qb = (h1 − h2)/R1     qo = h2/R2
And simplifying:
dh1 dh2 R R
R1 C1 h1 R 1 q1 h2 and R2 C2 1 2 h2 2 h2
dt dt r1 R1
In the two resulting equations, each term has units of length (pressure head)
and the equations are coupled. The input to the system is qi and the desired output
(controlled variable) is h2 . In the next chapter we use Laplace transforms to find the
relationship between h2 and qi .
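Before turning to Laplace transforms, the coupled tank equations can be integrated directly. The tank areas, valve resistances, and inflow below are hypothetical; at steady state the discharge qo must equal the inflow qi, so h2 approaches qi R2 and h1 approaches h2 + qi R1.

```python
# Hypothetical values: unit areas, R1 = 1, R2 = 2.
C1, C2, R1, R2 = 1.0, 1.0, 1.0, 2.0
qi = 0.5                     # constant inflow
h1, h2 = 0.0, 0.0
dt = 0.001
for _ in range(100000):      # 100 s, long enough to reach steady state
    qb = (h1 - h2) / R1      # flow from tank 1 into tank 2
    qo = h2 / R2             # discharge flow
    h1 += (qi - qb) / C1 * dt
    h2 += (qb - qo) / C2 * dt
# steady state: qo = qi, so h2 -> qi*R2 = 1.0 and h1 -> h2 + qi*R1 = 1.5
```

The coupling is visible in the simulation: h2 cannot rise until h1 has built up enough head to drive flow through R1.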
Several sections from now we will see why recognizing how different systems
relate is important. If you feel comfortable modeling one system and simulating its
response, then the ability to model many other systems is already present. This is an
important skill for a controls engineer since most systems include multiple sub-
systems in different physical domains.
P is the added power function, describing the dissipation of energy by the system,
and Qi is the generalized external forces acting on the system. Common energy
expressions for mechanical and electrical systems are given in Table 3.
EXAMPLE 2.6
This method is illustrated by revisiting the mass-spring-damper example we studied
earlier in Figure 12. First, let us write the expressions describing the kinetic and
potential energy for the system by referencing Table 3.
d/dt (m dx/dt) + b dx/dt + k x = F

Finally,

m d^2x/dt^2 + b dx/dt + k x = F
Remembering the notation where dx/dt = ẋ and d^2x/dt^2 = ẍ, we see that
identical differential equations were arrived at independently for the
mass-spring-damper system.
Using energy methods simply gives the controls engineer another tool to use in
developing system models. For systems with many lumped objects, it becomes an
easy way to quickly develop the desired differential equations.
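One advantage of thinking in energy terms is that a simulation can be audited against the energy expressions themselves: for the undamped, unforced case (b = 0, F = 0) the total energy (1/2)m v^2 + (1/2)k x^2 must remain constant. The values below are hypothetical, and the semi-implicit Euler step is chosen for its good energy behavior.

```python
# Undamped, unforced mass-spring system: total energy E = T + V is conserved.
m, k = 1.0, 4.0
x, v = 1.0, 0.0             # start displaced, at rest
E0 = 0.5 * m * v**2 + 0.5 * k * x**2
dt = 1e-4
for _ in range(100000):     # 10 s of simulated time
    v += (-k * x / m) * dt  # semi-implicit Euler: update v first,
    x += v * dt             # then x with the new v
E = 0.5 * m * v**2 + 0.5 * k * x**2
# E stays within a small bounded oscillation of E0
```

A plain explicit Euler step would show steadily growing energy here, so this check also reveals integrator error, not just modeling error.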
the methods that we already use and are familiar with. This being said, there are
many advantages to learning bond graphs themselves, as we will see in this section.
Rosenberg and Karnopp [1], who extended earlier work by Ezekiel and Paynter
[2], helped define the power bond graph technique in 1983. Bond graphs can easily
integrate many different types of systems into one coherent model. Many of the
higher level modeling and simulation programs, such as Boeing's EASY5™, reflect
the principles found in bond graphs.
Bond graph models are formed on the basis of power interchanges between
components. This method allows a formal approach to the modeling of dynamic
systems, including assignment of causality and direct formulation of state equations.
Bond graphs rely on a small set of basic elements that interact through power bonds.
These bonds carry causality information and connect to ports. The basic elements
model ideal reversible (C, I) and irreversible (R) processes, ideal connections (0, 1),
transformations (TF, GY), and ideal sources (Se, Sf).
The benefits of bond graphs include the following: modeling power flow reduces the
complexity of diagrams; easy steps result in state space equations or block diagrams;
computer solutions are simple; and the causality of each bond is easily determined.
Every bond has two variables associated with it, an effort and a flow. In a
mechanical-translational system, this is simply the force and velocity. This consti-
tutes a power flow since force times velocity equals power. Thus, Table 2 was given
as a precursor to bond graphs by including the effort and flow variable for each
system. For simple systems, it is just as easy to use the methods in the previous
section, but, as will be seen, for larger systems bond graphs are extremely powerful
and provide a structured modeling approach for dynamic systems.
This section seeks to introduce bond graphs as a usable tool; to completely
illustrate their capabilities would take much longer. In bond graphs, only four vari-
ables are needed, effort and flow and the time integrals of each. Thus, for the
mechanical system, force, velocity, momentum, and position are all the variables
needed. The bonds connect ports and junctions, which take several forms. The
two junction types are 0 junctions and 1 junctions. The 0 junction represents a
common effort point in the system while the 1 junction represents a common velocity
point in the system. Table 4 illustrates the relationship between the general bond
graph elements.
Using the notation in Table 4 and modifying Table 2 will complete the bond
graph library and allow us to model almost all systems quickly and efficiently. Table
5 illustrates these results for common systems.
Finally, a few words about bond graph notation. The basic bond is designated using
a half arrow. The half arrow points in the direction of power flow when e·f is
positive. The direction of the arrow is arbitrarily chosen to aid in writing the equations.
If the power flow turns out to be negative in the solution, then the direction of flow is
opposite of the arrow direction. The line perpendicular to the arrow designates the
causality of the bond. This helps to organize the component constitutive laws into
sets of differential equations. Physically, the causality determines the cause and effect
relationships for the bonds. Some bonds are restricted, while others are chosen
arbitrarily. Table 6 lists the causal assignments. To determine system causality:
1. Assign the necessary bond causalities. These result from the effort or flow
inputs acting on the system. By definition, an effort source imposes an effort on the
node and is thus restricted. This is like saying that a force input acting on a mass
cannot define the velocity explicitly; it causes an acceleration, which then gives the
mass a velocity. The opposite is true for flow inputs.
2. Extend those wherever possible using the restrictive causalities listed in the
table. Restrictive causalities generally are applied at 0, 1, TF, and GY nodes. For
example, since a 1 junction represents common flows, only one connection can define
(cause) the flow and thus only one causal stroke points toward the 1 junction. Not
meeting this requirement would be like having two electronic current sources
connected in series.
3. If any bonds are remaining without causal marks, apply integral causality to
I and C elements. Integral causality is preferred but not always possible and is
described in more detail following this section.
4. All remaining R elements can be arbitrarily chosen. For example, one typical
resistor element is a hydraulic valve. We can impose a pressure drop across the valve
and measure the resulting flow; or we can impose a flow through the valve and
measure the resulting pressure drop. An effort causes a flow, or a flow causes an
effort, and both are valid situations.
The causality assignments may also be reasoned out apart from the table.
For example, the effort, or force, must be the cause acting on an inductive
(mass) element and flow, or velocity, the effect, hence the proper integral
causality. The opposite analogy is the velocity being the cause acting on a mass
and the force being the effect. A step change in the cause (velocity) would then
require an infinite (impulse) force, which physically is impossible. With the
[Table: generalized bond graph variables — effort e(t); flow f(t); momentum
p = ∫e dt; displacement q = ∫f dt; power P(t) = e(t)·f(t); energies Ep = ∫f dp
(kinetic) and Eq = ∫e dq (potential). Mechanical units: N, m/sec, N·sec, m, W,
with E = ∫v dp in J; electrical units: V, A, Wb, C, W, with E = ∫i dλ in J
(magnetic).]
desired integral causality, a step change in the force results in acceleration, the
integral of which is the velocity.
If we end up with derivative causality on I or C elements, additional algebraic
work is required to obtain the state equations for the system. When integral causality
is maintained, the formulation of the state equations is straightforward and simple.
It is possible to have a model with derivative causality, but then great care is required
to ensure that the velocity cause is bounded, thus limiting the required force.
Many times it is possible to modify the bond graph to give integral causality
without changing the accuracy of the model. This might be as simple as moving a
capacitive element to the other side of a resistive element, and so forth. Using the
analogy of a hydraulic hose, the capacitance and resistance in the hose, and the
capacitance and inertia in the oil is distributed throughout the length of the hose.
If a flow source is the input to the section of hose, one of the capacitive elements must
be located next to the source or else the inertial element sees the flow source as a velocity input,
similar to the mechanical analogy above, and creates derivative causality problems.
In reality this is correct since the whole length of hose, and even the fitting attaching
it to the flow source, has compliance. Every model is imperfect, and in cases like this
rational thinking beforehand saves many hours of irrational thinking later on in the
problem.
If we are constrained to work with models containing derivative causality,
several approaches may be attempted. Sometimes an iterative solution is achiev-
able and the implicit equations that result from derivative causality may still be
solved. This will certainly consume more computer resources since for each time
step many intermediate iterations may have to be performed. Another option is
to consider Lagrange's equations as presented earlier. This may also produce an
algebraically solvable problem. The general recommendation, as mentioned above,
is to modify the original model to achieve integral causality and explicit state
equations.
Once causality is assigned, all that remains is writing the differential equations.
This is straightforward and easily lends itself to computers. There are many com-
puter programs that allow you to draw the bond graph and have the computer
generate the equations and simulate the system. The equations, being in state
space form, are easily used as model blocks in Matlab. In fact, for many advanced
control strategies, where state space is the representation of choice, bond graphs are
especially attractive because the set of equations that result are in state space form
directly from the model. Several useful constitutive relationships for developing the
state space equations are given in Table 7.
0 junction:  e1 = e2 = e3;  f1 + f2 + f3 = 0
1 junction:  e1 + e2 + e3 = 0;  f1 = f2 = f3
R:  e = R f;  f = (1/R) e
C:  e = q/C = (1/C)∫f dt;  q = ∫f dt;  f = C de/dt = dq/dt
I:  f = p/I = (1/I)∫e dt;  p = ∫e dt;  e = I df/dt = dp/dt
written using the tables relating to bond graphs. Begin with the two state variables,
p1 and q2 , to write the equations.
dp1/dt = e1 = e3 − e2 − e4
dq2/dt = f2 = f3 = f4 = p1/I

Writing specific equations for e3, e2, and e4 allows us to finish the equations.

e3 = Se = input force
e2 = q2/C
e4 = R f4 = (R/I) p1
Substituting into the state equations for the final form results in

dp1/dt = Se − q2/C − (R/I) p1
dq2/dt = p1/I
Remembering that Se = F, C = 1/k, I = m, and R = b for mechanical systems
allows us to write the state equations in a notation similar to that used for earlier
equations:

dp1/dt = F − k q2 − (b/m) p1
dq2/dt = p1/m
Although the equations might seem confusing at first glance, they are identical to those
developed before. Notice what the following terms actually represent:

dp1/dt = d(mv)/dt = ma = ΣF's
k q2 = k x = spring force
(b/m) p1 = (b/m)(mv) = bv = force through damper
dq2/dt = p1/m = mv/m = v = velocity
Since the example state equations developed above are linear, they could easily
be transformed into matrix form. For a simple mass-spring-damper system, the work
involved might seem more difficult than using Newton's laws. As system complexity
progresses, the real power of bond graphs becomes clear.
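The claim that the bond graph state equations match Newton's form can be checked by integrating both side by side; the two trajectories should track each other step for step. The parameter values are hypothetical.

```python
# Integrate the bond graph states (p1, q2) and Newton's states (v, x) together.
m, b, k, F = 1.0, 0.8, 5.0, 2.0     # hypothetical values
p, q = 0.0, 0.0                     # momentum p1 and displacement q2
x, v = 0.0, 0.0                     # Newton states
dt = 1e-4
for _ in range(100000):             # 10 s of simulated time
    dp = F - k * q - (b / m) * p    # dp1/dt = F - k*q2 - (b/m)*p1
    dq = p / m                      # dq2/dt = p1/m
    dv = (F - k * x - b * v) / m    # Newton: m*x'' + b*x' + k*x = F
    dx = v
    p += dp * dt; q += dq * dt
    v += dv * dt; x += dx * dt
# q tracks x and p/m tracks v throughout the run
```

Because both formulations encode the same differential equation, any difference between the trajectories would point to an algebra error, not a modeling choice.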
Constitutive relationships:
e1 = Se
e3 = e4 = q4/C1
f3 = f2 = p2/I1
f5 = (d2/d1) f6 = (d2/d1) f8 = (d2/d1)(p8/I2)
e6 = (d1/d2) e5 = (d1/d2) e4 = (d1/d2)(q4/C1)
e7 = R f7 = R f8 = R (p8/I2)
e9 = q9/C2
f8 = p8/I2
In summary, for the mechanical and electrical systems shown thus far, the basic
procedures are the same: Locate the common effort and flow points, draw the bond
graph, assign causality, and write the state equations describing the behavior of the
system.
and C, are used to model thermal systems and are still subject to the same restrictions
discussed in Section 2.4.5.
To illustrate a bond graph model of a thermal system, we once again use the
water heater model in Figure 17. This allows us to see the similarities and compare
the results with the equations already derived in Section 2.4.5. To begin drawing the
bond graph, we still find common effort (temperature) and flow (heat flow rate)
points in the physical system. The common temperature points include the tank itself,
the water flow leaving the tank, and the surrounding atmosphere. Since the air
temperature surrounding the tank is assumed to remain constant, we can model it as
an effort source. The temperature difference between the
tank water temperature and surrounding air will cause heat flow through the insula-
tion, modeled as an R element. Finally, examining the connections to the 0 junction
representing the temperature of the tank, we have three flow sources, with the net
difference causing a temperature change in the water. The energy stored in the tank water
temperature is modeled as a capacitance, C. The three heat flow sources are the water
in, water out, and heater element embedded in the tank. Putting it all together results
in the bond graph shown in Figure 26.
When we write the equations for the bond graph, we see immediately that they
are the same as developed earlier in Section 2.4.5.
On the 0 junction:

f4 = f5 + f6 − f7 − f3 = qi + qh − qo − qa (using earlier notation)

On the 1 junction:

e3 = e1 − e2
The state equations take the form dq2/dt = f(q2, q6, Sf) and dq6/dt = f(q2, q6, Sf).
Basic equations:
dq2/dt = f2 = f1 − f3 = f1 − f4
dq6/dt = f6 = f5 − f7
Constitutive relationships:
f1 = Sf = qi
f4 = (1/R1) e4 = (1/R1)(e3 − e5) = (1/R1)(e2 − e6)
f5 = f4 (see above equation)
f7 = (1/R2) e7 = (1/R2) e6 = (1/R2)(q6/C2)
Combining for the general state equations:
dq2/dt = qi − (q2/C1 − q6/C2)/R1
dq6/dt = (q2/C1 − q6/C2)/R1 − q6/(R2 C2)
Finally, in matrix form with notation substitutions:

[dq2/dt]   [ −1/(R1 C1)        1/(R1 C2)        ] [q2]   [1]
[dq6/dt] = [  1/(R1 C1)   −(1/C2)(1/R1 + 1/R2)  ] [q6] + [0] qi
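Once in matrix form, stability follows from the eigenvalues of the A matrix. For a 2x2 system they can be computed by hand from the trace and determinant; the tank values below are hypothetical, and both eigenvalues should come out real and negative.

```python
import math

# Hypothetical tank values
R1, R2, C1, C2 = 1.0, 2.0, 1.0, 1.0
a11 = -1.0 / (R1 * C1)
a12 = 1.0 / (R1 * C2)
a21 = 1.0 / (R1 * C1)
a22 = -(1.0 / C2) * (1.0 / R1 + 1.0 / R2)

tr = a11 + a22
det = a11 * a22 - a12 * a21
disc = tr * tr - 4.0 * det
# eigenvalues of a 2x2 matrix: (tr +/- sqrt(tr^2 - 4*det)) / 2
lam1 = (tr + math.sqrt(disc)) / 2.0
lam2 = (tr - math.sqrt(disc)) / 2.0
# both negative -> the tank levels decay to their steady-state values
```

The slower eigenvalue sets how long a simulation (or an experiment) must run before the levels settle.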
Remember that a model is just what the name implies, only a model, and
different models may all be derived from the same physical system. Different assump-
tions, use of notation, and style will result in different models. In examining the
variety of systems using bond graphs, hopefully we were able to see the parallels
between different systems and the advantages of a structured approach to modeling.
More often than not, modeling complex multidisciplinary systems is just a repetitive
application of the fundamentals. This concept is evident in the case study involving a
hydraulic, mechanical, and hydropneumatic coupled system.
and capacitance plus the inertia and capacitance of the oil. The valves then switch
this line from low to high pressure during the simulation. Flywheel, pump/motor,
and accumulators are all included in the model. The reduced model showing the
system states is given in Figure 31. In the figure, there are three states in this
section of hydraulic line: oil volume stored in line capacitance, oil momentum,
and oil volume stored in fluid capacitance. There are three states in energy
storage devices: flywheel momentum, oil volume stored in low-pressure reservoir,
and oil volume stored in high-pressure accumulator. This model leads directly to
a bond graph and illustrates the flexibility in using bond graphs. Figure 32 gives
the bond graph with causality strokes. As in the previous examples, we first
locate the common flow and effort points and assign appropriate 0 or 1 junctions
to those locations.
The bond graph is parallel to the physical model with the flywheel connected to
the pump using a transformer node (TF). This converts the flywheel speed to a flow
rate. Nodes 0a, 1b, and 0c model the inertia of the oil, resistance of the hose, and
capacitance of the hose and oil. Common flow nodes 1d and 1e state that all the flow
passing through a valve must be the same as that transferred to the accumulator
connected to that valve. It is possible to connect block diagram elements to bond
graph elements as shown in the switching commands to the valves.
In this bond graph model there are no required causality assignments (no effort
or flow sources), and the system response is a function of initial conditions
and valve switching commands. To assign the causal strokes, we start with the
integral relationships first and then assign the arbitrary causal strokes. This
model achieves integral causality, although other model forms may result in
derivative causality. For example, combining the line and oil capacitance into an
equivalent compliance in the system and updating the bond graph results in
derivative causality, even though the model is still correct and simpler. So as
stated earlier, some derivative assignments can be handled by slightly changing
the model.
The bond graph is now ready for writing the state equations, and although
more algebraic work is required for each step, the procedure to develop the equa-
tions is identical to that used in the mass-spring-damper example. The resulting
states will be p1 , q3 , p6 , q8 , q13 , and q14 (all the energy storage elements).
Basic equations:
dp1/dt = −e1
dq3/dt = f3 = f2 − f4
dp6/dt = e6 = e4 − e5 − e7
dq8/dt = f8 = f7 − f9 − f10
dq13/dt = f13 = f11
dq14/dt = f14 = f12
Constitutive relationships:

e1 = Dpm e2 = Dpm e3 = Dpm (q3/Chose)
f3 = f2 − f4 = Dpm f1 − f6 = (Dpm/Iflw) p1 − (1/Ioil) p6
e6 = e4 − e5 − e7 = e3 − Rhose f5 − e8 = q3/Chose − (Rhose/Ioil) p6 − q8/Coil
f8 = f7 − f9 − f10 = f6 − f13 − f14 (see below)
f6 = p6/Ioil
f13 = Cd16 (q8/Coil − e13)^0.5
f14 = Cd20 (q8/Coil − e14)^0.5
Accumulator models (e13 and e14 ):
To find the pressure in the accumulators (e13 and e14 ), we must first choose an
appropriate model for the gas-charged bladder. Most hydropneumatic accumulators
are charged with nitrogen and can be reasonably modeled using the ideal gas law.
For the accumulators in this case study, foam was inserted into the bladder with the
result that the pressure-volume relationship during charging becomes isothermal and
efficiencies are greatly increased [4]. This allows us to finish the accumulator (Chigh
and Clow ) models for the state equations as follows:
Let P1 and V1 be the initial charge pressure and gas volume: eh and qh for the
high-pressure accumulator, and el and ql for the low-pressure accumulator. Then,
applying the isothermal relation P2 = P1 (V1/V2), the efforts are

e13 = eh qh/(qh − q13)
e14 = el ql/(ql − q14)
This now gives us the accumulator pressures as a function of state variables q13
and q14, and allows us to finish writing the state equations for the system. The state
equation for q8 references f13 and f14, which we now have since they are simply the
first derivatives of states q13 and q14, solved for above.
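The isothermal gas relation P2 = P1 V1/V2 is straightforward to encode; the charge pressure and gas volume below are hypothetical. As oil enters the accumulator and compresses the gas (V2 = V1 − q), the pressure rises, doubling when half the gas volume is displaced.

```python
# Isothermal ideal gas: P2 = P1 * V1 / V2, with V2 = (initial gas volume) - (oil in).
def accumulator_pressure(q, P1=2.0e6, V1=1.0e-3):
    """Gas pressure (Pa) after q m^3 of oil has entered the bladder (q < V1).
    P1 and V1 are a hypothetical charge pressure and gas volume."""
    return P1 * V1 / (V1 - q)

p0 = accumulator_pressure(0.0)          # no oil in: charge pressure
p_half = accumulator_pressure(0.5e-3)   # half the gas displaced: pressure doubles
```

The same function supplies e13 and e14 in the state equations, with q replaced by the stored-volume states q13 and q14.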
Combining for the general state equations:
dp1/dt = −(Dpm/Chose) q3

dq3/dt = (Dpm/Iflw) p1 − (1/Ioil) p6

dp6/dt = (1/Chose) q3 − (Rhose/Ioil) p6 − (1/Coil) q8

dq8/dt = (1/Ioil) p6 − s1 Cd16 √(q8/Coil − eh qh/(qh − q13))
                     − s2 Cd20 √(q8/Coil − el ql/(ql − q14))

dq13/dt = s1 Cd16 √(q8/Coil − eh qh/(qh − q13))

dq14/dt = s2 Cd20 √(q8/Coil − el ql/(ql − q14))
As presented later in the state space analysis section, 3.6.3, once the state
equations are developed, programs like Matlab, MathCad, and Mathematica may
be used to simulate and predict system response. We can also write a numerical
integration routine to accomplish the same thing using virtually any programming
language. A second item to mention is the nonlinearity of the state equations.
Modeling the flow through the valves as proportional to the square root of the
pressure drop introduces nonlinearities into the state equations. The nonlinearities
prevent us from writing the state equations using matrices unless we first linearize the
state equations. Finally, s1 and s2 represent changing inputs. When the system
response is determined, the inputs to the equations must be provided. Due to the
Cd20 = 2.10 × 10^−5 m^3/(sec √Pa) = 106.4 in^3/(sec √psi)
These models were verified in the lab with good correlation between the pre-
dicted and actual responses, as illustrated in Figures 33 and 34. Valve delay functions
simulating the opening and closing characteristics determined in the lab were imple-
mented to determine optimal delays. Using these values and integrating the state
equations yielded responses with natural frequencies of 219 rad/sec and damping
ratios of 0.06. The correlation with the experimental switches was very high; the
exception was that the actual system was slightly softer (in terms of bulk modulus)
than was calculated using the manufacturer's data.
When the results are compared directly, as in Figure 35, the accuracy of the
simulation is evident. It certainly is possible to tune the estimated system para-
meters used above to achieve better matches since the same dynamics are present in
the simulated and experimental results.
The model for this example was used to design a delay circuit and strategy for
performing switches minimizing energy losses and pressure spikes. The analogy can
be extended to the poppet valve controller examined in the case study (see Sec. 12.7).
The simulated and experimental results agree quite well, as shown, and the simula-
tion method does not require any special software.
Hopefully this illustrates the versatility of bond graphs in developing system
models utilizing several physical domains. Another case study example is examined
later and compared to an equivalent model in block diagram representation. Bond
graphs are especially applicable to fluid power systems, where the effort and flow
variables, pressure and flow rate, are well understood. Transformers such as
cylinders and pump/motors are easily represented with bond graphs.
As a result, many fluid power systems have been developed using bond graphs,
ranging from controllers [5] to components [6] to systems.
PROBLEMS
2.1 Describe why good modeling skills are important for engineers designing,
building, and implementing control systems.
2.2 The most basic mathematical form for representing the physics (steady-state
and dynamic characteristics) of different engineering systems is a
_____________ ______________.
2.3 Ordinary differential equations depend only on one independent variable, most
commonly ______ in physical system models.
2.4 Find the equivalent transfer function for the system given in Figure 36.
2.5 Find the equivalent transfer function for the system given in Figure 37.
2.6 Find the equivalent transfer function for the system given in Figure 38.
2.7 Find the equivalent transfer function for the system given in Figure 39.
2.8 Find the equivalent transfer function for the system given in Figure 40.
2.9 Find the equivalent transfer function for the system given in Figure 41.
2.10 Find the equivalent transfer function for the system given in Figure 42.
2.11 Find the equivalent transfer function for the system given in Figure 43.
2.12 Find the equivalent transfer function for the system given in Figure 44.
2.13 Given the following differential equation, find the appropriate A, B, C, and D matrices resulting from state space representation: y''' + 5y'' + 3y' + 2y = 5u' + u.
2.14 Given two second-order differential equations, determine the appropriate A, B, C, and D matrices resulting from state space representation (z is the desired output, u and v are inputs): a y'' + b y' + y = 20u and c z'' + z' + z = v.
2.15 Linearize the following function, plot the original and linearized functions, and
determine where an appropriate valid region is (label on the graph).
Y(x) = 5x² + x³ + x sin(x), about operating point x = 2
2.16 Linearize the following function:
Z(x, y) = 3x² + x y² + y, about operating point (x, y) = (2, 3)
2.17 Linearize the equation y = f(x1, x2):
y(x1, x2) = 4 x1 x2 + 5 x1² + 4 x2 sin(x2), around operating point (x1, x2) = (1, 0)
2.18 Given the physical system model in Figure 45, develop the appropriate differ-
ential equation describing the motion.
2.19 Given the physical system model in Figure 46, develop the appropriate differ-
ential equation describing the motion.
2.20 Given the physical system model in Figure 47, develop the appropriate differ-
ential equation describing the motion. (r is the input, y is the output).
2.21 Write the differential equations for the physical system in Figure 48.
2.22 Given the physical system model in Figure 49, write the differential equations
describing the motion of each mass.
2.29 Determine the equations describing the system given in Figure 56. Formulate as time derivatives of h1 and h2.
2.30 Determine the equations describing the system given in Figure 57. Formulate
as time derivatives of h1 and h2 .
2.31 Derive the differential equations describing the motion of the solenoid and
mass plunger system given in Figure 58. Assume a simple solenoid force of FS KS i.
2.32 Develop the bond graph and state space matrices for the system in Problem 2.18.
2.33 Develop the bond graph and state space matrices for the system in Problem 2.19.
2.34 Develop the bond graph and state space matrices for the system in Problem 2.20.
2.35 Develop the bond graph and state space matrices for the system in Problem 2.21.
2.36 Develop the bond graph and state space matrices for the system in Problem 2.22.
2.37 Develop the bond graph and state space matrices for the system in Problem 2.23.
2.38 Develop the bond graph and state space matrices for the system in Problem 2.24.
2.39 Develop the bond graph and state space matrices for the system in Problem 2.25.
2.40 Develop the bond graph and state space matrices for the system in Problem 2.26.
2.41 Develop the bond graph and state space matrices for the system in Problem 2.27.
2.42 Develop the bond graph and state space matrices for the system in Problem 2.31.
REFERENCES
1. Rosenberg RC, Karnopp DC. Introduction to Physical System Dynamics. New York: McGraw-Hill, 1983.
2. Ezekiel FD, Paynter HM. Computer representation of engineering systems involving fluid transients. Trans ASME 79, 1957.
3. Lumkes J, Hartzell T. Investigation of the Dynamics of Switching from Pump to Motor Using External Valving. ASME Publication no. H01025, IMECE, 1995.
4. Pourmovahed A, Baum S. Experimental evaluation of hydraulic accumulator efficiency with and without elastomeric foam. J Propuls Power 4:185-192, 1988.
5. Barnard B, Dransfield P. Predicting response of a proposed hydraulic control system using bond graphs. J Dynamic Syst Measure Control, March 1977.
6. Chong-Jer L, Brown F. Nonlinear dynamics of an electrohydraulic flapper nozzle valve. J Dynamic Syst Measure Control, June 1990.
3
Analysis Methods for Dynamic
Systems
3.1 OBJECTIVES
Introduce different methods for analyzing models of dynamic systems.
Examine system responses in the time domain.
Present Laplace transforms as a tool for designing control systems.
Present frequency domain tools for designing control systems.
Introduce state space representation.
Demonstrate the equivalencies between the different representations.
3.2 INTRODUCTION
In this chapter four methods are presented for simulating the response of dynamic
systems from sets of differential equations. Time domain methods are seldom used
apart from first- and second-order differential equations and are usually the first
methods presented when beginning the theory of differential equations. The s-domain (or Laplace domain, closely related to the frequency domain) is common in controls engineering since many tables, computer programs, and block diagram tools are available. Frequency domain techniques are powerful and useful for all
engineers interested in control systems and modeling. Finally, state space techni-
ques have become much more common since the advent of the digital computer.
They lend themselves quite well to computer simulations, large systems, and
advanced control algorithms.
It is important to become comfortable in using each representation. Using the different representations is like speaking different languages: the same message can be expressed in each, and each representation can be translated (converted) into another. However, some techniques lend themselves more naturally to one representation than to another. Once we see that they really give us the same information and we become comfortable using each one, we expand our set of usable design tools.
EXAMPLE 3.1
A general first-order ODE:
y' + a y = 0
Substitute in y = e^(rt):
r e^(rt) + a e^(rt) = 0
(r + a) e^(rt) = 0
Gives the auxiliary equation
r + a = 0
Solution:
y = A e^(rt) = A e^(-at)
If the initial condition is y(0) = y0,
A = y0
y = y0 e^(-at)
y' + p(t) y = g(t)
There are other methods to handle special cases of first-order differential equations of this more general form, such as separating the variables and integrating, or exploiting exact derivatives. While certainly interesting, these methods are seldom used in
designing controllers. For more information, any introductory textbook on differ-
ential equations will cover these topics.
We conclude this section by examining the solution methods for homogeneous,
second-order, linear ODEs. It is quite simple using the auxiliary equation as shown
for general first-order differential equations.
3.3.1.2 Solutions to General Second-Order Ordinary Differential
Equations
A general second-order ODE:
y'' + k1 y' + k2 y = 0
Auxiliary equation:
r² + k1 r + k2 = 0
There are three cases that depend on the roots of the auxiliary equation. We
can use the quadratic equation to find the roots since second-order ODEs will result
in second-order polynomials for the auxiliary equations. Our general form of the
auxiliary equation and the corresponding quadratic equation can then be expressed
as
a r² + b r + c = 0 and r1, r2 = (-b ± √(b² - 4ac)) / (2a)
Using the quadratic equation leads to the three possible combinations of roots:
78 Chapter 3
Case 1: Two real and distinct roots, a and b:
y = A1 e^(at) + A2 e^(bt)
Case 1 occurs when the term (b2 4ac) is greater than zero. Both roots will be real
and may be either positive or negative. Positive roots will exhibit exponential growth
and negative roots will exhibit exponential decay (stable). Applying the initial posi-
tion and velocity conditions to the solution solves for the constants A1 and A2 .
Case 2: Real and repeated roots at r:
y = A1 e^(rt) + A2 t e^(rt)
Case 2 occurs when the term (b2 4ac) is equal to zero. Both roots will now have the
same sign. As before, positive roots will result in unstable responses, negative roots
in stable responses, and initial conditions are still used to solve for the constants A1
and A2 .
Case 3: Two complex conjugate roots, σ ± jωd:
y = e^(σt) (A1 cos ωd t + A2 sin ωd t)
Case 3 occurs when the term (b² - 4ac) is less than zero. The roots are always complex conjugates, and each root has the same real component (σ = -b/2a), which determines the stability of the response. The sinusoidal terms, arising from the complex portions of the roots, only range between ±1 and simply oscillate within the bounds set by the exponential term e^(σt) at a damped natural frequency equal to ωd. Mathematically, the sinusoidal terms come from the application of Euler's theorem when the roots of the auxiliary equation are expressed as r1,2 = σ ± jωd. Then:
y = √(A1² + A2²) e^(σt) sin(ωd t + φ) where φ = tan⁻¹(A1/A2)
In general, it is convenient to use the original form when applying initial conditions to solve for A1 and A2. Plotting the time response is easily accomplished using either form.
EXAMPLE 3.2
Differential equation and initial conditions for case 1, two real roots:
y'' + 5y' + 6y = 0 and y(0) = 0, y'(0) = 1
Auxiliary equation:
r² + 5r + 6 = (r + 2)(r + 3) = 0
r = -2, -3
Solution process using initial conditions:
y = A1 e^(-2t) + A2 e^(-3t)
y' = -2 A1 e^(-2t) - 3 A2 e^(-3t)
At t = 0 these evaluate to
A1 + A2 = 0 and -2 A1 - 3 A2 = 1
which gives A1 = 1, A2 = -1. Final solution is
y = e^(-2t) - e^(-3t)
It is easy to see then, that for auxiliary equations resulting in two real roots, the
general response can be described as the sum of two first-order responses.
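As a quick numerical check (a short Python sketch, which the text itself does not use; a calculator or Matlab works equally well), the solution can be substituted back into the differential equation and initial conditions:

```python
import math

# Solution from Example 3.2: y = e^(-2t) - e^(-3t), the sum of two
# first-order responses, for y'' + 5y' + 6y = 0, y(0) = 0, y'(0) = 1.
def y(t):
    return math.exp(-2 * t) - math.exp(-3 * t)

def dy(t):
    return -2 * math.exp(-2 * t) + 3 * math.exp(-3 * t)

def d2y(t):
    return 4 * math.exp(-2 * t) - 9 * math.exp(-3 * t)

assert abs(y(0.0)) < 1e-12          # y(0) = 0
assert abs(dy(0.0) - 1.0) < 1e-12   # y'(0) = 1
for k in range(50):                  # ODE residual is zero along the response
    t = 0.1 * k
    assert abs(d2y(t) + 5 * dy(t) + 6 * y(t)) < 1e-9
```

Each exponential term decays with its own time constant (1/2 and 1/3 second), which is exactly the "sum of two first-order responses" behavior described above.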
EXAMPLE 3.3
Differential equation and initial conditions for case 2, two repeated real roots:
y'' + 8y' + 16y = 0 and y(0) = 1, y'(0) = 1
Auxiliary equation:
r² + 8r + 16 = (r + 4)² = 0
r = -4, -4
Solution process using initial conditions:
y = A1 e^(-4t) + A2 t e^(-4t)
At t = 0, y and y' evaluate to
A1 = 1 and -4 A1 + A2 = 1
which gives A2 = 5. Final solution is
y = e^(-4t) + 5 t e^(-4t)
EXAMPLE 3.4
Differential equation and initial conditions for case 3, complex conjugate roots:
y'' + 2y' + 10y = 0 and y(0) = 1, y'(0) = 1
Auxiliary equation:
r² + 2r + 10 = 0
r = -1 ± 3j
Solution process using initial conditions:
y = A1 e^(-t) cos(3t) + A2 e^(-t) sin(3t)
y' = -A1 e^(-t) cos(3t) - 3 A1 e^(-t) sin(3t) - A2 e^(-t) sin(3t) + 3 A2 e^(-t) cos(3t)
At t = 0 these evaluate to
A1 = 1 and -A1 + 3 A2 = 1
which gives A1 = 1, A2 = 2/3. Final solution is
y = e^(-t) cos(3t) + (2/3) e^(-t) sin(3t)
Finally, when plotting these responses as a function of time, the arguments of the sinusoidal terms must be in radians to be correctly scaled.
Some calculators and computer programs have a default setting where the
arguments of the sinusoidal terms are assumed to be in degrees. Remember that
the responses derived here are all for homogeneous differential equations (no forcing
function). The following sections show us techniques to use for deriving the time
responses for nonhomogeneous differential equations.
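The same substitution check works for the complex-root case, with the coefficients recomputed directly from the initial conditions (a Python sketch, not from the text; note that the sinusoidal arguments are in radians, as required):

```python
import math

# Underdamped solution form y = e^(-t) (A1 cos 3t + A2 sin 3t) for
# y'' + 2y' + 10y = 0 with y(0) = 1, y'(0) = 1; all arguments in radians.
A1 = 1.0                 # from y(0) = A1 = 1
A2 = (1.0 + A1) / 3.0    # from y'(0) = -A1 + 3*A2 = 1, so A2 = 2/3

def y(t):
    return math.exp(-t) * (A1 * math.cos(3 * t) + A2 * math.sin(3 * t))

def dy(t):
    # product rule applied to e^(-t) (A1 cos 3t + A2 sin 3t)
    return math.exp(-t) * ((3 * A2 - A1) * math.cos(3 * t)
                           - (3 * A1 + A2) * math.sin(3 * t))

def d2y(t):
    # differentiate dy(t) once more
    c1, s1 = 3 * A2 - A1, -(3 * A1 + A2)
    return math.exp(-t) * ((3 * s1 - c1) * math.cos(3 * t)
                           - (3 * c1 + s1) * math.sin(3 * t))

# the solution satisfies the ODE and both initial conditions
assert abs(y(0.0) - 1.0) < 1e-12 and abs(dy(0.0) - 1.0) < 1e-12
for k in range(50):
    t = 0.1 * k
    assert abs(d2y(t) + 2.0 * dy(t) + 10.0 * y(t)) < 1e-9
```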
τ dc(t)/dt + c(t) = unit step input = 1 (t ≥ 0)
Output: c(t) = 1 - e^(-t/τ)
Since we used a unit step input (magnitude 1) and an initial condition equal
to zero, we call the solution a normalized first-order system step response. Plotting
the response as a function of the independent variable, time, results in the curve
shown in Figure 1.
So we see that the final magnitude exponentially approaches unity as time
approaches infinity. Being familiar with the normalized curve is useful in many
respects. By imposing a step input on our physical system and recording the
response, we can compare it to the normalized step response. If it exponentially
grows or decays to a stable value, then we can easily extend the data into a simplified
model. Examining the plot further allows us to draw several additional conclusions.
Even if our input or measured response does not reach unity, a linear first-order
system will always reach a certain percentage of the final value as a function of the
time constant. As shown on the plot in Figure 1, we know that the following values
will be reached at each repeated time constant interval:
1τ: 63.2%, 2τ: 86.5%, 3τ: 95.0%, 4τ: 98.2%, 5τ: 99.3%
Any intermediate value can also be found by calculating the magnitude of the response at a given time using the analytical equation.
EXAMPLE 3.5
Now let's consider the simple RC circuit in Figure 2 and see how this might be used in practice. When we sum the voltage drops around the loop (Kirchhoff's second law), it leads to a first-order linear differential equation.
Sum the voltages around the loop:
Vin - VR - VC = 0
Constitutive relationships:
VR = R I
I = C dVC/dt
Combining:
RC dVC/dt + VC = Vin
Taking the differential equation developed for the RC circuit in Figure 2 and comparing it with the generalized equation, once we let τ = RC we have the same equation. For a simple RC circuit, then, the time constant is simply the value of the resistance multiplied by the value of the capacitance. For the RC circuit shown, if R = 1 kΩ and C = 1 mF, then the time constant, τ, is 1 second. If the initial capacitor voltage was zero (the initial condition) and a switch was closed suddenly connecting 10 volts to the circuit (step input with a magnitude of 10), we would expect then at
1 second to have 6.3 volts across the capacitor (the output variable),
2 seconds to have 8.6 volts,
3 seconds to have 9.5 volts, and so forth.
By the time 5 seconds was reached, we should see 9.93 volts. So we see that if
we know the time constant of a linear first-order system, we can predict the response
for any step input of a known magnitude. Chances are that you are already familiar
with time constants if you have chosen transducers for measuring system variables.
Knowing the time constant of the transducer will allow you to choose one fast
enough for your system.
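The RC example can be checked numerically (a Python sketch for illustration, using the example's R = 1 kΩ, C = 1 mF, and 10 V step):

```python
import math

# First-order step response Vc(t) = Vin (1 - e^(-t/tau)), tau = R*C,
# with the example's values: R = 1 kOhm, C = 1 mF, 10 V step input.
R, C, Vin = 1e3, 1e-3, 10.0
tau = R * C  # = 1 second

def vc(t):
    return Vin * (1.0 - math.exp(-t / tau))

for t in (1, 2, 3, 5):
    print(f"t = {t} s: Vc = {vc(t):.2f} V")
# -> 6.32, 8.65, 9.50, and 9.93 volts
```

The printed values are simply the normalized time-constant percentages (63.2%, 86.5%, 95.0%, 99.3%) scaled by the 10 V step magnitude.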
3.3.2.2 Second-Order Step Response Characteristics
Whereas in a first-order system the important parameter is the time constant, in
second-order systems there are two important parameters: the natural frequency, ωn, and the damping ratio, ζ. As before, depending on which of the three cases we have,
we will see different types of responses.
For systems with two real roots, we see that the response can be broken down
into the sum of two individual first-order responses. We call this case overdamped.
There will never be any overshoot, and the length of time it takes the output to reach
the steady-state value depends on the slowest time constant in the system. The faster
time constant will have already reached its final value and the transient effects will
disappear before the effects from the slower time constant. In overdamped second-
order systems, the total response may sometimes be approximated as a single first-
order response when the difference between the two time constants is large. That is,
the slow time constant dominates the system response.
When second-order systems have auxiliary equations producing real and
repeating roots, we have a unique case where the system is critically damped.
Although numerically possible, in practice maintaining critical damping may be
difficult. Any small errors in the model, nonlinearities in the real system, or changes
in system parameters will cause deviation away from the point where the system is
critically damped. This occurs since critical damping is only one point along the
continuum, not a range over which it may occur.
Finally, the case that does not reduce to a combination of first-order responses is when the auxiliary equation produces complex conjugate roots. The system now overshoots the steady-state value and is underdamped. When we speak of second-order systems,
the underdamped case is often assumed. Much of the work in controls deals with
designing and tuning systems with dominant underdamped second-order systems.
For these reasons the remaining material in this section is primarily focused on
underdamped second-order systems. Many of the techniques are also applied to
overdamped systems even though they can just as easily be analyzed as two first-
order systems. To begin with, let us recall the form of the complex conjugate roots of
the auxiliary equation as presented earlier:
a r² + b r + c = 0 and r1,2 = σ ± jω
In terms of natural frequency and damping ratio, we can write the roots as
r² + 2 ζ ωn r + ωn² = 0 and r1,2 = -ζ ωn ± j ωd
where ωd is the damped natural frequency defined as
ωd = ωn √(1 - ζ²)
The negative sign in front of ζωn comes from the -b term in the quadratic formula. As long as the coefficients of the second-order differential equation are positive, this sign will be negative and the system will exponentially decay (stable response). Using this notation for our complex conjugate roots, we can write the generalized time response as
c(t) = 1 - (e^(-ζωn t) / √(1 - ζ²)) sin(ωn √(1 - ζ²) t + tan⁻¹(√(1 - ζ²) / ζ))
To see how the natural frequency and damping ratio affect the step response, it
may be helpful to view the equation above in a much simpler form to illustrate the
effects of the real and imaginary portions of the roots. Combining all constants into
common terms allows us to write the time response as follows:
c(t) = 1 - (e^(-ζωn t) / K1) sin(ωd t + φ)
Since the sine term only varies between ±1, the magnitude, or bounds on the plot, is determined by the term e^(-ζωn t). Recognizing that the coefficient in the exponential, -ζωn, is actually the real portion of our complex conjugate roots from the auxiliary equation, σ, we can say that the real portion of the roots determines the rate at which the system decays. This is similar to our definition of a time constant and functions in the same manner. Coming back to the sinusoidal term, we see that it describes the oscillations between the bounds set by the real portion of our roots, and that it oscillates at the damped natural frequency ωd. Thus, the imaginary portion of our roots determines the damped oscillation frequency of the system.
Figure 3 shows this relationship between the real and imaginary portion of our roots.
These concepts are fundamental to the root locus design techniques developed in the
next chapter.
In general, when plotting a normalized system, instead of a single curve we now
get a family of curves, each curve representing a different damping ratio. When a
system has a damping ratio greater than 1, it is overdamped and behaves like two
first-order systems in series. The normalized curves for second-order systems are
given in Figure 4. All curves shown are normalized where the initial conditions
are assumed to be zero and the steady-state value reached by every curve is 1.
As was done with the first-order plot using the output percentage versus the
number of time constants, it is useful to define parameters measured from a second-
order plot that allow us to specify performance parameters for our controllers.
Knowing how the system responds allows us to predict the output based on chosen
values of the natural frequency and damping ratio or to determine the natural
frequency and damping ratio from an experimental plot. Useful parameters include
rise time, peak time, settling time, peak magnitude or percent overshoot, and delay
time. Figure 5 gives the common parameters and their respective locations on a
typical plot. Knowing only two of these parameters will allow us to reverse-engineer
a black box system model from an experimental plot of our system. Since there are two unknowns, ωn and ζ, we need two equations to solve for them.
Rise time, tr: tr ≈ (1/4)(2π/ωn), i.e., roughly one-quarter cycle.
If the system is underdamped, tr is generally measured as the time to go from 0 to 100% of the final steady-state value, or the first point at which the response crosses the steady-state level. If the system is overdamped, it is usually measured as the time to go from 10% to 90% of the final value.
Peak time, tp: tp = π / (ωn √(1 - ζ²))
Time for the response to reach the first peak (underdamped only).
Settling time, ts: ts = 4 / (ζ ωn) = 4τ
Time for the response to reach and stay within either a 2% or 5% error band. The settling time is related to the largest time constant in the system. Use four time constants for 2% and three time constants for 5%. This equation comes from the bounds shown in Figure 3, where 1/τ equals ζωn. Remember that at four time constants the system has reached 98% of its final value.
Percent overshoot (%OS): %OS = 100 e^(-πζ / √(1 - ζ²))
Figure 6 Percent overshoot from a step input as a function of damping ratio for a second-
order system.
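The parameter formulas above can be collected into a small helper (a Python sketch for illustration only; the example values ωn = 2 rad/s and ζ = 0.5 are arbitrary choices, not from the text):

```python
import math

# Step-response parameters of an underdamped second-order system
# from its natural frequency wn (rad/s) and damping ratio zeta.
def step_metrics(wn, zeta):
    wd = wn * math.sqrt(1.0 - zeta ** 2)   # damped natural frequency
    tp = math.pi / wd                       # peak time
    ts = 4.0 / (zeta * wn)                  # 2% settling time (four time constants)
    pos = 100.0 * math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta ** 2))
    return tp, ts, pos                      # pos = percent overshoot

tp, ts, pos = step_metrics(wn=2.0, zeta=0.5)
print(f"tp = {tp:.2f} s, ts = {ts:.2f} s, %OS = {pos:.1f}%")
# -> tp = 1.81 s, ts = 4.00 s, %OS = 16.3%
```

Running the same function in reverse (measuring tp and %OS from an experimental plot, then solving for ωn and ζ) is exactly the "reverse-engineering" idea described above.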
equation developed for the mass-spring-damper system earlier and relate the con-
stants m, b, and k to natural frequency and damping ratio.
EXAMPLE 3.6
The differential equation for the mass-spring-damper system, as developed in
Chapter 2, is given below:
m x'' + b x' + k x = F
Divide the equation by m:
x'' + (b/m) x' + (k/m) x = F/m
And compare coefficients with the auxiliary equation written in terms of the natural frequency and damping ratio:
x'' + 2 ζ ωn x' + ωn² x = F/m
Thus, for the m-b-k system:
ωn² = k/m and 2 ζ ωn = b/m
By noting where m, b, and k appear in the first two representations, the natural
frequency and damping ratio are easily calculated for all linear, second-order ODEs.
This allows us to define a single time response equation with respect to the natural
frequency and damping ratio using the generalized response developed above.
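The conversion can be sketched in a few lines (Python for illustration; the numerical m, b, k values below are hypothetical, not taken from the text):

```python
import math

# For m x'' + b x' + k x = F, dividing through by m and matching
# x'' + 2 zeta wn x' + wn^2 x = F/m gives wn and zeta directly.
def natural_freq_damping(m, b, k):
    wn = math.sqrt(k / m)        # from wn^2 = k/m
    zeta = b / (2.0 * m * wn)    # from 2 zeta wn = b/m
    return wn, zeta

# hypothetical values for illustration: m = 2 kg, b = 4 N-s/m, k = 50 N/m
wn, zeta = natural_freq_damping(2.0, 4.0, 50.0)
print(f"wn = {wn:.2f} rad/s, zeta = {zeta:.2f}")  # wn = 5.00 rad/s, zeta = 0.20
```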
Once we write other general second-order differential equations in this form and calculate the system natural frequency and damping ratio, we can easily plot the system's response to a step input. It is important to remember that the generalized response, plot parameters, and methods are developed with respect to step inputs. If we desire the system's response to other inputs, the methods described in the following section are more useful.
most cases we know what type of response we will have in the time domain simply by
looking at our system in the s-domain. The goal of this section then is to show
enough examples for us to make that connection between what equivalent systems
look like in the s-domain and in the time domain.
Using Laplace transforms requires a quick review of complex variables. For the transform, s = σ + jω, where σ is the real part of the complex variable and ω is the imaginary component. This notation was introduced earlier when discussing the complex conjugate roots of the auxiliary equation. Fortunately, although helpful, algebraic knowledge of complex variables is seldom required when using the s-domain. Using the method as a tool for understanding and designing control systems, we primarily use the Laplace transform of f(t) and the inverse Laplace transform of F(s) through the use of tables. Making the following definitions will help us use the transforms.
A benefit of using this method as a tool is that we seldom (if ever) need to do the actual integration, since tables have been developed that include almost all the transforms we will ever need when designing control systems. Looking at the defining integrals gives us an appreciation of the time that is saved by using the tables. A table of Laplace transform pairs is included in Appendix B, and some common transforms that are used often are highlighted here in Table 1. Additional tables are available from many different sources.
The outline for using Laplace transforms to find solutions to differential equations is quite simple. To better illustrate the solution steps, let us take a general ordinary differential equation, include some initial conditions, and solve for the time solution.
Identities
Constants: L{A f(t)} = A F(s)
Addition: L{f1(t) + f2(t)} = F1(s) + F2(s)
First derivative: L{df(t)/dt} = s F(s) - f(0)
Second derivative: L{d²f(t)/dt²} = s² F(s) - s f(0) - df(0)/dt
General derivatives: L{dⁿf(t)/dtⁿ} = sⁿ F(s) - Σ (k = 1 to n) s^(n-k) f^(k-1)(0)
Integration: L{∫ f(t) dt} = F(s)/s
Common transform pairs: f(t), time domain ↔ F(s), Laplace domain
Unit impulse: δ(t) ↔ 1
Unit step: 1(t), t > 0 ↔ 1/s
Unit ramp: t ↔ 1/s²
X(s) = 2/(s² + 6s + 5) = 2/((s + 1)(s + 5))
IV. Perform the inverse Laplace transform.
From the tables we see a similar transformation that will meet our needs:
L⁻¹{(b - a)/((s + a)(s + b))} = e^(-at) - e^(-bt)
Rearrange X(s) to match the table entry found in Appendix B (constants carry through):
2/((s + 1)(s + 5)) = (1/2) · 4/((s + 1)(s + 5)), with a = 1, b = 5
Then the time response can be expressed as
x(t) = (1/2) (e^(-t) - e^(-5t))
So we see that, at least in some cases, the solution of an ODE using Laplace
transforms is quite straightforward and easy to apply. What happens more often
than not, or so it seems, is that the right match is not found in the table and we must
manipulate the Laplace solution before we can use an identity from the table.
Sometimes it is necessary to expand the function in the s-domain using partial
fraction expansion to obtain forms found in the lookup tables of transform pairs.
Many computer programs are also available to assist in Laplace transforms
and inverse Laplace transforms. In most cases the program must also have symbolic
math capabilities.
d²x/dt² + 6 dx/dt + 5x = 0, x(0) = 2, x'(0) = 2
s² X(s) - s x(0) - x'(0) + 6 [s X(s) - x(0)] + 5 X(s) = 0
(s² + 6s + 5) X(s) = 2s + 14
X(s) = (2s + 14)/(s² + 6s + 5) = (2s + 14)/((s + 1)(s + 5))
With the addition of the s term in the numerator, we no longer find the solution in the table. Using partial fraction expansion will result in simpler forms and allow the use of the Laplace transform pairs found in Appendix B. It is possible in most cases to find a table containing our form of the solution (in dedicated books of transform pairs), but including all of the possible forms makes for a confusing and long table. Also, remember that these techniques are learned more for the connection they allow us to make between the s-domain and the time domain than because partial fraction expansion is a common task when designing control systems (in general it is not).
For the partial fraction expansion then, let the solution Xs equal a sum of
simpler terms with unknown coefficients:
(2s + 14)/((s + 1)(s + 5)) = K1/(s + 1) + K2/(s + 5)
To find the coefficients, we multiply through both sides by the factor in the
denominator and let the value of s equal the root of that factor. Repeating this for
each term allows us to find each coefficient Ki . The process is given below for finding
K1 and K2 :
To solve for K1, multiply through by (s + 1):
(2s + 14)/(s + 5) = K1 + K2 (s + 1)/(s + 5)
Now let s → -1 and we can find K1:
K1 = (2s + 14)/(s + 5) at s = -1, giving K1 = 12/4 = 3
Repeat the process to find K2, multiplying through by (s + 5):
(2s + 14)/(s + 1) = K1 (s + 5)/(s + 1) + K2
K2 = (2s + 14)/(s + 1) at s = -5, giving K2 = 4/(-4) = -1
The result of our partial fraction expansion becomes:
X(s) = (2s + 14)/((s + 1)(s + 5)) = 3/(s + 1) - 1/(s + 5)
Now the inverse Laplace transform is straightforward using the table and
results in the time response of
x(t) = 3 e^(-t) - e^(-5t)
An alternative method, preferred by some, is to expand out both sides and
equate the coefficients of s to solve for the coefficients. In some cases this leads to
simultaneously solving sets of equations, although this is generally an easy task. To
quickly illustrate this method, let us begin with the same equation:
(2s + 14)/((s + 1)(s + 5)) = K1/(s + 1) + K2/(s + 5)
Now when we cross-multiply to remove the terms in the denominator, we can collect
coefficients of the different powers of s to generate our equations:
2s + 14 = K1 (s + 5) + K2 (s + 1)
(K1 + K2 - 2) s + (5 K1 + K2 - 14) = 0
Our two equations now become (with the two unknowns, K1 and K2):
K1 + K2 = 2 and 5 K1 + K2 = 14
Subtracting the top from the bottom results in
4 K1 = 12 or K1 = 3
Substituting K1 back into either equation, we get K2 = -1, exactly the same as before. Once we have found K1 and K2, the procedure to take the inverse Laplace transform is identical and results in the same time solution to the original differential equation. Which method to use largely depends on which one we are most comfortable with.
Finally, it is quite simple using a computer package like Matlab to do the
partial fraction expansion. Taking our original transfer function from above, we
can use the residue command to get the partial fractions. The solution using
Matlab is as follows:
Transfer function:
X(s) = (2s + 14)/(s² + 6s + 5) = (2s + 14)/((s + 1)(s + 5))
Matlab command:
>> [R, P, K] = residue([2 14], [1 6 5])
and the output:
R = [-1; 3], P = [-5; -1], K = []
The results are interpreted where R contains the coefficients of the numerators and P the poles (s + p) of the denominator. K, if necessary, contains the direct terms. Writing R and P as before means we have the -1 divided by (s + 5) and the 3 divided by (s + 1); this is exactly the result we derived earlier.
X(s) = (2s + 14)/((s + 1)(s + 5)) = 3/(s + 1) - 1/(s + 5)
The same command can be used for the cases presented in the following
sections.
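For readers working outside Matlab, the same residues can be computed with a short pure-Python sketch of the cover-up method (the `coverup_residues` helper below is illustrative, not a library function, and is limited to distinct real poles):

```python
# Cover-up method for a strictly proper rational function with distinct
# real poles; mirrors Matlab's residue output for this example.
def coverup_residues(num, poles):
    """num = [a1, a0] for numerator a1*s + a0; poles = distinct real poles."""
    residues = []
    for p in poles:
        n = num[0] * p + num[1]          # numerator evaluated at the pole
        d = 1.0
        for q in poles:
            if q != p:
                d *= p - q               # remaining (s - q) factors at s = p
        residues.append(n / d)
    return residues

# X(s) = (2s + 14) / ((s + 1)(s + 5)): poles at s = -1 and s = -5
K1, K2 = coverup_residues([2.0, 14.0], [-1.0, -5.0])
print(K1, K2)  # 3.0 -1.0
```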
3.4.2.2 Partial Fraction Expansion: Repeated Roots
To look at the second case we determine the response of a differential equation in
response to an input (nonhomogeneous) and assuming the initial conditions are zero.
We will take the simple first-order system as found when modeling the RC electrical
circuit and subject the system to a unit ramp input. In general terms our system can
be described by the following differential equation:
0.2 dV/dt + V = ramp input
Take the Laplace transform and solve for the output when the input is a unit ramp (initial conditions are zero):
(0.2 s + 1) V(s) = 1/s²
The output becomes:
V(s) = 1/(s² (0.2 s + 1)) = 5/(s² (s + 5))
With repeated roots, the partial fraction expansion terms must include all
lower powers of the repeated terms. In this case then, the coefficients and terms
are written as
5/(s² (s + 5)) = K1/(s + 5) + K2/s² + K3/s
To solve for K1 we can multiply through by (s + 5) and set s = -5:
K1 = 5/s² at s = -5 (the K2 and K3 terms vanish)
K1 = 5/(-5)² = 1/5
To solve for K2 we multiply through by s² and set s = 0:
K2 = 5/(s + 5) at s = 0 (the K1 and K3 terms vanish)
K2 = 5/5 = 1
With the lower power of the repeated root, we now have a problem if we
continue with the same procedure. If we multiply both sides by s and let s 0,
the K2 term becomes infinite (division by zero) because an s term is left in the
denominator. To solve for K3 then, it becomes necessary to take the derivative of
both sides with respect to s and then let s 0. This allows us to solve for K3 .
Cross-multiply by s2 s 5 to simplify the derivative:
5 = K1 s² + K2 (s + 5) + K3 s (s + 5)
Take the derivative with respect to s:
0 = 2 K1 s + K2 + K3 (2s + 5)
Now we can set s = 0 and solve for K3, the remaining coefficient:
0 = K2 + 5 K3 with K2 = 1
K3 = -1/5
Using the coefficients allows us to write the response as the sum of three easier
transforms:
V(s) = (1/5) · 1/(s + 5) + 1/s² - (1/5) · 1/s
And finally, we take the inverse Laplace transform of each to obtain the time
response:
V(t) = (1/5) e^(-5t) + t - 1/5
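The closed-form response V(t) = (1/5)e^(-5t) + t - 1/5 can be sanity-checked by integrating the original equation 0.2 dV/dt + V = t numerically (a Python sketch; the forward-Euler step size and time horizon are arbitrary choices):

```python
import math

# Closed-form response from the example for 0.2 dV/dt + V = t, V(0) = 0.
def v_exact(t):
    return 0.2 * math.exp(-5.0 * t) + t - 0.2   # = (1/5) e^(-5t) + t - 1/5

# Forward-Euler integration of the original differential equation.
dt, V, t = 1e-4, 0.0, 0.0
while t < 2.0:
    V += (t - V) / 0.2 * dt   # dV/dt = (input - V) / tau, tau = 0.2
    t += dt

assert abs(V - v_exact(t)) < 1e-3
print(f"numeric V(2) = {V:.4f}, exact = {v_exact(2.0):.4f}")
```

After the exponential transient dies out, the response simply tracks the ramp with a constant lag of 1/5 second, which is the steady-state ramp-following behavior of a first-order system with τ = 0.2.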
As shown in the previous example, we can write and solve simultaneous equa-
tions instead of using the method shown above. For this example it means getting
three equations to solve for the three coefficients. If we multiply through by the
denominator of the left-hand side (as we did before we took the derivative), we
get the partial fraction expansion expressed as
5 = K1 s² + K2 (s + 5) + K3 s (s + 5)
Now collect the coefficients of s to obtain the three equations:
(K1 + K3) s² + (K2 + 5 K3) s + 5 K2 - 5 = 0
The three equations (and three unknowns) are
K1 + K3 = 0
K2 + 5 K3 = 0
K2 = 1
Once again we get the same values for the coefficients and the inverse Laplace
transforms result in the same time response. For larger systems it is easy to write the
equations in matrix form to solve for the coefficients as illustrated below:
[1 0 1][K1]   [0]         [K1]   [1 0 1]^{-1}[0]   [ 1/5]
[0 1 5][K2] = [0]   and   [K2] = [0 1 5]    [0] = [ 1  ]
[0 5 0][K3]   [5]         [K3]   [0 5 0]    [5]   [-1/5]
When written in matrix form there are many software packages and calculators
available for inverting the matrix and solving for the coefficients.
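A minimal NumPy sketch of that matrix solution (illustrative only; numpy.linalg.solve factors the matrix rather than forming the inverse explicitly):

```python
# Solve K1 + K3 = 0, K2 + 5 K3 = 0, 5 K2 = 5 for the expansion coefficients.
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 5.0],
              [0.0, 5.0, 0.0]])
b = np.array([0.0, 0.0, 5.0])

K = np.linalg.solve(A, b)
print(K)  # K1 = 1/5, K2 = 1, K3 = -1/5
```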
Analysis Methods for Dynamic Systems 95
(K1 + K2)s^2 + (K1 + K3)s + (K1 − 1) = 0

Now it is easy to solve for the coefficients:

K1 − 1 = 0 → K1 = 1
K1 + K2 = 0 → K2 = −1
K1 + K3 = 0 → K3 = −1

The response can now be written as

Y(s) = 1/s − (s + 1)/(s^2 + s + 1)
When we look at the transform table we find that we are close but not quite there yet. We need one more step to put it in a form where we can use the transforms in the table. Knowing the real and imaginary portions of our roots, we can write the second-order denominator as

s^2 + s + 1 = (s + |Real|)^2 + |Imag|^2 = (s + 1/2)^2 + (√3/2)^2 = (s + a)^2 + b^2
Now we have two identities from the table that we can use:
L⁻¹[ b/((s + a)^2 + b^2) ] = e^{-at} sin(bt)   and   L⁻¹[ (s + a)/((s + a)^2 + b^2) ] = e^{-at} cos(bt)
With one last step we have the form we need to perform the inverse Laplace.
Take the second-order term and break it into two terms in the form of the two
Laplace transform identities given above.
Y(s) = 1/s − (s + 1)/(s^2 + s + 1) = 1/s − (s + 1/2)/[(s + 1/2)^2 + (√3/2)^2] − (1/√3)·(√3/2)/[(s + 1/2)^2 + (√3/2)^2]

where

ωn = 1 rad/sec,   ζ = 1/2,   and   ωd = ωn√(1 − ζ^2) = √3/2
With the natural frequency and damping ratio known, the response of a second-order system to a unit step input is (from Table 1)

y(t) = 1 − (1/√(1 − ζ^2)) e^{-ζωn t} sin(ωd t + φ),   where ωd = ωn√(1 − ζ^2) and φ = tan⁻¹(√(1 − ζ^2)/ζ)

This is the same response obtained using partial fraction expansion where the sine and cosine terms have been combined into a sine term with a phase angle.
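The closed-form step response can be verified numerically against a simulation of 1/(s^2 + s + 1); the following sketch assumes SciPy is available:

```python
# Compare the analytic underdamped step response (wn = 1, zeta = 0.5) with
# scipy.signal.step for the transfer function 1/(s^2 + s + 1).
import numpy as np
from scipy import signal

wn, zeta = 1.0, 0.5
wd = wn * np.sqrt(1 - zeta**2)                # damped natural frequency
phi = np.arctan2(np.sqrt(1 - zeta**2), zeta)  # phase angle

t = np.linspace(0, 15, 600)
_, y = signal.step(signal.TransferFunction([1], [1, 1, 1]), T=t)

y_analytic = 1 - np.exp(-zeta * wn * t) / np.sqrt(1 - zeta**2) * np.sin(wd * t + phi)
print(np.max(np.abs(y - y_analytic)))  # should be small (numerical error only)
```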
One of the more important connections we must make at this point is that we actually knew this was the response from the very time we started the example, once we calculated the roots of the second-order denominator. The real portion of the roots equals −ζωn and the imaginary portion of the roots is the damped natural frequency ωd. As we see in Section 3.4.3, this forms the foundation of using the s-plane to determine a system's response in the time domain.
EXAMPLE 3.7
To conclude this section on Laplace transforms, let us once again use the mass-
spring-damper equation and now solve it using Laplace transforms. Remember
that the original differential equation, developed using several methods, is
f(t) = m x″ + b x′ + k x

Then taking the Laplace transform with zero initial conditions:

F(s) = m s^2 X(s) + b s X(s) + k X(s) = Input

X(s) can easily be solved for since all the derivative terms have been removed during the transform. Solving for X(s) results in

Output = X(s) = [1/(m s^2 + b s + k)]·Input
If the input is a unit step: Unit step = 1/s.

Then the output is given by

X(s) = [1/(m s^2 + b s + k)]·(1/s)

If we divide top and bottom by m we see a familiar result:

X(s) = [(1/m)/(s^2 + (b/m)s + (k/m))]·(1/s)
Aside from a scaling factor of 1/k, this is one of the entries in the table, where ωn^2 = k/m and 2ζωn = b/m. If the system is overdamped we have two real roots from the second-order polynomial in the denominator and the system can be solved as the sum of two first-order systems. When we have critical damping we have two repeated real roots and again the solution was already discussed in Section 3.3.1.2. Finally, if the system is underdamped we get complex conjugate roots and the system exhibits overshoot and oscillation. Whenever first- or second-order systems are examined with respect to step inputs, we can use the generalized responses developed in Section 3.3.2. If different input functions are used, then the partial fraction expansion tools still allow us to develop the time response of the system.
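To see the underdamped case concretely, the step response of 1/(m s^2 + b s + k) can be simulated; the values m = 1, b = 2, k = 4 (giving ζ = 0.5) are illustrative choices of mine, not from the text:

```python
# Step response of X(s)/F(s) = 1/(m s^2 + b s + k) for an underdamped case.
import numpy as np
from scipy import signal

m, b, k = 1.0, 2.0, 4.0            # zeta = b / (2*sqrt(k*m)) = 0.5 -> underdamped
sys = signal.TransferFunction([1], [m, b, k])

t = np.linspace(0, 10, 500)
_, x = signal.step(sys, T=t)

print(x[-1])              # settles near the static deflection 1/k = 0.25
print(np.max(x) > x[-1])  # overshoot is present for an underdamped system
```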
The most common format used when designing control systems is block dia-
grams using interconnected transfer functions. In our brief introduction to block
diagrams, we learned simple reduction techniques and how the block diagram is
simply representing some equation pictorially. Section 2.3.2 presented some of the
basic properties and reduction steps. The goal in this section is to learn what the
actual blocks represent and how to develop them.
Block diagrams are lines representing variables that connect blocks containing
transfer functions. A transfer function is simply a relationship between the output
variable and input variable represented in the s-domain.
General notation is

Transfer function: G(s) = C(s)/R(s)
EXAMPLE 3.8
Taking the Laplace of this ODE leads to a transfer function and block as shown:

(m s^2 + b s + k) X(s) = Input = R(s)

X(s)/R(s) = 1/(m s^2 + b s + k)

With a uniform set of units, R(s) is a force input, C(s) is a position output, and the coefficients m, b, and k must each be consistent with R(s) and C(s). Each s is associated with units of 1/sec.
EXAMPLE 3.9
Another example that we have already derived the differential equation for is a first-
order RC circuit. Taking the differential equation and following the same procedure,
RC (dc/dt) + c = r(t)

(RC s + 1) C(s) = R(s)

C(s)/R(s) = 1/(RC s + 1)
Now the units of both R(s) and C(s) are volts, where their relationship to each other is defined by the transfer function in the block. Once we know the input R(s) we can develop the output C(s). If we pictorially represent the input as a unit step change in
voltage, then the expected output voltage is a first-order step response, as shown in
Figure 7.
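The expected first-order output in Figure 7 is easy to reproduce; a sketch using an assumed time constant RC = 0.1 sec (the value is mine, not from the text):

```python
# Unit step response of C(s)/R(s) = 1/(RC s + 1); the output reaches about
# 63.2% of its final value after one time constant (t = RC).
import numpy as np

RC = 0.1                          # assumed time constant, seconds
t = np.linspace(0, 0.5, 501)
c = 1 - np.exp(-t / RC)           # analytic first-order unit step response

i = np.searchsorted(t, RC)        # index closest to one time constant
print(c[i])                       # about 0.632 at t = RC
```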
So now we are getting to the point where, as alluded to in the previous section,
we are able to look at the form of our transfer function and quickly and accurately
predict the type of response that we will have for a variety of inputs.
3.4.3.1 Characteristics of Transfer Functions
Several notes at this point about the Laplace transform and corresponding transfer
function will help us understand future sections when designing control systems. The
denominator of the transfer function G(s) is usually a polynomial in terms of s where the highest power of s relates to the order of the system. Hence, the mass-spring-damper system is a second-order system and has a characteristic equation (CE) of ms^2 + bs + k = 0. The roots of the CE directly relate to the type of response the system exhibits. Looking in the Laplace transform tables clarifies this more. For example, the first-order system transfer function can be written as a/(s + a). This corresponds to the time response e^{-at}. The root s = −a of the characteristic equation relates directly to the rate of rise or decay in the system and is thus related to the system time constant, τ, where τ = 1/a. The same relationship between roots
and the system response is found in Table 1 for second-order systems like the mass-
spring-damper system. If the roots of a second-order CE are both negative and real,
the system behaves like two first-order systems in series. If the roots have imaginary
components, they are complex conjugates according to the quadratic equation and
the system is underdamped and will experience some oscillation. If the real portion
of the roots is ever positive, the system is unstable since the time response now includes a factor e^{at} (a > 0), thus experiencing exponential growth (until something breaks).
These relationships were formed while presenting partial fraction expansions and
form the foundation for the root locus technique presented later.
The roots of the characteristic equation are often plotted in the s-plane. The
s-plane is simply an xy plotting space where the axes represent the real and ima-
ginary components of the roots of the characteristic equation. This is shown in
Figure 8. The parameters used to describe first- and second-order systems are all
graphically present in the s-plane. The time constant for a first-order system (and the
decay rate for a second-order system) relates to the position on the real axis. The
imaginary axis represents the damped natural frequency, the radius (distance) to the
complex pole is the natural frequency, and the cosine of the angle between the
negative real axis and the radial line drawn to the complex pole is the damping
ratio. Thus, the s-plane is a quick method of visually representing the response of
our dynamic system.
Since anything with a positive real exponent will exhibit exponential growth,
the unstable region includes the area to the right of the imaginary axis, commonly
referred to as the right hand plane, RHP. In the same way, if all poles are to the left
of the imaginary axis, the system is globally stable since all poles will include a term
that decays exponentially and that is multiplied by the total response. (Thus, when the decaying term approaches zero, so does the total response.) This side is commonly termed the left-hand plane, or LHP. The further to the left the poles are in the
plane, the faster they will decay to a steady-state value, a property well worth
knowing when designing controllers. Figure 9 illustrates the types of response
depending on pole locations in the s-plane.
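These pole-location rules translate directly into a few lines of code; the classification function and example coefficients below are my own sketch, not from the text:

```python
# Classify a system's behavior from the roots of its characteristic equation.
import numpy as np

def classify(ce_coeffs):
    """Return a rough response label from characteristic-equation coefficients."""
    roots = np.roots(ce_coeffs)
    if np.any(roots.real > 0):
        return "unstable"                 # any RHP pole grows exponentially
    if np.any(np.abs(roots.imag) > 1e-12):
        return "stable, oscillatory"      # LHP complex conjugates: decaying oscillation
    return "stable, non-oscillatory"      # all real LHP poles: pure exponential decay

print(classify([1, 2, 4]))   # m=1, b=2, k=4: underdamped, stable and oscillatory
print(classify([1, 5, 4]))   # overdamped: two real LHP roots
print(classify([1, -1, 4]))  # negative damping: RHP poles, unstable
```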
There are two more useful theorems for analyzing control systems represented
with block diagrams: the initial value theorem and final value theorem. These theo-
rems relate the s-domain transfer function to the time domain without having to first
take the inverse Laplace transform.
Initial value theorem (IVT):

f(0) = lim(s→∞) s·F(s)
Final value theorem (FVT):

f(∞) = lim(s→0) s·F(s)

The final value of the output f(t) as
time continues is equal to the output in the s-domain times s as s in the limit
approaches zero. In almost every case you can determine the steady-state output of a
system by multiplying the transfer function (TF) times s and the input (in terms of s)
and setting s to zero. The resulting value is the steady-state value that the system will
reach in the time domain. For step inputs this becomes very easy since the s in the
theorem cancels with the 1/s representing the step input. Thus, for a unit step input
the final value of the transfer function is simply the value of the TF when s → 0.
With the tools described up to this point we can now build the block diagram,
determine the content of each block, close the loop (as our controller ultimately will),
and reduce the block diagram to a single block to easily determine the closed loop
dynamic and steady-state performance.
To work through the application of the FVT, let's solve for the steady-state
output using the two examples discussed in previous sections, the RC circuit and the
m-b-k mechanical system.
EXAMPLE 3.10
We will take the transfer function and block diagram for the RC circuit but now with
a step of 10 V in magnitude. This is the equivalent of closing a switch at t = 0 and measuring the voltage across the capacitor. The transfer function, from before, is C(s)/R(s) = 1/(RCs + 1).

Step input with a magnitude of 10 → R(s) = 10·(1/s) = 10/s

C_ss = lim(t→∞) c(t) = lim(s→0) s·C(s) = lim(s→0) s·[1/(RCs + 1)]·(10/s) = 10
Although there are no surprises here, the concept is clear and the FVT is easy to use and apply
when working with block diagrams. We finish our discussion of the FVT by applying
it to the mass-spring-damper system developed earlier.
EXAMPLE 3.11
Taking the Laplace of the m-b-k system differential equation resulted in the transfer
function:
C(s)/R(s) = 1/(m s^2 + b s + k)

Step input (force) with a magnitude of F → R(s) = F·(1/s) = F/s

Apply the final value theorem to the output, C(s):

C_ss = lim(t→∞) c(t) = lim(s→0) s·C(s) = lim(s→0) s·[1/(m s^2 + b s + k)]·(F/s) = F/k
This simply tells us what we could have ascertained from the model: that after
all the transients decay away, the final displacement of the mass is the steady-state
force divided by the spring constant, where the force magnitude is F. This agrees with
the steady-state value determined from the differential equations in previous sec-
tions.
While these are simple examples to illustrate the procedure, the method is
extremely fast even when the block diagrams get large and more complex. The
FVT is frequently used in determining the steady-state errors for closed loop con-
trollers.
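Because the FVT for a step input reduces to evaluating the transfer function at s = 0, it is one line of polynomial arithmetic. The helper below is a sketch of mine (it presumes all poles are in the LHP, as the FVT requires):

```python
# Final value theorem for a step input: lim_{s->0} s * G(s) * (mag/s) = mag * G(0).
import numpy as np

def fvt_step(num, den, step_mag=1.0):
    """Steady-state output of G(s) = num/den driven by a step of size step_mag."""
    return step_mag * np.polyval(num, 0.0) / np.polyval(den, 0.0)

# RC circuit, 10 V step: C(s)/R(s) = 1/(RC s + 1) -> final value 10 V.
print(fvt_step([1.0], [0.1, 1.0], step_mag=10.0))   # 10.0

# Mass-spring-damper, force step F: 1/(m s^2 + b s + k) -> F/k.
m, b, k, F = 1.0, 2.0, 4.0, 8.0
print(fvt_step([1.0], [m, b, k], step_mag=F))       # 8/4 = 2.0
```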
3.4.3.2 Common Transfer Functions Found in Block Diagrams
Finally, let us examine several common block diagram transfer functions. Several
blocks are often found in block diagrams representing control systems, some of
which tend to confuse beginners. Each block described here may be repeated
throughout the block diagram, each time representing a different component and
with different physical units. The goal here is not to list all the possible applications
of each block but instead to have us recognize the basic common forms found in all
different kinds of systems (electrical, mechanical, etc.). For example, if we under-
stand a first-order lag term, we will understand its input/output relationship whether
it represents an RC circuit, shaft speed with rotary inertia, or phase-lag electronic
controller. The other note to make here is that all systems can be reduced into
combinations of these blocks. If we have a fifth-order polynomial (characteristic
equation) in the denominator of our transfer function, we have several combinations
possible when it is factored: five real roots corresponding to five first-order terms,
three real roots and one complex conjugate pair corresponding to three first-order
terms and one second-order oscillatory term, or one real root and two complex
conjugate pairs corresponding to one first-order term and two oscillatory terms.
The gain, K, is a basic block and may represent many different functions in a
controller. This block may multiply R(s) by K without changing the variable type
(e.g., a proportional controller multiplying the error signal) or represent an amplifier
in the system that associates different units on the input and output variables. An
example is a hydraulic valve converting an electrical input (volts or amps) into a
corresponding physical output (pressure or flow). The valve coefficients resulting
from linearizing the directional control valve are used in this manner. Therefore, when using this block be sure to recognize what the units are supposed to be and with what units the gain was actually determined. There are no dynamics associated
with the gain block; the output is always, and without any lead or lag, a multiple K
of the input.
Integral:
This block represents a time integral of the input variable where c(t) = ∫ r(t) dt. Two
common uses include integrating the error signal to achieve the integral term in a
proportional-integral-derivative (PID) controller and integrating a physical system
variable such as velocity into a position. In terms of units, then, it multiplies the
input variable by seconds. If the input is an angular acceleration (rad/sec^2), the output is an angular velocity (rad/sec). Since most physical systems are integrators (remember the
physical system relationships from Chapter 2), this is a common block.
One special comment is appropriate here. The integral block is not to be
confused with step inputs even though both are represented by 1/s. The block con-
tains a transfer function that is simply the ratio of the output to the input. Thus it is
possible to have an integral block with a step input in which case the output would
be represented by
C(s) = G(s)·R(s) = (1/s)(1/s) = 1/s^2 = ramp output (from tables)
This concept is sometimes confusing when initially learning block diagrams and s
transforms since one 1=s term is the system model and the other 1=s term is the input
to the system.
Derivative:
This block represents a derivative function where the output is the derivative of the
input. A common use is in the derivative term of a PID controller block. Use of
the block requires caution since it easily amplifies noise and tends to saturate
outputs. The pattern should be noted where the integral and derivative blocks
are inverses of each other and if connected in series in a block diagram, would
cancel. The units associated with the derivative block are 1/sec, the inverse of the
integral block.
First-order system (lag):
This block is commonly used in building block diagrams representing physical sys-
tems. It might represent an RC circuit as already seen, a thermal system, liquid level
system, or rotary speed inertial system. The input-output relationship for a first-
order system in the time domain has already been discussed in Section 3.3.2. Based
on the time constant, t, we should feel comfortable characterizing the output from
this system. In the next section when we examine frequency domain techniques, we
will see that the output generally lags the input (except at very low frequencies) and
hence this transfer function is often called a first-order lag.
First-order system (lead):
This block is found in several controllers and some systems. It is similar to the first-
order lag except that now the output leads the input. Most physical systems do not
have this characteristic as real systems usually exhibit lag, as found in the previous
block. The similarities and differences will become clearer when these blocks are
examined in the frequency domain.
Second-order system:
EXAMPLE 3.12
To summarize many of the concepts presented thus far, let us take a model of a
physical system, develop the differential equation describing the physics of the sys-
tem, convert it to a transfer function, and plot the time response if the system is
subjected to a step input. The system we will examine is a simple rotary group with
inertia J and damping B; a torque, T, as shown in Figure 10, acts on the system.
To derive the differential equation, we sum the torques acting on the system and set them equal to the inertia multiplied by the angular acceleration:

ΣT = J dω/dt = T − Bω
Now we can take the Laplace transform (ignoring initial conditions) and solve for the output, ω:

J dω/dt + Bω = T  →  (Js + B)ω = T  →  ω = [1/(Js + B)]·T

ω(s)/T(s) = 1/(Js + B) = (1/B)·1/((J/B)s + 1) = (1/B)·1/(τs + 1)

This results in a first-order system time constant, τ = J/B, and a scaling factor of 1/B, allowing us to quickly write the time response as

ω(t) = (1/B)(1 − e^{-t/τ}) = (1/B)(1 − e^{-(B/J)t})
Finally, we can plot the generalized response, including the scaling factor, as
shown in Figure 11. So without needing to perform an inverse Laplace transform, we
have analyzed the rotary system, developed the transfer function, and plotted the
time response to a step input. Since when dealing with linear systems separate
responses can be added, even complex systems are easily analyzed with the skills
shown thus far. Complex systems always factor into a series of simple systems (as
outlined above) whose individual responses are added to form the total response.
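The rotary-system derivation can be cross-checked by simulating 1/(Js + B) and comparing with the analytic response; J = 2 and B = 4 are illustrative values of mine:

```python
# Unit-step (torque) response of omega(s)/T(s) = 1/(J s + B) versus the
# analytic result omega(t) = (1/B) * (1 - exp(-(B/J) t)), with tau = J/B.
import numpy as np
from scipy import signal

J, B = 2.0, 4.0                   # assumed inertia and damping
t = np.linspace(0, 5 * J / B, 400)
_, w = signal.step(signal.TransferFunction([1], [J, B]), T=t)

w_analytic = (1 / B) * (1 - np.exp(-(B / J) * t))
print(np.max(np.abs(w - w_analytic)))   # agreement to numerical precision
```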
φ = ∠G(s) = tan⁻¹[Imag/Real]

Since s in the Laplace domain is a complex variable, the magnitude and phase relationship can be more clearly shown graphically using phasors as in Figure 12. Since the multiplication of a phasor by the imaginary number j corresponds to a counterclockwise rotation of 90 degrees, we see that j·j = −1, the identity we are already familiar with. The first imaginary j is a vertical line of magnitude 1; rotating it 90 degrees by multiplication with the other j means it still has a magnitude of 1 but now lies on the negative real axis, hence equal to −1. When we construct Bode plots we let s = jω since the real term, σ, present in the complex variable s = σ + jω, has decayed (i.e., steady state). Inserting jω into the transfer functions then allows us to construct Bode plots of magnitude and phase as ω is increased.
Fortunately, the common factors representing real physical models are quite
simple, and it is seldom necessary to worry about phasors, as we see in the next
section. The key points to remember from this discussion are that the real portions of
the s terms decay out (they are the coefficients on the decaying exponential terms) and thus each s term, represented now as an imaginary term s = jω, introduces 90 degrees of phase between the input and the output. With this simple understanding we are
ready to relate the common transfer functions examined earlier to the equivalent
Bode plots in the frequency domain.
A positive dB value implies that the output signal is greater in magnitude than the input and a negative dB value implies that the output signal is of a lesser magnitude than the input signal.
The phase plot, commonly the lower trace, uses a linear y axis to plot the actual
angle versus the log of the frequency. The magnitude and phase plots share the same
frequency since the data are generated (analytically or experimentally) together. A
positive phase angle means that the output signal leads the input signal and vice
versa for a negative phase angle, commonly termed lag. This will become clearer as
example plots are generated.
To begin the process of constructing and using Bode plots, we start with our
existing block diagram and transfer function knowledge and extend it into the fre-
quency domain. The advantages will be evident once we understand the process. As
we have seen thus far, most physical systems can be factored into subsystems, or factors. Common blocks (gain, integral, first order, and second order) were presented in the previous section when discussing block diagrams. These same transfer
functions (blocks) are the building blocks for constructing Bode plots. Now for the
advantage: The blocks multiply when connected in series (as when factored and in
block diagrams) but they are plotted on a logarithmic scale when using a Bode plot.
Multiplication becomes addition when using logarithms!
Gain factor, K:
A transfer function representing a gain block produces a Bode plot where the mag-
nitude represents the gain, K, and the phase angle is always zero, as shown using the
equations for magnitude and phase angle given below.
Mag_dB = 20 log K = −20 log(1/K)

Phase: φ = tan⁻¹(Im/Re) = tan⁻¹(0/K) = 0 degrees
The phase angle is always zero for a gain factor K since no imaginary term is
present and the ratio of the imaginary to the real component is always zero. Figure
14 gives the Bode plot representing the gain factor K.
Since individual effects add, varying the gain K in a system only affects the
vertical position of the magnitude plot for the total system response. A different
value for K does not change anything on the phase angle plot, as the phase angle
contribution is always zero. The example plot given in Figure 14 illustrates this
graphically. When K represents the proportional gain in a controller, we can define
stability parameters that make it easy to find what proportional gain will make the
system go unstable given a Bode plot for the system. Using phasor notation, the gain
is represented by a line on the horizontal positive x axis (zero phase angle) with a
length (magnitude) K.
Integral:
The integral block produces a line on the magnitude plot having a constant slope of −20 dB/decade along with a line on the phase angle plot at a constant −90 degrees. Remember that s is replaced by jω in the transfer function and as ω is increased the magnitude will decrease. The slope tells us that the line falls 20 dB (y scale) for every decade on the x axis (log of frequency). A decade is between any multiple of 10 (0.1 to 1, 1 to 10, 5 to 50, etc.). The slope of −20 comes from the equation used to calculate the dB magnitude of the output/input ratio.

Mag_dB = 20 log|1/(jω)| = −20 log|jω| = −20 log ω

Phase: φ = tan⁻¹(Im/Re) = tan⁻¹((−1/ω)/0) = −90 degrees
The integrator Bode plot is shown in Figure 15. The line will cross the 0 dB line at ω = 1 rad/sec since the magnitude of 1/(jω) = 1 and the log of 1 is 0. Remember that this is the amount added to the total response for each integrating factor in our system. The phase angle contribution was explained earlier using phasors, where we saw that each s (or jω in our case) contributes 90 degrees of phase. That is why two imaginary numbers multiplied by each other equal −1 (a phasor of magnitude one along the negative real axis). When the imaginary number is in the denominator, the angle contribution becomes negative instead of positive (j·j = −1 is still true, it just rotates clockwise 90 degrees for each s instead of counterclockwise).
Understanding this concept makes the remaining terms easy to describe.
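Both factors are easy to verify with SciPy's Bode routine; this sketch checks the integrator's 0 dB crossing at 1 rad/sec, its −20 dB/decade slope, and its constant −90 degree phase:

```python
# Bode data for an integrator G(s) = 1/s, one decade on each side of 1 rad/sec.
import numpy as np
from scipy import signal

w = np.array([0.1, 1.0, 10.0])
_, mag, phase = signal.bode(signal.TransferFunction([1], [1, 0]), w=w)

print(mag)    # [20, 0, -20] dB: crosses 0 dB at 1 rad/sec, falling 20 dB per decade
print(phase)  # [-90, -90, -90] degrees at every frequency
```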
Derivative:
A derivative block, s, will have a positive slope of +20 dB/decade and a constant +90 degrees phase angle. Since jω is now in the numerator, increasing the frequency increases the magnitude of the factor. As the derivative factor Bode plot in Figure 16 shows, the magnitude plot still crosses 0 dB at ω = 1 rad/sec because the magnitude of the factor is still equal to unity at that frequency. The magnitude and phase angle equations given for the integrator block are the same for the derivative block, the only exception being that the negative signs become positive (log a = −log(1/a)).
The same explanation as before also holds true regarding the phase angle, except that now the imaginary j is in the numerator and contributes a positive 90
degrees. In fact, if we look we see that a factor in the numerator is just the horizontal
mirror image of that same factor in the denominator. For all remaining factors this
property is true; each magnitude and phase plot developed for a factor in the
numerator, when flipped horizontally with respect to the zero value (dB or degrees),
becomes the same plot as for when the factor appears in the denominator. Thus when the same factor, appearing once in the numerator and once in the denominator, is added, the net result is a magnitude line at 0 dB and a phase angle line at 0 degrees. This relationship is also evident in the s-domain using transfer functions; multiplying an integrator block (1/s) times a derivative block (s) produces a value of unity,
hence a value of 0 dB and 0 degrees. (Adding factors in the frequency domain is the
same as multiplying factors in the s-domain.)
First-order system (lag):
Remembering once again that s is replaced by jω helps us to understand the plots for this factor. At low frequencies the magnitude of the jωτ term is relatively very small compared to the 1 and the overall factor is close to unity. This produces a low frequency asymptote at 0 dB and a phase angle of zero degrees. As the frequency increases, the τs term in the denominator begins to dominate and it begins to look like an integrator with a slope of −20 dB/decade and a phase angle of −90 degrees.
Plotting this on the logarithmic scale produces relatively straight line asymptotic
segments, as shown in Figure 17. Therefore we commonly define a low and high
frequency asymptote, used as straight line Bode plot approximations.
The break point occurs at ω = 1/τ since the contributions from the two terms in the denominator are equal here. The real magnitude curve is actually 3 dB down at this point, and at points ω = 0.5/τ and ω = 2/τ (an octave of separation on each side) the actual curve is 1 dB beneath the asymptotes. To calculate the exact values, we can use the following magnitude and phase equations, similar to before:

Mag_dB = 20 log|1/(jωτ + 1)| = −20 log √((ωτ)^2 + 1)

Phase: φ = −tan⁻¹(ωτ)

Since the phase angle is negative and grows as frequency increases, we call this a lag system. This means that as the input frequency is increased the output lags (follows) the input by an increasing number of degrees (time). The phase angle is −45 degrees at the break frequency, ω = 1/τ.

First-order system (lead):
When the first-order factor is in the numerator, it adds positive phase angle and the
output leads the input. The magnitude and angle plots are the mirror images of the
first-order lag system, as the equations also reveal:
Mag_dB = 20 log|jωτ + 1| = 20 log √((ωτ)^2 + 1)

Phase: φ = +tan⁻¹(ωτ)
The magnitude plot still has a low frequency asymptote at 0 dB but now
increases at 20 dB/decade when the input frequency is beyond the break frequency.
The phase angle begins at 0 and ends at 90 degrees and the output now leads the
input at higher frequencies. These characteristics are shown on the Bode plot for the
first-order lead system in Figure 18.
The same errors (sign is opposite) are found between the low and high fre-
quency asymptotes as discussed for the first-order lag system and the same explana-
tions are valid. The first-order lag and lead Bode plots are frequently used elements
when designing control systems and knowing how the phase angle adds or subtracts
allows us to easily design phase lead or lag and PD or PI controllers using the
frequency domain.
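A short numerical check of the lag/lead mirror-image property at the break frequency (τ = 1 assumed; the lead factor is improper on its own, so its frequency response is computed directly):

```python
# Break-point behavior of a first-order lag 1/(tau*s + 1) and lead (tau*s + 1).
import numpy as np
from scipy import signal

tau = 1.0
w = np.array([1.0 / tau])   # evaluate at the break frequency

# Lag: 1/(tau*s + 1) via SciPy's Bode routine.
_, mag_lag, ph_lag = signal.bode(signal.TransferFunction([1], [tau, 1]), w=w)

# Lead: (tau*s + 1) evaluated by hand at s = j*w.
g_lead = 1j * w * tau + 1
mag_lead = 20 * np.log10(np.abs(g_lead))
ph_lead = np.degrees(np.angle(g_lead))

print(mag_lag[0], ph_lag[0])    # about -3.01 dB and -45 degrees
print(mag_lead[0], ph_lead[0])  # mirror image: about +3.01 dB and +45 degrees
```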
Second-order system:
C(s)/R(s) = ωn^2/(s^2 + 2ζωn s + ωn^2)   or   1/((1/ωn^2)s^2 + (2ζ/ωn)s + 1)
Finally, we have the second-order system. As in the step response curves for second-order systems, we again have multiple curves to reflect the two necessary parameters, natural frequency and damping ratio. Each line in Figure 19 represents a different damping ratio. Several analogies can be made from our experience with the previous factors, only now there are three terms in the denominator, as the magnitude and phase angle equations show.
Mag_dB = 20 log|1/((jω/ωn)^2 + 2ζ(jω/ωn) + 1)| = −20 log √((1 − ω^2/ωn^2)^2 + (2ζω/ωn)^2)

Phase: φ = −tan⁻¹[ (2ζω/ωn) / (1 − ω^2/ωn^2) ]
At low frequencies, both s (or jω) terms are near zero, the factor is near unity (ωn^2/ωn^2), and can be approximated by a horizontal asymptote. At high frequencies the 1/s^2 term dominates and we now have twice the slope, at −40 dB/decade, for the high frequency asymptote. The phase angle similarly begins at zero but now has j·j in the high frequency term and ends at −180 degrees, or twice that of a first-order system. Thus for each 1/jω in the highest order term in the denominator, another 90 degrees of lag is added. Therefore a true first-order system can never be more than 90 degrees out of phase and a second-order system never more than 180 degrees.
Figure 19 also allows us to determine the natural frequency and damping ratio
by inspection. This is developed further when we discuss how to take our Bode plot
and derive the approximate transfer function from the plot. The intersection of the
low and high frequency asymptotes occurs near the natural frequency (as does the
peak), and if any peak in the magnitude plot exists, the system damping ratio is less
than 0.707. At the natural frequency (i.e., breakpoint) the phase angle is always −90 degrees, regardless of the damping ratio. If the system is overdamped, it factors into two first-order systems and can be plotted as those two systems. Thus the second-order Bode plots only show the family of curves where ζ < 1.
Although not explicitly shown here, with the second-order term appearing in
the numerator we get the same low frequency asymptote, a slope of +40 dB/decade
on the high frequency asymptote, and a phase angle starting at zero and ending at
+180 degrees. This is exactly what we expect after seeing how the other Bode plot
factors behave in the denominator and the numerator.
3.5.1.2 Constructing Bode Plots from Transfer Functions
When we reach the point of designing controllers in the frequency domain, we must
often plot the different factors together to construct Bode plots representing the
combined controller and physical system. In this section we look at some guidelines
to make the procedure easier. The most fundamental approach is to develop a Bode
plot for every factor in our controller and physical system and add them all together
when we are finished. In general this is the recommended procedure. There are also
guidelines we can follow that can usually speed up the process and, at a minimum,
provide useful checks when we are finished. Several guidelines are listed below:
To find the low-frequency asymptote, use the FVT to determine the steady-
state gain of the whole transfer function and convert it to dB. The FVT,
when applied to the whole system, gives us the equivalent steady-state gain
between the system output and input. If the gain is greater than one, the
output level will exceed the input level. This gain may be comprised of
several different gains, some electronic and some from gains inherent in
the physical system. Converting the gain into decibels should give us the
value of the low frequency asymptote found by adding the individual Bode
plots.
If we have one or more integrators in our system, the gain approaches
infinity using the FVT. Each integrator in the system adds −20 dB/decade
of slope to the low frequency asymptote. Therefore, if we have a low fre-
quency asymptote with a slope of −40 dB/decade, we should have
two integrators in our system (1/s²).
Recognize that each power of s in the numerator ultimately adds 90 degrees
of phase and a +20 dB/decade contribution to the high frequency asymptote,
and the opposite for each power of s in the denominator. For example, a
third-order numerator and a fourth-order denominator appear as a first-
order system at high frequencies with a final high frequency asymptote of
−20 dB/decade and a phase angle of −90 degrees. Therefore,
1. The high frequency asymptote will be −20 dB/decade times (n − m), where
n is the order of the denominator and m is the order of the numerator.
2. The final high frequency phase angle achieved will be −90 degrees times
(n − m).
Even without constructing the individual Bode plots, much of the overall system
behavior can be understood in the frequency domain by applying these simple guidelines.
As discussed later, the process can be reversed and a Bode plot may be used to derive
the transfer function for the system.
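As a quick numerical check of these two guidelines, we can evaluate a sample transfer function at high frequencies and confirm the −20(n − m) dB/decade slope and −90(n − m) degree phase limits. A sketch in Python (the sample transfer function and helper below are illustrative, not from the text):

```python
import cmath, math

def freq_response(num, den, w):
    """Evaluate G(jw) for polynomial coefficients given in decreasing powers of s."""
    s = 1j * w
    n = sum(c * s**(len(num) - 1 - i) for i, c in enumerate(num))
    d = sum(c * s**(len(den) - 1 - i) for i, c in enumerate(den))
    return n / d

# Example: G(s) = (s + 1) / (s^3 + 2s^2 + 2s + 1), so n - m = 2
num, den = [1, 1], [1, 2, 2, 1]

# Slope between two high frequencies one decade apart (dB per decade)
db = lambda w: 20 * math.log10(abs(freq_response(num, den, w)))
slope = db(1e4) - db(1e3)          # expect about -20*(n - m) = -40 dB/decade

# Final phase angle (degrees); expect about -90*(n - m) = -180
phase = math.degrees(cmath.phase(freq_response(num, den, 1e6)))

print(round(slope), round(phase))  # → -40 -180
```

The same check works for any proper rational transfer function; only the (n − m) difference matters at high frequency.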
EXAMPLE 3.13
Let us now take a transfer function and develop the approximate Bode plot to
illustrate the principles learned in this section. We use the transfer function below,
which can be broken down into four terms: a gain K, an integrator, a first-order lead
term, and a first-order lag term. The Bode plot will be constructed using the approx-
imate straight line asymptotes for each term.
    G(s) = 10(s + 1) / (s(0.1s + 1))
To begin with, let us develop a simple table showing the straight line magni-
tude and angle approximations for each term (Table 2). The gain factor is plotted
as a line of constant magnitude at 20 dB (= 20 log 10), and its phase angle
contribution is zero for all frequencies. The integrator has a constant slope of
−20 dB/decade and crosses 0 dB at 1 rad/sec, as shown in Table 2. Its angle
contribution is always −90 degrees. The first-order lead term in the numerator
has its break frequency at 1 rad/sec (τ = 1 sec, break is at 1/τ). It is a horizontal
line at 0 dB before 1 rad/sec and has a slope of +20 dB/decade after the break
frequency. The phase angle begins at 0 degrees one decade before the break and
ends at +90 degrees one decade after the break frequency. Finally, the first-order
lag term has a time constant τ = 0.1 sec and thus a break frequency at 10 rad/sec.
After its break frequency, however, it has a slope of −20 dB/decade. The angle
contribution is also negative, varying from 0 to −90 degrees from one decade before
to one decade after the break frequency of 10 rad/sec.

Table 2  Straight Line Magnitude and Angle Approximations for Each Term

Magnitude (dB):

    ω (rad/sec)    K = 10    1/s     (s + 1)    1/(0.1s + 1)    Total
    0.1            20        20      0          0               40
    1              20        0       0          0               20
    10             20        −20     20         0               20
    100            20        −40     40         −20             0

Phase angle (degrees):

    ω (rad/sec)    K = 10    1/s     (s + 1)    1/(0.1s + 1)    Total
    0.1            0         −90     0          0               −90
    1              0         −90     45         0               −45
    10             0         −90     90         −45             −45
    100            0         −90     90         −90             −90
Once the individual magnitude and phase angle contributions are calculated,
they can simply be added together to form the final magnitude and phase angle plot
for the system. The total column for the magnitude and phase angle plot, shown in
Table 2, thus defines the final Bode plot values for the whole system. Graphically,
each individual term and the final Bode plot are plotted in Figure 20.
Checking our final plot using the guidelines above, we see that our low fre-
quency asymptote has a slope of −20 dB/decade, implying that we have one inte-
grator in our system. The high frequency asymptote also has a slope of −20 dB/
decade, meaning that the order of the denominator is one greater than the order of
the numerator. Since the phase angle does increase over some range of frequencies on
the final plot, we also know that we have at least one s term in the numerator adding
positive phase angle.
In this case, knowing the overall system transfer function that we started with,
we see that all the quick checks support what is actually the case. Our system has one
integrator, an s term in the numerator, and a denominator one order greater than the
numerator (order two in the denominator versus order one in the numerator). As we
reverse engineer the system transfer function from an existing Bode plot in later
sections, we see that these guidelines form the beginning point for the procedure.
To conclude this section: all transfer functions can be factored into the
terms described and each term plotted separately on the Bode plot. The final plot
is then simply the sum of all the individual plots, in both magnitude and phase angle.
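This additivity is exact for the true (not just asymptotic) magnitudes and phases, since decibels and angles both add when factors multiply. A short Python sketch (illustrative; the book's own computer examples use Matlab) that checks this for the Example 3.13 transfer function G(s) = 10(s + 1)/(s(0.1s + 1)) at ω = 10 rad/sec:

```python
import cmath, math

def db(z):   # magnitude in decibels
    return 20 * math.log10(abs(z))

def deg(z):  # phase angle in degrees
    return math.degrees(cmath.phase(z))

w = 10.0             # test frequency, rad/sec
s = 1j * w

# The four factors: gain, integrator, first-order lead, first-order lag
factors = [10, 1 / s, s + 1, 1 / (0.1 * s + 1)]

total_db = sum(db(f) for f in factors)
total_deg = sum(deg(f) for f in factors)

G = 10 * (s + 1) / (s * (0.1 * s + 1))   # complete transfer function
assert abs(total_db - db(G)) < 1e-9      # dB contributions add exactly
assert abs(total_deg - deg(G)) < 1e-9    # phase contributions add exactly

print(round(total_db, 2), round(total_deg, 1))  # → 17.03 -50.7
```

Note that the exact values (17.03 dB, −50.7 degrees at ω = 10) differ slightly from the straight-line totals in Table 2 (20 dB, −45 degrees); the table uses the asymptotic approximations.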
Bandwidth frequency (definitions vary, but here are the most common):
    Frequency at which the magnitude is −3 dB relative to the low frequency
    asymptote magnitude.
    Frequency at which the magnitude is −3 dB relative to the maximum dB
    magnitude reached. In the case of second-order factors where ζ < 0.35 and
    the peak exceeds 3 dB, the bandwidth is often considered the range of
    frequencies corresponding to the −3 dB level before and after the peak
    magnitude.
separate in Bode plots). The disadvantage is that when plotting a Nyquist plot using
data from a Bode plot, not all data are used and the procedure is difficult to reverse
unless enough frequencies are labeled on the Nyquist plot during the plotting pro-
cedure. Since the data on a Nyquist plot do not explicitly show frequency, the
contribution of each individual factor is not nearly as clear on a Nyquist plot as it
is on a Bode plot.
For these reasons, and since Bode plots are more common, this section only
demonstrates the relationship between Nyquist and Bode plots. The majority of our
design work in the frequency domain in this text will continue to be done using
Bode plots.
Perhaps the simplest way to illustrate how a Nyquist plot relates to a Bode plot
is to begin with a Bode plot and construct the equivalent Nyquist plot. Before we
begin, let us quickly define the setup of the axes on the Nyquist plot. The basis for a
Nyquist plot has already been established in Figure 12 when discussing phasors. As
we recall, a phasor has both a magnitude and phase angle plotted on xy axes. The x
axis represents the real portion of the phasor and the y axis the imaginary portion. In
terms of our phasor plot, a magnitude of one and a phase angle of zero is the vector
from the origin, lying on the positive real axis, with a length of one.
To construct a Nyquist plot, we simply start at our lowest frequency plotted on
the Bode plot, record the magnitude and phase angle, convert the magnitude from dB
to linear, and plot the point at the tip of the vector with that magnitude and phase
angle. As we sweep through the frequencies from low to high, we continue to plot
these points until the curve is defined. The end result is our Nyquist, or polar, plot.
EXAMPLE 3.14
To step through this procedure, let us use the Bode plot used to define bandwidth as
shown in Figure 21, which plots an underdamped second-order system. To begin
with, we record some magnitude and phase angles, as given in Table 3, at a sampling
of frequencies from low to high.
Once we have the data recorded and the magnitudes converted so that they linearly
represent the output/input magnitude ratio, we can proceed to develop the Nyquist plot given
in Figure 22. The first point, plotted from data measured at a frequency of 0.1 rad/sec,
has a magnitude ratio of 1.01 and a phase angle of −2.3 degrees. This is essentially a
line of length 1 along the positive real axis, as shown on the plot. At a frequency of
0.8 rad/sec, the curve passes through a point a distance of 2.08 from the origin at
an angle of −41.6 degrees.
The remaining points are plotted in the same way. The magnitude defines the
distance from the origin and the phase angle defines the orientation. Since our phase
angles are negative (as is common), our plot progresses clockwise as we increase the
frequency. By the time we approach the higher frequencies, the magnitude is near
zero and the phase angle approaches −180 degrees. We are approaching the origin
from the left as ω → ∞.
If we understand the procedure, we should see that it is also possible to take a
Nyquist plot and generate a Bode plot, as long as we are given enough frequency
points along the curve. In general though we lose information by going from a Bode
plot to a Nyquist plot. Many computer programs, when given the original magnitude
and phase angle data, are capable of generating the equivalent Bode and Nyquist
plots.
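The point-by-point conversion in Example 3.14 is one line of complex arithmetic per sample. A Python sketch (the helper name is my own; the 1.01 magnitude ratio and −2.3 degree angle are the first-sample values quoted in the example):

```python
import cmath, math

def nyquist_point(mag_db, phase_deg):
    """Convert one Bode-plot sample (dB, degrees) to a point in the complex plane."""
    mag = 10 ** (mag_db / 20)                  # dB back to a linear ratio
    return cmath.rect(mag, math.radians(phase_deg))

# First sample of Example 3.14: magnitude ratio 1.01, phase angle -2.3 degrees
p = nyquist_point(20 * math.log10(1.01), -2.3)
print(round(p.real, 3), round(p.imag, 3))      # → 1.009 -0.041
```

As the example notes, this point is essentially a vector of length 1 lying along the positive real axis; sweeping the remaining frequencies and connecting the points traces out the full Nyquist curve.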
Table 3 Example: Magnitude and Phase Values Recorded from Bode Plot
Advantages:
    Easier on equipment than step responses
        (step responses tend to saturate actuators and components)
    More information available from the test
        (allows higher order system models to be constructed)
Disadvantages:
    More difficult experiment
        (takes more time to construct a Bode plot than a step response)
    More difficult analysis
        (the design engineer needs to understand the resulting Bode plot)
As mentioned in the previous section, the Bode plots graph the relationship
between the input and output magnitude and phase angle as a function of input fre-
quency. What follows here is a brief description of how to accomplish this in practice.
The input signal is a sinusoidal waveform of fixed amplitude whose frequency
is varied. At various frequencies the plots are captured and analyzed for amplitude
ratios and phase angles. Thus each point used to construct a Bode plot requires a
new setting in the test fixture (generally just the input frequency). It is important to
wait for all transients to decay after changing the frequency. Once the transients have
decayed the result will be a graph similar to the one given in Figure 23. This plot will
result in two data points, a magnitude value and phase angle value (at one fre-
quency), used to construct the Bode plot. This plot is typical of most physical
systems since the output lags the input (higher order in the denominator) and the
output amplitude is less than the input amplitude.
EXAMPLE 3.15
The data points required for the development of the Bode plot are found as follows
from the plot in Figure 23:
Test frequency: ω = 2π rad / 2 sec = π rad/sec (plotted on horizontal log scale)
dB magnitude: MdB = 20 log(|Y|/|X|) = 20 log(0.5/1.0) = −6.0 dB
    (peak-to-peak values may also be used for Y/X)
Phase angle: φ = −(360 degrees / 2 sec)(0.25 sec lag) = −45 degrees
These points (−6 dB on the magnitude plot and −45 degrees on the phase plot,
both at a frequency of ω = π rad/sec) would then be plotted on the Bode plot and the
frequency changed to another value. The process is repeated until enough points are
available to generate smooth curves. Remember that we plot the data on a loga-
rithmic scale, so for the most efficient use of our time we should space our frequencies
accordingly.
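The arithmetic of Example 3.15 can be collected into a small helper; a Python sketch (the function and parameter names are my own, not from the text):

```python
import math

def bode_point(in_amp, out_amp, period, time_lag):
    """One Bode-plot sample from a steady-state sine test.

    in_amp, out_amp : input/output amplitudes (peak or peak-to-peak, used consistently)
    period          : period of the test sine wave, sec
    time_lag        : time by which the output lags the input, sec
    """
    w = 2 * math.pi / period                    # test frequency, rad/sec
    mag_db = 20 * math.log10(out_amp / in_amp)  # amplitude ratio in dB
    phase_deg = -360.0 * time_lag / period      # lag maps to negative phase angle
    return w, mag_db, phase_deg

# Values read from Figure 23: |X| = 1.0, |Y| = 0.5, period 2 sec, 0.25 sec lag
w, mag_db, phase_deg = bode_point(1.0, 0.5, 2.0, 0.25)
print(round(w, 3), round(mag_db, 1), phase_deg)   # → 3.142 -6.0 -45.0
```

Each sweep frequency yields one such triple; collecting them over logarithmically spaced frequencies builds the experimental Bode plot.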
Now that we have our Bode plot completed, it can be used to develop a transfer
function representing the system that was tested. Whereas models resulting from step
response curves are limited to first- or second-order systems, the Bode plots may be
used to develop higher order models. This is a large advantage of Bode plots when
compared with step response plots when the goal is developing system models. The
following steps may be used as guidelines for determining open loop system models
from Bode plots.
Developing the transfer function from the Bode plot is as follows:
Step 1: Approximate all the asymptotes on the Bode plot using straight lines.
Of special interest are the low and high frequency asymptotes.
Step 2: If the low frequency asymptote is horizontal, it is a type 0 system (no
integrators) and the static steady-state gain is the magnitude of the low
frequency asymptote. If the low frequency asymptote has a slope of −20
dB/decade, then it is a type 1 system and there is one integrator in the system.
If we remember how each factor contributes, it is fairly easy to recog-
nize the pattern and the effect that each factor adds to the total.
Step 3: If the high frequency asymptote has a slope of −20 dB/decade, the order
of the denominator is one greater than the order of the numerator. If the
slope is −40 dB/decade, the difference between the denominator and
numerator is two orders. To estimate the order of the numerator, examine the phase
angle plot. If there is a factor in the numerator, the slope at some point
should be positive. In addition, the magnitude plot should also exhibit a positive
slope. If enough distance (in frequency) separates the factors, each order
in the numerator will have a +20 dB/decade portion showing in the magni-
tude plot and will add +90 degrees to the total phase angle. This is
seldom the case, since factors overlap and judgment calls must be made based
on experience and knowledge of the system. Drawing all the clear asymptotic
(straight line) segments usually helps to fill in the missing gaps.
Step 4: With the powers of the numerator and denominator now determined,
see if any peaks occur in the magnitude plot. If so, one factor is a second-
order system whose natural frequency and damping ratio can be approxi-
mated. The magnitude of the peak, relative to the lower frequency asymptote
preceding it, determines the damping ratio as
    Mp (dB) = 20 log [ 1 / (2ζ√(1 − ζ²)) ]
Mp is the distance in decibels that the peak value is above the horizontal
asymptote preceding the peak. As the damping ratio goes to zero, the peak
magnitude goes to infinity and the system becomes unstable. To calculate the
damping ratio, the peak magnitude is used in the same way that the percent
overshoot was used for a step response in the time domain. The graph
showing Mp (in dB) versus the damping ratio is given in Figure 24. The
peak occurs close to the damped natural frequency, ωd = ωn√(1 − ζ²),
which is shifted from the natural frequency by the damping ratio. The natural frequency
can easily be found independent of the damping ratio by extending the low
and high frequency asymptotes of the second-order system in question. The
intersection of the two asymptotes occurs at the natural frequency of the
system.
Step 5: Fill in the remaining first-order factors by locating each break in the
asymptotes. Each break corresponds to the time constant for that factor.
Look for shifts of 20 dB/decade to determine where the first-order factors
are in your system. If the asymptote changes from −20 to −40 dB/decade
(without a peak), then a first-order break frequency is likely located at
the intersection of the two asymptotes defining the shift in slopes.
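Assuming the Mp relation in Step 4, the damping ratio can be recovered from a measured peak height numerically. A Python sketch (function names are mine; bisection works because the peak height decreases monotonically with ζ on (0, 0.707)):

```python
import math

def peak_db(zeta):
    """Resonant peak height above the low-frequency asymptote, in dB (0 < zeta < 0.707)."""
    return 20 * math.log10(1.0 / (2 * zeta * math.sqrt(1 - zeta**2)))

def zeta_from_peak(mp_db):
    """Invert peak_db by bisection; peak_db is decreasing in zeta on (0, 0.707)."""
    lo, hi = 1e-6, 0.707
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if peak_db(mid) > mp_db:
            lo = mid          # peak too high -> need more damping
        else:
            hi = mid
    return 0.5 * (lo + hi)

mp = peak_db(0.1)                                 # zeta = 0.1 gives about a 14 dB peak
print(round(mp, 2), round(zeta_from_peak(mp), 3))  # → 14.02 0.1
```

In practice Mp is read graphically from the plot (or from Figure 24), so two or three digits of ζ is all the accuracy the method supports.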
amplitudes remaining within the expected operating range. In some components with
large amounts of deadband, like proportional directional control valves, the test
should take place in the active region of the valve. The data are relatively mean-
ingless when performed in the deadband of this type of component. For some
components the frequency response curves change little throughout the full operat-
ing region, while others may change significantly (linear versus nonlinear systems). A
general rule of thumb is to center the input signal offset in the middle of the com-
ponent's active range and vary the amplitude 25% of the active range. Chapter 12
discusses electrohydraulic valves and deadband in much more detail.
    X(s) = (sI − A)⁻¹ B U(s)

This is the output of our states, so substitute into the output equation for Y:

    Y(s) = C(sI − A)⁻¹ B U(s) + D U(s)    or    Y(s) = [C(sI − A)⁻¹ B + D] U(s)

The transfer function is simply the output divided by the input,

    Y(s)/U(s) = C(sI − A)⁻¹ B + D

so

    G(s) = C(sI − A)⁻¹ B + D
It is relatively straightforward to get a transfer function from our state space
matrices, the most difficult part being matrix inversion. For small systems it is
possible to invert the matrix by hand, where the inverse of a matrix is given by

    M⁻¹ = Adj(M) / |M|,    |M| = determinant of M
More information on matrices is given in Appendix A.
EXAMPLE 3.16
To illustrate this procedure with an example, let us use our mass-spring-damper state
space model already developed and convert it to a transfer function. Recalling the
mass-spring-damper system and the state matrices already developed earlier:
    [ ẋ1 ]   [   0      1   ] [ x1 ]   [  0  ]                         [ x1 ]
    [ ẋ2 ] = [ −k/m   −b/m ] [ x2 ] + [ 1/m ] u    and    y = [ 1  0 ] [ x2 ] + [0]u
Then:

    G(s) = C(sI − A)⁻¹ B + D

    G(s) = [ 1  0 ] ( [ s  0 ] − [   0      1   ] )⁻¹ [  0  ] + 0
                      [ 0  s ]   [ −k/m   −b/m ]      [ 1/m ]

    G(s) = [ 1  0 ] [  s        −1    ]⁻¹ [  0  ]
                    [ k/m    s + b/m ]    [ 1/m ]
To invert the matrix, we take the adjoint matrix divided by the determinant:

    [  s        −1    ]⁻¹
    [ k/m    s + b/m ]    =  (1 / (s² + (b/m)s + k/m)) · [ s + b/m    1 ]
                                                         [  −k/m      s ]
Simplifying:

    G(s) = [ 1  0 ] (1 / (s² + (b/m)s + k/m)) [ s + b/m    1 ] [  0  ]
                                              [  −k/m      s ] [ 1/m ]

    G(s) = (1 / (s² + (b/m)s + k/m)) [ s + b/m    1 ] [  0  ]
                                                      [ 1/m ]
And finally, we get the transfer function G(s):

    G(s) = (1/m) / (s² + (b/m)s + k/m) = 1 / (ms² + bs + k)
It should not be a surprise to find that this is exactly what was developed earlier as
the transfer function (from the differential equations) for the mass-spring-damper
system. The formal approach is seldom needed in practice; as long as you understand
the process and what it represents, many computer programs are available to
handle these chores for you. Many calculators produced now will also perform these
tasks.
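The result of Example 3.16 can be checked numerically by evaluating C(sI − A)⁻¹B + D at a sample frequency and comparing it with 1/(ms² + bs + k). A Python sketch (the m, b, k values are illustrative, not from the text):

```python
m, b, k = 2.0, 3.0, 5.0                    # illustrative mass, damping, and stiffness

def ss_response(s):
    """Evaluate G(s) = C(sI - A)^-1 B + D for the mass-spring-damper model."""
    # sI - A = [[s, -1], [k/m, s + b/m]] for A = [[0, 1], [-k/m, -b/m]]
    det = s * (s + b / m) + k / m          # determinant = s^2 + (b/m)s + k/m
    # C = [1 0] picks out row 1 of the adjoint, [s + b/m, 1]; B = [0, 1/m]^T
    return ((s + b / m) * 0 + 1 * (1 / m)) / det

def tf_response(s):
    """The transfer function derived above: 1/(ms^2 + bs + k)."""
    return 1.0 / (m * s**2 + b * s + k)

s = 2j                                     # sample point on the imaginary axis
assert abs(ss_response(s) - tf_response(s)) < 1e-12
print("state space and transfer function responses match")
```

Changing m, b, and k leaves the agreement intact, which is a useful sanity check whenever a state space model and a transfer function are supposed to describe the same system.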
3.6.2 Eigenvalues
One important result from the state space to transfer function conversion is realizing
that the characteristic equation remains the same and recognizing where it came
from during the transformation. Remember that when we took the determinant of
the (sI − A) matrix, it resulted in a polynomial in the s-domain. Looking at it closer,
we see that we actually developed the characteristic equation for the mass-spring-
damper system. This determinant is sometimes called the characteristic polynomial,
whose roots are defined as the eigenvalues of the system. Thus, eigenvalues are
identical to the poles in the s-plane arising from the roots of the characteristic equation.
We can treat the eigenvalues the same as our system poles and plot them in the
s-plane, examine the response characteristics (time constant, natural frequency, and
damping ratio), and predict system behavior.
Eigenvectors are sometimes calculated by substituting the eigenvalues, λ, back
into the matrix equation (λI − A)x = 0 and solving for the relationships between the
states that satisfy the matrix equation. This is more common in the field of vibrations,
where we discuss modes of vibration corresponding to eigenvectors.
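To see the determinant-to-eigenvalue connection concretely, the sketch below (Python) evaluates det(sI − A) for a companion-form A matrix whose characteristic polynomial is s³ + 6s² + 11s + 6 (the same system used later in Example 3.17) and confirms that the polynomial's roots −1, −2, −3 are the eigenvalues:

```python
A = [[0, 1, 0],
     [0, 0, 1],
     [-6, -11, -6]]   # companion-form A with characteristic polynomial s^3+6s^2+11s+6

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def char_poly(s):
    """det(sI - A), evaluated at the point s."""
    sI_A = [[s - A[r][c] if r == c else -A[r][c] for c in range(3)] for r in range(3)]
    return det3(sI_A)

# det(sI - A) equals s^3 + 6s^2 + 11s + 6 ...
assert char_poly(2) == 2**3 + 6 * 2**2 + 11 * 2 + 6
# ... so the eigenvalues are its roots: -1, -2, -3
assert all(char_poly(s) == 0 for s in (-1, -2, -3))
```

Since the eigenvalues are the poles, plotting them in the s-plane and reading off time constants, natural frequency, and damping ratio works exactly as it does for transfer function poles.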
    Slope 1: s1 = dx/dt evaluated at (xk, tk)
    Slope 2: s2 = dx/dt evaluated at (xk + s1·h/2, tk + h/2)
    Slope 3: s3 = dx/dt evaluated at (xk + s2·h/2, tk + h/2)
    Slope 4: s4 = dx/dt evaluated at (xk + s3·h, tk + h)
Named after German mathematicians C. Runge (1856–1927) and W. Kutta (1867–1944).
senting the output at fixed time steps even though the step size changed during the
numerical integration.
In conclusion, it is quite simple to compute a time solution to equations in state
space format even though they may be nonlinear and with multiple inputs and out-
puts. There are books containing the numerical recipes (code segments) for many
different numerical integration algorithms (in many different programming lan-
guages) if the desire is to do the programming for incorporation into a custom
program.
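The four slopes listed above define the classic fourth-order Runge-Kutta step. A minimal Python sketch for a scalar state (the test equation dx/dt = −x and the step size are my own choices; its exact solution is e^(−t)):

```python
import math

def rk4_step(f, x, t, h):
    """One fourth-order Runge-Kutta step for dx/dt = f(x, t)."""
    s1 = f(x, t)                          # slope at the start of the step
    s2 = f(x + s1 * h / 2, t + h / 2)     # slope at the midpoint, using s1
    s3 = f(x + s2 * h / 2, t + h / 2)     # slope at the midpoint, using s2
    s4 = f(x + s3 * h, t + h)             # slope at the end of the step
    return x + (h / 6) * (s1 + 2 * s2 + 2 * s3 + s4)

# Integrate dx/dt = -x from x(0) = 1 to t = 1 and compare with e^-1
f = lambda x, t: -x
x, t, h = 1.0, 0.0, 0.01
while t < 1.0 - 1e-12:
    x = rk4_step(f, x, t, h)
    t += h
print(abs(x - math.exp(-1.0)))            # a very small error (fourth-order accuracy)
```

The same step function extends to vector states (lists or arrays of state variables), which is how the state space equations of this chapter are integrated in practice.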
sents a single input single output system. This simplifies the process, especially when
the numerator of the transfer function is constant (no s terms). The following exam-
ple illustrates the ease of constructing the state space matrices from a transfer func-
tion. Remember that converting a state space system to a transfer function results in
a unique transfer function, but the process in reverse may produce many correct but
different representations in state space. That is, equivalent but different state space
matrices, when converted to transfer functions, also result in the same transfer
function. The opposite is not true. Different methods of converting a transfer func-
tion into state space matrices may result in different matrices. One thing does remain
true, however. If we calculate the eigenvalues for any of the A matrices, they will be
identical. In the example that follows, we first develop the matrices manually and
then use Matlab. Even though the resulting matrices differ, we will verify that they
indeed contain the same information.
EXAMPLE 3.17
Convert the following transfer function to state space representation:
    C(s)/R(s) = 24 / (s³ + 6s² + 11s + 6)
The process is to first cross multiply:

    C(s)(s³ + 6s² + 11s + 6) = 24 R(s)
    s³C(s) + 6s²C(s) + 11sC(s) + 6C(s) = 24 R(s)

Take the inverse Laplace to get the original differential equation (minus the initial
conditions):

    c''' + 6c'' + 11c' + 6c = 24r
Choose the state variables (chosen here as successive derivatives; third order, three
states):

    x1 = c
    x2 = c'
    x3 = c''
Then the state equations are:

    dx1/dt = x2
    dx2/dt = x3
    dx3/dt = −6x1 − 11x2 − 6x3 + 24r
Finally, writing them in matrix form:

    [ ẋ1 ]   [  0     1     0 ] [ x1 ]   [  0 ]                            [ x1 ]
    [ ẋ2 ] = [  0     0     1 ] [ x2 ] + [  0 ] r      y = [ 1   0   0 ]   [ x2 ]  + [0]u
    [ ẋ3 ]   [ −6   −11    −6 ] [ x3 ]   [ 24 ]                            [ x3 ]
To conclude this example, let us now work the same problem using Matlab and
compare the results, learning some Matlab commands as we progress. We first define
the numerator and denominator (num and den in the following program), where the
vectors contain the coefficients of the polynomials in decreasing powers of s. Thus to
define the polynomial s³ + 6s² + 11s + 6 in Matlab we define a vector as [1 6 11 6],
which are the coefficients of [s³ s² s¹ s⁰]:
Matlab commands:

    num = 24;                     % Define numerator of C(s)/R(s) = G(s).
    den = [1 6 11 6];             % Define denominator of G(s).
    sys1 = tf(num,den)            % Make and display LTI TF.
    [A,B,C,D] = tf2ss(num,den)    % Convert TF to SS using num, den.
    lti_ss = ss(sys1)             % Convert LTI to state space.
    roots(den)                    % Check roots of characteristic equation.
    eig(A)                        % Check eigenvalues of A.
    eig(lti_ss)                   % Check eigenvalues of lti_ss variable.
After executing the commands, we find that the resulting state space system
matrices are slightly different. Checking the roots of the denominator (the characteristic
equation of the original transfer function) and the eigenvalues of the two resulting A
matrices gives the results summarized (as they would appear on the screen) in Table 4.
Even with different matrices the eigenvalues are the same and equal to the original
poles of the system. Matlab uses the LTI notation for commands used with LTI
systems. The transfer function command, tf, is used to convert the numerator and
denominator into an LTI variable. For very large systems and systems with zeros in
Table 4  Comparing the State Space Representations

Original transfer function: C(s)/R(s) = 24 / (s³ + 6s² + 11s + 6),
Poles (roots of CE) = −3, −2, −1

Matrices from using tf2ss command:        Matrices from using ss command:

    A = [ −6  −11   −6 ]    B = [ 1 ]         A = [ −6  −2.75  −0.375 ]    B = [ 1 ]
        [  1    0    0 ]        [ 0 ]             [  4    0      0    ]        [ 0 ]
        [  0    1    0 ]        [ 0 ]             [  0    4      0    ]        [ 0 ]

    C = [ 0   0   24 ]      D = [ 0 ]         C = [ 0   0   1.5 ]          D = [ 0 ]

    Eigenvalues = −3, −2, −1                  Eigenvalues = −3, −2, −1
the numerator of the transfer function, using tools like Matlab can save the designer
much time.
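As a numerical check of the manually derived matrices above, we can solve (sI − A)x = B at a sample value of s and form C·x, which should equal 24/(s³ + 6s² + 11s + 6). A Python sketch with a small hand-rolled Gaussian elimination (standing in for the Matlab tools used in the example):

```python
def solve3(M, rhs):
    """Solve a 3x3 (possibly complex) linear system by Gauss-Jordan elimination."""
    M = [row[:] + [r] for row, r in zip(M, rhs)]        # augmented matrix
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))  # partial pivoting
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[r][3] / M[r][r] for r in range(3)]

# Hand-derived matrices from Example 3.17
A = [[0, 1, 0], [0, 0, 1], [-6, -11, -6]]
B = [0, 0, 24]
C = [1, 0, 0]

s = 2j                                   # sample point
sI_A = [[s - A[r][c] if r == c else -A[r][c] for c in range(3)] for r in range(3)]
x = solve3(sI_A, B)                      # x = (sI - A)^-1 B
G_ss = sum(ci * xi for ci, xi in zip(C, x))
G_tf = 24 / (s**3 + 6 * s**2 + 11 * s + 6)
assert abs(G_ss - G_tf) < 1e-9
print("state space matrices reproduce the transfer function")
```

Repeating the check with the second set of matrices in Table 4 gives the same agreement, which is exactly the point of the example: different state space realizations, one transfer function.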
3.1 PROBLEMS
3.1 Given the following differential equation, which represents the model of a
physical system, determine the time constant of the system, the equation for the
time response of the system when subjected to a unit step input, and the correspond-
ing plot of the system response resulting from the unit step input.
    80 (dx/dt) + 4x = f(t)
3.2 Given the second-order time response to a step input in Figure 27, measure and
calculate the percent overshoot, settling time, and rise time.
3.3 Using the differential equation given, determine the transfer function where
Gs Ys=Us.
    d³y/dt³ + 3 d²y/dt² + 4 dy/dt + 9y = 10u
3.4 Given the following differential equation, find the transfer function where y is
the output and u is the input:
    ÿ + 5ẏ + 32y = 5u̇ + u
3.5 Using the differential equation of motion given,
3.13 Given Figure 30, the system response to a unit step input, approximate the
transfer function based on a second-order system model.
3.14 Given the following closed loop transfer function, plot the pole locations in the
s-plane and briefly describe the type of response (dynamic characteristics, final
steady-state value) when the input is a unit step.
    G(s) = 18 / (s² + 4s + 36)
3.15 For the block diagram shown in Figure 31, determine the following:
a. The closed loop transfer function
b. The characteristic equation
c. The location of the roots in the s-plane
d. The time responses of the system to unit step and unit ramp inputs
3.16 Given the physical system model in Figure 32, develop the appropriate differ-
ential equation describing the motion (see Problem 2.18). Develop the transfer func-
tion for the system where xo is the output and xi is the input.
3.17 Given the physical system model in Figure 33, develop the appropriate differen-
tial equation describing the motion (see Problem 2.19). Develop the transfer function
for the system where xo is the output and xi is the input.
3.18 Given the physical system model in Figure 34, develop the appropriate differen-
tial equation describing the motion (see Problem 2.20). Develop the transfer function
for the system where r is the input and y is the output.
3.19 Given the physical system model in Figure 35, develop the appropriate differ-
ential equation describing the motion (see Problem 2.21). Develop the transfer func-
tion for the system where F is the input and y is the output.
3.20 Given the physical system model in Figure 36, develop the appropriate differ-
ential equation describing the motion (see Problem 2.26). Develop the transfer func-
tion for the system where Vi is the input and Vc, the voltage across the capacitor, is
the output.
3.21 Using the physical system model in Figure 37, develop the differential equa-
tions describing the motion of the mass, y(t), as a function of the input, r(t). PL is the
load pressure; a and b are linkage segment lengths (see Problem 2.27). Develop the
transfer function for the system where r is the input and y is the output.
3.22 Determine the differential equations describing the system in Figure 38 (see
Problem 2.29). Formulate as time derivatives of h1 and h2 . Develop the transfer
function for the system where qi is the input and h2 is the output.
3.23 Determine the differential equations describing the system given in Figure 39
(see Problem 2.30). Formulate as time derivatives of h1 and h2 . Develop the transfer
function for the system where qi is the input and h2 is the output.
3.24 For the transfer function given, develop the Bode plot for both magnitude (dB)
and phase as a function of frequency.
    GH(s) = 10(s + 1) / (s(0.1s + 1))
3.25 For the transfer function given, develop the Bode plot for both magnitude (dB)
and phase as a function of frequency.
    Y(s)/U(s) = 2 / (s² + 3s + 2)
3.26 For the Bode plot shown in Figure 40, estimate the following:
a. What is the approximate order of the system?
b. Damping ratio: underdamped or overdamped?
c. Natural frequency (units)?
d. Approximate bandwidth (units)?
3.27 From the transfer function, sketch the basic Bode plot and measure the follow-
ing parameters:
a. Gain margin
b. Phase margin
c. Bandwidth using the −3 dB criterion
d. Steady-state gain on the system
    G(s) = (5s + 4) / (s(s + 1)(s + 2)(5s + 1))
3.28 Develop a Nyquist plot from the Bode plot given in Figure 40.
3.29 Given the following state space matrices, determine the equivalent transfer
function. Is the system stable? Show why or why not.
    [ ẋ1 ]   [ 2    5 ] [ x1 ]   [ 1 ]                       [ x1 ]
    [ ẋ2 ] = [ 3   11 ] [ x2 ] + [ 0 ] u    and    y = [ 1  0 ] [ x2 ]
3.30 Given the following state space system matrix, find the eigenvalues and
describe the system response:
    A = [  0    1 ]
        [ −1   −1 ]
3.31 Given the following transfer function, write the equivalent system in state space
representation.
    C(s)/R(s) = (2s² + 8s + 6) / (s³ + 8s² + 16s + 6)
4
Analog Control System Performance
4.1 OBJECTIVES
Define feedback system performance characteristics.
Develop steady-state and transient analysis tools.
Define feedback system stability.
Develop tools in the s, time, and frequency domains for determining system
stability.
4.2 INTRODUCTION
Although control engineering is a relatively new field, available controller strategies
have grown to the point where it is hard to define the basic configurations. The advent
of the low cost microcontroller has revolutionized what is possible in control algorithms. This
section defines some basic properties relevant to all control systems and serves as a
backdrop for measuring and predicting performance in later sections. Control sys-
tems are generally evaluated with respect to three basic areas: disturbance rejection,
steady-state errors, and transient response. Open loop and closed loop systems are
both subjected to the same basic criteria. As the complexity increases, additional
characteristics become important. For example, in advanced control algorithms
using a plant model for command feedforward, the sensitivity of the controller to
modeling errors and plant changes is critical. In this case it is appropriate to evaluate
different algorithms based on parameter sensitivity, in addition to the basic ideas
presented in this chapter.
A second major concern in controller design is stability. Three basic methods,
Routh-Hurwitz criterion, root locus plots, and frequency response plots, are devel-
oped in this chapter as tools for evaluating the stability of different controllers.
Stability is also closely related to transient response performance as the examples
and techniques illustrate.
Open loop control                              Closed loop control
Cheap (i.e., timers vs. transducers, etc.)     Requires additional components ($$)
Unable to respond to external inputs           Reduces effects of disturbances
Generally stable in all conditions             Can go unstable under certain conditions
No control over steady-state errors            Can eliminate steady-state errors
Although in principle open loop controllers are cheaper to design and build
than closed loop controllers, this is not always the case. As microcontrollers, trans-
ducers, and amplifiers become more economical, often a break-even point exists
beyond which the open loop controller is no longer the cheaper alternative. For
example, some things can now be done electronically, thereby removing the most
expensive hardware in the system and simulating it in software.
    R(s) = Command         Gc(s) = Controller
    C(s) = Output          G1(s) = Amplifier
    D(s) = Disturbance     G2(s) = Physical System
    H(s) = Transducer
    C(s)/R(s) = Gc G1 G2 / (1 + Gc G1 G2 H)
Since this transfer function represents the output over the input, the goal is to have
C/R equal 1. If we could make this always the case, then the error would
always be zero; that is, C(s) would always equal R(s). If it cannot be made to
always be 1, then how can we optimize it? By making the gain (product) Gc G1 G2
as large as possible, we make the overall value get closer to 1. As this gain
approaches infinity, the ratio of C=R approaches 1, or perfect tracking. Although
this looks good when approached from the point of view of reducing steady-state
errors, we will see that adding criteria to our stability and transient effects will limit
the possible gain in our system. This is one of the fundamental aspects of most design
work, the balancing of several variables to optimize the overall design.
Now let us follow the same procedure but set the command to zero to find the
effects of disturbance inputs on our system. Setting R(s) = 0 results in the block
diagram shown in Figure 4. This results in the following closed loop transfer func-
tion:

C(s)/D(s) = G2 / (1 + Gc G1 G2 H)
For this transfer function we would like C/D to equal zero, in which case the
disturbance input would have no effect on the output. Obviously, this will not be the
case unless the system transfer function equals zero, which cannot happen if we want
to control the system. If G2 equals zero, the controller will also have no relationship
to the output. If we want to minimize the effects of disturbances, then we can make
G2 as small as possible relative to Gc and G1. Increasing the gains Gc and G1 while
leaving G2 unchanged does make the overall gain tend toward zero, as desired. Increasing H also
helps here but hurts with respect to command following performance.
To optimize both, we should try to make Gc and G1 as large as possible.
Although this sounds easy to do even with a simple proportional controller, K,
for Gc , we will see that a trade-off exists. As the gain of Gc is increased, the errors
decrease but the stability is eroded. Hence, good controller design is a trade-off
between errors and stability. Over the years many alternative controllers have
been developed to optimize this relationship between steady-state and dynamic per-
formance. The discussion here assumes primarily a proportional type controller.
This section thus far has presented general techniques for reducing errors with-
out being specific to steady-state errors. If we actually achieved C/R equal to 1, then
in theory (with feasible inputs) the output would always exactly equal the input and
steady-state errors would be nonexistent. If only this could always be the case. Real-
world components are never completely linear with unlimited output and bandwidth
and thus ensure that all decent controls engineers remain in demand.
Now let us turn our attention specifically to steady-state errors. The previous
discussion is a natural lead-in to discussing steady-state errors since the beginning
procedure is the same. Once the overall system transfer function is found, the steady-
state error can be determined. Remember though, it is possible to have zero steady-
state error and still have large transient errors. From the block diagram we can
determine our overall system transfer function and then apply the final value theo-
rem (FVT) to solve for the steady-state error. The only wrinkle occurs when an input
different from a unit impulse, step, or ramp is used. If a unit step is used on a type 0
system, the steady value is simply the FVT result from the system transfer function.
The steady-state error is then 1 - c_ss. This can best be illustrated in the following
example.
EXAMPLE 4.1
Using the block diagram in Figure 5, determine the steady-state error due to a
unit step command input, given the closed loop transfer function

C(s)/R(s) = 25(s + 1) / (s^2 + 6s + 30)
With R(s) = 1/s, solve for C(s) and apply the FVT:

c_steady-state = lim(t→∞) c(t) = lim(s→0) s C(s) = lim(s→0) s · [25(s + 1)/(s^2 + 6s + 30)] · (1/s) = 25/30

The steady-state error is the final input value minus the output value:

Steady-state error = e_ss = r_ss - c_ss = 1 - 25/30 = 1/6
So even after all the transients decay, the final output of the system never reaches the
value of the command.
Steady-State Disturbance Error
To solve for the error when there is a disturbance acting on the system, we set R(s) = 0
and solve for C(s)/D(s). This results in the following transfer function:

C(s)/D(s) = 5(s + 1) / (s^2 + 6s + 30)

With D(s) = 4/s, solve for C(s) and apply the FVT:

c_steady-state = lim(t→∞) c(t) = lim(s→0) s C(s) = lim(s→0) s · [5(s + 1)/(s^2 + 6s + 30)] · (4/s) = 20/30

In this case, the steady-state error is simply the final output value since the input
value (desired output) is set to zero:

Steady-state error = e_ss = r_ss - c_ss = 0 - 20/30 = -2/3
After all the transients decay from the step disturbance input, the final output of the
system reaches 0.667, even though the command never changed. Ideally, c_ss in this
case would be zero.
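Both FVT results above reduce to evaluating the relevant transfer function at s = 0 and scaling by the step magnitude. A minimal Python sketch (the transfer functions come from Example 4.1; the helper names are our own) verifies both numbers:

```python
# Verify the steady-state values of Example 4.1 by evaluating each
# closed loop transfer function at s = 0 (the FVT result for a step).

def polyval(coeffs, s):
    """Evaluate a polynomial (descending coefficients) via Horner's rule."""
    result = 0.0
    for c in coeffs:
        result = result * s + c
    return result

def step_steady_state(num, den, magnitude=1.0):
    """lim s->0 of s * G(s) * (magnitude/s) = magnitude * G(0)."""
    return magnitude * polyval(num, 0.0) / polyval(den, 0.0)

# Command response: C/R = 25(s + 1)/(s^2 + 6s + 30), unit step input
c_cmd = step_steady_state([25, 25], [1, 6, 30])
e_cmd = 1.0 - c_cmd            # 1 - 25/30 = 1/6

# Disturbance response: C/D = 5(s + 1)/(s^2 + 6s + 30), step of size 4
c_dist = step_steady_state([5, 5], [1, 6, 30], magnitude=4.0)
e_dist = 0.0 - c_dist          # 0 - 20/30 = -2/3
```

Running this gives e_cmd ≈ 0.167 and e_dist ≈ -0.667, matching the hand calculations.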
Several interesting points can be made from this example. First, when we
closed the loop relative to Rs and then to Ds, we found that the denominator
of the transfer function remained the same for both cases. Remembering that the
information regarding the stability and dynamic characteristics of the system is
contained in the characteristic equation, this is exactly what we would expect to find.
We still have the same physical system; we are just inserting signals into two different
points. If we had a second-order underdamped response from one input, we would
expect the same from the other. That is to say, we have not modified the physics of
our system by closing the loop at two different points. What caused the difference in
responses was the change in the numerator, which as we saw earlier when developing
transfer functions is precisely where the coefficients arising from taking the Laplace
transform of the input function appear in the transfer function.
The second interesting point found in this example is that the error never goes
all the way to zero, even as time goes to infinity. In fact, as we will see for controllers
with proportional gain only, this will almost always be the case. It can be explained
as follows: In order for the physical system in the example to have a non-zero output
(as requested by the command), it needs a non-zero input. If the input to the physical
system is zero, the output will be zero as well. Since the output of the controller in this
example provides the input to the physical system, it must be non-zero also. With
a simple proportional gain for our controller, we can never have a non-zero output
with a zero input; thus, there must always be some error remaining to maintain a
signal into the physical system. As the proportional gain is increased, the required
input (error) for the same output sent to the physical system is reduced. So as we
found, increasing the proportional gain decreases the steady-state error because it
can provide more output with a smaller error input. As the next section shows, we
can add integrators to our system to eliminate the steady-state error since an inte-
grator can have a non-zero output even when the input is zero.
It is often easy to determine what the steady-state errors will be by classifying
the system with respect to the number of integrators it contains. Remember that an
integrator block is 1/s, and thus the number of 1/s terms we can factor out of the
transfer function is the number of integrators the system has. We saw that the
transfer function for the hydraulic cylinder had one integrator and thus was classi-
fied as a type 1 system. A type 0 system has no pure integrators, a type 2 system
has two integrators (1/s^2 factors out), and so forth.
EXAMPLE 4.2
To illustrate how to determine the system type number, let us work an example using
the hydraulic servo system that we modeled in Chapter 2. The differential equation
governing the system motion is
m (d^2y/dt^2) + (b + A^2/K_P) (dy/dt) = (A K_x / K_P) x
First, take the Laplace transform
(m s^2 + K_1 s) Y(s) = K_2 X(s);   K_1 = b + A^2/K_P,   K_2 = A K_x / K_P
Write the transfer function
Y(s)/X(s) = K_2 / [s(ms + K_1)] = (1/s) · [K_2 / (ms + K_1)]
Since a 1/s term can be factored out of the transfer function, it is classified as a
type 1 system. In this example the integrator is included as part of the physical
system model. Integrators may also be added electronically (or mechanically) as part
of the control system. The I term in a common proportional, integral, derivative
(PID) controller represents the integrator that is added.
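Because the type number is just the count of 1/s factors, it can be read directly off the denominator polynomial: written in descending powers of s, each trailing zero coefficient is one factor of s. A small sketch (the helper name and numeric values are ours, chosen only for illustration):

```python
def system_type(den_descending):
    """Count pure integrators: trailing zero coefficients of the
    open loop denominator written in descending powers of s."""
    count = 0
    for c in reversed(den_descending):
        if c == 0:
            count += 1
        else:
            break
    return count

# Hydraulic servo of Example 4.2: denominator s(ms + K1) = m s^2 + K1 s + 0
m, K1 = 2.0, 3.0                       # placeholder numeric values
servo_type = system_type([m, K1, 0])   # one trailing zero -> type 1
```

The same call classifies any transfer function: a denominator like s^2 + 6s + 30 returns 0, and one ending in two zero coefficients returns 2.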
To illustrate the general case, close the loop on the type 1 system shown in
Figure 6. Then closing the loop produces
C(s) = { [G(s)/s] / [1 + G(s)/s] } R(s) = { G(s) / [s + G(s)] } R(s)
Applying the FVT to the system for a unit step input means that the input (1/s)
cancels with the s from the FVT. Therefore, if we let s go to zero in the above
transfer function, we end up with G(s)/G(s) = 1. No matter what form the system
model G(s) takes, the output is always one. Since the command is also 1, the error is
zero. As we will see with PID controllers, the integral term in the controller forces the
steady-state error to be zero for step input functions.
If we reference the block diagram in Figure 7, we can generalize the steady-
state error results for different inputs using Table 3. This makes the process of
evaluating the steady-state performance of the system quite easy. Knowing the sys-
tem type number, overall system gain, and the type of input allows us to immediately
calculate the amount of steady-state error in the system.
To demonstrate how the table is developed, let us look at two examples and
check the values given for the type 0 and type 1 systems.
EXAMPLE 4.3
To begin, we will use the type 0 system shown in Figure 8 and calculate the steady
state errors for the system. We will first apply the information given in Table 3 and
then verify it by closing the loop and calculating the steady-state error. To apply the
information in the table, we need to find the steady-state gain K for the system. That
is, if we put an input value of 1 into the first block and let all of the transients (s terms)
decay, what would the output be? For this system we have three blocks, and the
overall gain is given by multiplying the three:
K = (8)(3)(2/4) = 12

From the table, and for a unit step input, the error is equal to 1/(1 + K), or 1/13.
And for a unit ramp and acceleration input, the error is equal to infinity. If we wish
to verify the table, simply close the loop and apply the FVT. Closing the loop results
in the following transfer function:
C(s)/R(s) = 48 / (s^2 + 5s + 52)

c_steady-state = lim(t→∞) c(t) = lim(s→0) s C(s) = lim(s→0) s · [48/(s^2 + 5s + 52)] · (1/s) = 48/52

Steady-state error = e_ss = r_ss - c_ss = 1 - 48/52 = 4/52 = 1/13

For a unit ramp input, R(s) = 1/s^2:

c_steady-state = lim(t→∞) c(t) = lim(s→0) s C(s) = lim(s→0) s · [48/(s^2 + 5s + 52)] · (1/s^2) → ∞

so the error for ramp (and acceleration) inputs grows without bound.
So we see that the application of the table results in the correct error and can also be
verified by closing the loop and applying the FVT. In the case where we do not have
a system type number (or the table), it is always possible, and still quite fast, to just
close the loop and apply the FVT as demonstrated here.
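The table entry e_step = 1/(1 + K) for a type 0 system can also be checked from the open loop DC gain. A sketch, assuming the three blocks of Figure 8 combine into G_OL(s) = 48/(s^2 + 5s + 4); this open loop form is our assumption, chosen to be consistent with the closed loop result above (since 52 = 4 + 48):

```python
def polyval(coeffs, s):
    """Horner evaluation of a polynomial, descending coefficients."""
    r = 0.0
    for c in coeffs:
        r = r * s + c
    return r

# Assumed open loop: G_OL(s) = 48/(s^2 + 5s + 4)
num, den = [48], [1, 5, 4]
Kp = polyval(num, 0.0) / polyval(den, 0.0)   # position error constant = 12
e_step = 1.0 / (1.0 + Kp)                    # 1/13, matching Table 3
```

For ramp inputs the same open loop has no integrator to cancel the extra 1/s, which is why the table lists an infinite error for a type 0 system.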
EXAMPLE 4.4
To finish our discussion on steady-state errors, let us look at an example type 1
system and calculate the errors resulting from various inputs by using both the
system type number and corresponding table followed by closing the loop and apply-
ing the FVT. The block diagram representing our system is given in Figure 9. To
solve for the steady-state errors using Table 3, we first must calculate the steady-state
gain in the system. Looking at each block, the overall gain is calculated as
K = (2)(3)(2/4) = 3

Since this is a type 1 system, with one integrator factored out of the third block, we
would expect the following steady-state errors from the different inputs:
for a unit step input, the error is equal to 0; for a unit ramp input, the error is equal
to 1/K, or 0.333; and for a unit acceleration input, the error is equal to infinity. To
verify these errors, let's close the loop and apply the FVT for the different inputs.
The closed loop transfer function becomes
C(s)/R(s) = 12 / (s^3 + 5s^2 + 4s + 12)

For a unit step input, R(s) = 1/s:

c_steady-state = lim(t→∞) c(t) = lim(s→0) s C(s) = lim(s→0) s · [12/(s^3 + 5s^2 + 4s + 12)] · (1/s) = 12/12 = 1

Steady-state error = e_ss = r_ss - c_ss = 1 - 12/12 = 0
For a unit ramp input, R(s) = 1/s^2, we take a slightly different approach since the
steady-state output of C(s), using the FVT, will go to infinity:

c_steady-state = lim(t→∞) c(t) = lim(s→0) s C(s) = lim(s→0) s · [12/(s^3 + 5s^2 + 4s + 12)] · (1/s^2) → ∞
This is not a surprise since the ramp input also goes to infinity. What we are inter-
ested in is the steady-state difference between the input and output as they both go to
infinity. The easiest way to handle this is to actually write the transfer function for
the error in the system and then apply the FVT. The error can be expressed as the
output of the summing junction where
E(s) = R(s) - C(s), or C(s) = R(s) - E(s)
Then if our open loop forward path transfer function is defined as
C(s) = G_OL(s) E(s), we can solve for E(s)/R(s):

G_OL(s) E(s) = R(s) - E(s)

or

E(s)/R(s) = 1 / [1 + G_OL(s)] = s(s + 1)(s + 4) / [s(s + 1)(s + 4) + 12]
To find the error as a function of our input, we apply the FVT to
E(s) = R(s)/[1 + G_OL(s)], except that as compared to before, an extra s term is in
the numerator:

e_steady-state = lim(t→∞) e(t) = lim(s→0) s E(s) = lim(s→0) s · {s(s + 1)(s + 4)/[s(s + 1)(s + 4) + 12]} · (1/s^2) = (1)(4)/12 = 1/3
The error of 0.333 is the same as that calculated earlier using the table. If
we used the same application of the FVT for the acceleration input, the input func-
tion would add one more s term in the denominator (1/s^3), and as s approaches zero
in the limit, the error approaches infinity.
So we see that the application of the table results in the correct determination
of steady-state error for all three inputs. Once again, closing the loop and applying
the FVT verified each table entry.
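For a type 1 system the ramp error also follows from the velocity error constant, e_ramp = 1/Kv with Kv = lim(s→0) s G_OL(s); the s cancels the integrator. A sketch using Example 4.4's open loop G_OL(s) = 12/[s(s + 1)(s + 4)] (the helper name is ours):

```python
# Ramp-error check for Example 4.4 via the velocity error constant.
# Kv = lim s->0 of s * G_OL(s); for gain/[s * prod(s - p)] the s cancels
# the integrator, leaving gain / prod(0 - p).

def velocity_error_constant(gain, poles_without_integrator):
    denom = 1.0
    for p in poles_without_integrator:
        denom *= (0.0 - p)
    return gain / denom

kv = velocity_error_constant(12.0, [-1.0, -4.0])   # 12/((1)(4)) = 3
e_ramp = 1.0 / kv                                  # 1/3, matching the FVT result
```

Note that Kv = 3 is exactly the steady-state gain K computed from the block diagram, which is why Table 3 lists the type 1 ramp error as 1/K.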
If controllers you are designing do not fit a standard mold, either reduce the
block diagram and calculate the steady-state errors or perform a computer simula-
tion long enough to let all transients decay. Using the FVT is generally the easiest
method for quickly determining the steady-state performance of any controller block
diagram model.
4.3.3.1 System Type Number from Bode Plots
A similar analysis, using the system type number to determine the steady state error
as a function of several inputs, can be done using Bode plots. If we recall back to
when we developed Bode plots from transfer functions, any time we had an inte-
grator in the system it added a low-frequency asymptote with a slope of -20 dB/
decade (dec) and a constant phase angle of -90 degrees. From this information it is
a straightforward process to determine the system type number and facilitate the use
of Table 3. For example, if we see that the low-frequency asymptote on our Bode
plot has a slope of -40 dB/dec and an initial phase angle of -180 degrees, then we
know that we have two integrators in our system and therefore a type 2 system. Now
it is a simple matter of using the table as was demonstrated after obtaining the type
number from the transfer function.
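The same classification can be made numerically by sampling the low-frequency magnitude slope. A sketch (our choice of example system, reusing the type 1 loop 12/[s(s + 1)(s + 4)]), where the slope should come out near -20 dB/dec for the single integrator:

```python
import math

def G(s):
    # Example type 1 open loop: 12/[s(s + 1)(s + 4)]
    return 12.0 / (s * (s + 1.0) * (s + 4.0))

# Estimate the low-frequency asymptote slope in dB/decade by sampling
# two frequencies well below the first corner frequency.
w1, w2 = 1e-4, 1e-3
db1 = 20.0 * math.log10(abs(G(1j * w1)))
db2 = 20.0 * math.log10(abs(G(1j * w2)))
slope = (db2 - db1) / (math.log10(w2) - math.log10(w1))   # about -20

type_number = round(-slope / 20.0)   # one integrator -> type 1
```

Each additional integrator lowers the sampled slope by another 20 dB/dec, so the same two-point estimate reports 2 for a type 2 system.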
First-order systems
  Time constant or pole location on the real axis                Settling time or rise time

Second-order systems
  Ratio of the imaginary to the real components of complex roots Damping ratio
  Radial distance from origin in s-plane                         Natural frequency
  Damping ratio (defined above)                                  Percent overshoot
  Natural frequency and damping ratio                            Rise time
  Damped natural frequency or imaginary component of             Peak time
    complex roots
  Natural frequency and damping ratio or real component of       Settling time
    complex roots
controllers, that is, the ability to make the system behave in a variety of ways simply
by changing an electrical potentiometer representing controller gain. A given system
might be underdamped, critically damped, overdamped, or even unstable based on
the chosen value of one gain. Therefore, our controller design becomes the means by
which we move the system poles to the locations in the s-plane specified by applying
the criteria defined in Table 4. The techniques presented in the next section will allow
us to design controllers to meet certain transient response characteristics, even
though the physical system contains actual values of damping and natural frequency
different from the desired values. Both root locus techniques and Bode plot methods
are effective in designing closed loop control systems.
Another benefit of closed loop controllers is that since we can electronically (or
mechanically) control response characteristics, we can add damping to the system
without increasing the actual power dissipation and energy losses. By adding a
velocity sensor and feedback loop to a typical position control system, it is easy to
control the damping and certainly is beneficial since less heat is generated by the
physical system.
once set in motion, will continue to grow in amplitude either until enough compo-
nents saturate or something breaks. Finally, relative stability is a term you hear a lot
when discussing control system performance. It means different things to different
people. We should think of it as the measure of how close to being unstable we
are willing to be. As we approach the unstable region, the overshoot and oscillatory
motions increase until at some point they become unacceptable. Hence a computer
controlled machining process might be designed to be always overdamped since no
overshoot is allowed while machining. This limits the response time and is not the
answer for everyone. A cruise control system, for example, might be allowed 5%
overshoot (several mph at highway speeds) to reach the desired speed faster. So we
see that different situations call for different definitions of relative stability, which is
often defined by the allowable performance specifications imposed upon the system.
The root locus and Bode plot methods are powerful tools since they allow us to
quickly estimate things like overshoot, settling time, and steady-state errors while we
are designing our system. These are the topics of discussion in the next several
sections.
Step 2: Examine the coefficients, a_i. If any one is negative or missing, the
system is unstable, and at least one root already lies in the right half of the s-plane.
Step 3: Arrange the coefficients in rows beginning with s^n and ending with s^0. The
columns taper to a single value for the s^0 row. Arrange the table as follows: the first
row holds a_0, a_2, a_4, . . . and the second row a_1, a_3, a_5, . . .
The b_i's through the end are calculated using values from the previous two rows
using the patterns below:
b_1 = (a_1 a_2 - a_0 a_3)/a_1,   b_2 = (a_1 a_4 - a_0 a_5)/a_1,   b_3 = (a_1 a_6 - a_0 a_7)/a_1

c_1 = (b_1 a_3 - a_1 b_2)/b_1,   c_2 = (b_1 a_5 - a_1 b_3)/b_1,   c_3 = (b_1 a_7 - a_1 b_4)/b_1

d_1 = (c_1 b_2 - b_1 c_2)/c_1,   d_2 = (c_1 b_3 - b_1 c_3)/c_1
This pattern can be extended until the nth row is reached and all coefficients are
determined. The last two rows will have only one column (one coefficient), the
third-from-last row two coefficients, etc. If an element turns out to be zero, a
variable representing a number close to zero, i.e., ε, can be used until the process is
completed. This indicates the presence of a pair of imaginary roots, and some part of
the system is marginally stable.
Step 4: To determine stability, examine the first column of the coefficients. If any sign
change (+ to - or - to +) occurs, it indicates the occurrence of an unstable root.
The number of sign changes corresponds to the number of unstable roots that are in
the characteristic equation.
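The row-by-row pattern above mechanizes directly. A minimal sketch (pure Python; function names are ours) that builds the first column of the array and counts its sign changes:

```python
def routh_first_column(coeffs):
    """First column of the Routh array for a polynomial given by
    descending coefficients a_n ... a_0."""
    eps = 1e-9                          # stand-in for a zero pivot element
    width = (len(coeffs) + 1) // 2
    top = (list(coeffs[0::2]) + [0.0] * width)[:width]
    bot = (list(coeffs[1::2]) + [0.0] * width)[:width]
    rows = [top, bot]
    for _ in range(len(coeffs) - 2):    # one new row per remaining power of s
        a, b = rows[-2], rows[-1]
        piv = b[0] if b[0] != 0 else eps
        new = [(piv * a[j + 1] - a[0] * b[j + 1]) / piv
               for j in range(width - 1)] + [0.0]
        rows.append(new)
    return [r[0] for r in rows]

def sign_changes(column):
    """Number of sign changes = number of right-half-plane roots."""
    signs = [1 if x > 0 else -1 for x in column if x != 0]
    return sum(1 for i in range(len(signs) - 1) if signs[i] != signs[i + 1])

col = routh_first_column([1, 4, 3, 6])   # s^3 + 4s^2 + 3s + 6: [1, 4, 1.5, 6]
```

With no sign changes in that first column, the cubic has no unstable roots; feeding in [1, 4, 3, 13] instead produces two sign changes, i.e., two right-half-plane roots.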
EXAMPLE 4.5
To illustrate the process using the block diagram in Figure 10, close the loop to find
the overall system transfer function and use the Routh-Hurwitz method to determine
stability. When we close the loop we get C/R = KG/(1 + KG), which leads to the
characteristic equation of

1 + K/[s(s + 1)(s + 3)] = 0

or

s^3 + 4s^2 + 3s + K = 0
Then the Routh-Hurwitz table becomes

s^3   1        3    (all original even-numbered coefficients)
s^2   4        K    (all original odd-numbered coefficients)
s^1   3 - K/4
s^0   K
The system will become unstable when a sign change occurs. When K/4
becomes greater than 3, the sign will change. Thus K < 12 is the allowable range
of gain before the system becomes unstable. Also, if K becomes less than zero the
sign also changes, although this is not a normal gain in a typical controller (it
becomes positive feedback). But, we could say the allowable range of K where the
system is stable is
0 < K < 12
Several quick comments are in order regarding this example. Based on what we have
already learned, we could see that the open loop poles from the block diagram are all
stable (all fall in the left-hand plane (LHP)). This example then illustrates how
closing a control feedback loop can cause a system to go unstable (range of K for
stability). Also, this is a type 1 system with one integrator. Thus it will have zero
steady-state errors from a step input and errors equal to 1/K for ramp inputs.
What the Routh-Hurwitz criterion does not give us insight into is the type of
response at various gains and how the system approaches the unstable region as
the gain K is increased. It is this ability (among others) that has made the root locus
techniques presented in the next section so popular. Most courses, textbooks, and
computer simulation packages include this technique among their repertoire of
design tools for control systems.
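The marginal gain K = 12 can also be checked directly: at the stability boundary the characteristic polynomial must have a root on the imaginary axis. Substituting s = jω into s^3 + 4s^2 + 3s + 12 gives real part 12 - 4ω^2 and imaginary part 3ω - ω^3, both of which vanish at ω = √3. A quick numeric sketch of that check:

```python
import math

def char_poly(s, K):
    """Characteristic polynomial of Example 4.5: s^3 + 4s^2 + 3s + K."""
    return s**3 + 4 * s**2 + 3 * s + K

w = math.sqrt(3.0)                    # frequency where both parts vanish
boundary = char_poly(1j * w, 12.0)    # numerically zero at K = 12

# For a stable gain (K = 11), the same substitution is nonzero:
# no closed loop pole sits on the imaginary axis.
inside = char_poly(1j * w, 11.0)
```

This is the s = jω substitution of step 9 in the root locus rules, here applied to the Routh result rather than derived from scratch.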
system, derivative controller gain, etc.), then we must rearrange the transfer function
to make this variable the gain K, or multiplier on the system. If we cannot, then we
are unable to use the rules developed here for easy plotting of the root loci. The concepts
are still the same, and the pole migrations are still plotted, but another method must be
available to solve for the poles every time the parameter is changed. With the variety of
software packages available today, this is not that difficult a problem.
To develop the guidelines for constructing root locus plots, we need to find the
roots of the characteristic equation. If we close the loop of a typical block diagram
with a feedback path, we get the following overall system transfer function and, of
interest, the characteristic equation:
C(s)/R(s) = Gc G1 G2 / (1 + Gc G1 G2 H)
If we let Gc be a proportional gain K and let G1(s) and G2(s) be combined and repre-
sented by one system transfer function G(s), then the characteristic equation can be
represented by

CE = 1 + KGH

The roots of the characteristic equation are determined by setting it equal to zero,
where

1 + K G(s) H(s) = 0

or

K G(s) H(s) = -1
These equations provide the foundation for root locus plotting techniques. Since the
product G(s)H(s) contains both a numerator and denominator represented by a
polynomial in s, it can be written in terms of the roots of the numerator, called
zeros, and the roots of the denominator, called poles, as follows:

K (s - z_1)(s - z_2) ··· (s - z_m) / [(s - p_1)(s - p_2)(s - p_3) ··· (s - p_n)] = -1

Thus using this notation we have m zeros and n poles (from the subscript notation).
Since this equation equals -1 and contains complex variables, we can write it as
two conditions that must always be met in order for the product K G(s)H(s) to equal
-1. These two conditions are called the angle condition and the magnitude condi-
tion.
Angle condition: ∠(s - z_1) + ∠(s - z_2) + ··· + ∠(s - z_m) - ∠(s - p_1)
- ∠(s - p_2) - ··· - ∠(s - p_n)
= odd multiple of ±180 degrees (the negative sign portion)

Magnitude condition: K |s - z_1||s - z_2| ··· |s - z_m| / (|s - p_1||s - p_2||s - p_3| ··· |s - p_n|) = 1
The angle condition alone is responsible for the shape of the plot and the
magnitude condition for the location along the plot line. Therefore, the whole
root locus plot can be drawn using the angle condition. The only time we use the
magnitude condition is to locate our position along the plot. For any physical system,
n ≥ m, and this simplifies the rules used to construct root locus plots. The basic rules
for developing root locus plots are given below in Table 5. For consistency, poles are
plotted using x's and zeros are plotted using o's. This simplifies the labeling process.
The rules are based on the angle and magnitude conditions, as will be explained
through the use of several examples.
Step 1: Provides the groundwork for developing the root locus plot. We develop the
open loop transfer function by examining our block diagram and connecting the
transfer functions around the complete loop. The resulting open loop transfer func-
tion needs to be factored to find the roots of the denominator (poles) and numerator
(zeros). For systems larger than second-order, there are many calculators and com-
puters capable of finding the roots for us.
1. From the open loop transfer function, G(s)H(s), factor the numerator and
   denominator to locate the zeros and poles in the system.
2. Locate the n poles on the s-plane using x's. Each loci path begins at a pole; hence
   the number of paths is equal to the number of poles, n.
3. Locate the m zeros on the s-plane using o's. Each loci path will end at a zero, if
   available; the extra paths are asymptotes and head toward infinity. The number of
   asymptotes therefore equals n - m.
4. To meet the angle condition, the asymptotes will have these angles from the positive
   real axis:
   If one asymptote, the angle = 180 degrees
   Two asymptotes, angles = 90 and 270 degrees
   Three asymptotes, angles = ±60 and 180 degrees
   Four asymptotes, angles = ±45 and ±135 degrees
5. All asymptotes intersect the real axis at the same point. The point, σ, is found by
   σ = [(sum of the poles) - (sum of the zeros)] / (number of asymptotes)
6. The loci paths include all portions of the real axis that are to the left of an odd
   number of poles and zeros (complex conjugates cancel each other).
7. When two loci approach a common point on the real axis, they split away from or
   join the axis at an angle of ±90 degrees. The break-away/break-in points are found
   by solving the characteristic equation for K, taking the derivative w.r.t. s, and
   setting dK/ds = 0. The roots of dK/ds = 0 occurring on valid sections of the real
   axis are the break points.
8. Departure angles from complex poles or arrival angles to complex zeros can be
   found by applying the angle condition to a test point in the vicinity of the root.
9. The point at which the loci cross the imaginary axis and thus go unstable can be
   found using the Routh-Hurwitz stability criterion or by setting s = jω and solving
   for K (can be a lot of math).
10. The system gain K can be found by picking the pole locations on the loci path that
    correspond to the desired transient response and applying the magnitude condition
    to solve for K. When K = 0, the poles start at the open loop poles; as K → ∞, the
    poles approach available zeros or asymptotes.
Analog Control System Performance 159
Step 2: Now that we have our poles (and zeros) from step 1, draw our s-plane axes
and plot the pole locations using x's as the symbols. When the gain K = 0 in our system,
these pole locations are the beginning of each loci path. If two poles are identical
(i.e., repeated roots), then two paths will begin at their location. Each pole is the
beginning point for one root loci path.
Step 3: This is the same procedure as followed in step 2, except that now we locate each
zero in the s-plane using o's as symbols. Each zero location is an attractor of root loci
paths, and as K → ∞, every zero location will have a loci path approach its location
in the s-plane. The remaining steps help us determine how the root loci paths travel
from the poles to the zeros (or asymptotes if there are more poles than zeros).
Step 4: It is easy to see from steps 2 and 3 that if we have more poles than zeros, then
some root loci paths do not have a zero to travel to. In this case (which is actually the
most common case), we will have some paths leaving the s-plane as the gain K is
increased. Fortunately, because of the angle condition, these paths are defined based
on the number of asymptotes that we have in our plot. The angles that the asymp-
totes make with the positive real axis are given in Table 6.
Step 5: All asymptotes intersect the real axis at a common point. This intersection
point can be found from the location of our poles and zeros. Once we know the
location, coupled with the angles calculated in step 4, we can plot the asymptote lines
on the s-plane. Remember that the root loci paths approach the asymptotes as K
approaches infinity; they do not necessarily travel directly to the intersection point
and lie on the lines themselves. The intersection point, σ, is found by summing the
poles and zeros and dividing by the number of asymptotes, n - m:

σ = [Σ(i=1..n) p_i - Σ(i=1..m) z_i] / (n - m)

It should be noted that only the real portions of complex conjugate roots need to be
included in the summations since the complex portions are always opposite of each
other and cancel when the pair of complex conjugate roots is summed.
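Steps 4 and 5 are purely arithmetic, so they are easy to script. A sketch (function name ours) applied to the open loop poles of Example 4.5, G(s)H(s) = 1/[s(s + 1)(s + 3)]:

```python
def asymptotes(poles, zeros):
    """Centroid and angles (degrees) of the root locus asymptotes."""
    n, m = len(poles), len(zeros)
    k = n - m
    sigma = (sum(complex(p).real for p in poles)
             - sum(complex(z).real for z in zeros)) / k
    # theta = 180(2i + 1)/(n - m) degrees, i = 0, 1, ..., k - 1
    angles = sorted((180.0 * (2 * i + 1) / k) % 360.0 for i in range(k))
    return sigma, angles

sigma, angles = asymptotes([0, -1, -3], [])
# sigma = (0 - 1 - 3)/3 = -4/3; angles = 60, 180, 300 degrees
```

Using `complex(p).real` means complex conjugate pole pairs can be passed in whole; their imaginary parts cancel in the sum, as noted in the text.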
Step 6: Now we are finally ready to start plotting the root locus paths in the s-plane.
We begin by including all sections of the real axis that fall to the left of an odd
number of poles and zeros (complex conjugates cancel each other). Each one of these
sections meets the angle condition and is a valid segment of the root locus paths.
This rule is easy to apply. Locate the right-most pole or zero and, working our way
to the left, mark every other section of real axis that falls between a pole or zero,
beginning with the first segment, since it is to the left of an odd number (1).

Number of asymptotes, n - m     Angles with the positive real axis
1                               θ = 180 degrees
2                               θ = 90 and 270 degrees
3                               θ = ±60 and 180 degrees
4                               θ = ±45 and ±135 degrees
General number, k               θ_k = 180(2k + 1)/(n - m) degrees, k = 0, 1, 2, . . .
Step 7: If a valid section of the real axis falls between two poles, then the paths must
necessarily break away from real axis and travel to either zeros or asymptotes. If a
valid section of the real axis falls between two zeros, then the paths must join the axis
between these two points and travel to the zeros as K approaches infinity. The points
where the root locus paths leave or join the real axis are termed break-away and
break-in points. Both paths, whether leaving or joining, do so at the same point, and
at an angle of 90 degrees.
Since any path not on the real axis involves imaginary portions, which always
occur in complex conjugate pairs, root locus plots are symmetrical around the real
axis. If we look at the asymptote angles derived in step 4, we see that they are always
mirrored with respect to the real axis.
To solve for the break-away and/or break-in points, we solve the characteristic
equation for the gain K, take the derivative with respect to s and set it to zero, and
solve for the roots of the resulting equation. Valid points will fall on those sections
of the real axis containing the root locus paths.

dK/ds = 0; solve for the roots

By using this technique we are finding the rate of change of K with respect to the rate
of change of s. Break-away points occur at local maximums and break-in points at
local minimums (found when we set the derivative to zero and solve for the roots).
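For the loop of Example 4.5, the characteristic equation gives K = -s(s + 1)(s + 3) = -(s^3 + 4s^2 + 3s), so dK/ds = 0 reduces to 3s^2 + 8s + 3 = 0. A sketch solving that quadratic and keeping only the root on the valid real-axis segment between the poles at 0 and -1:

```python
import math

# dK/ds = 0 for K = -(s^3 + 4s^2 + 3s)  ->  3s^2 + 8s + 3 = 0
a, b, c = 3.0, 8.0, 3.0
disc = math.sqrt(b * b - 4 * a * c)
candidates = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]

# Only the root on the valid segment (-1, 0) is a break-away point;
# the other root falls on an invalid section and is discarded.
breakaway = [s for s in candidates if -1.0 < s < 0.0][0]   # about -0.451
```

The discarded root (about -2.215) lies between -1 and -3, a section of the real axis that is not part of the locus for this system.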
Step 8: To determine the direction with which the paths leave complex conjugate
poles, or the direction from which the paths arrive at complex conjugate zeros, we can
apply the angle condition to the pole or zero in question. We will use the fact that the
angle condition is satisfied whenever the angles add up to an odd multiple of 180
degrees. That is, if we rotate a vector lying on the positive real axis either +180 or
-180 degrees, it then lies on the negative real axis, giving us the -1 relationship
[K G(s)H(s) = -1]. It is also true if we rotate the vector 540 or 900 degrees, 1½
times around or 2½ times around, respectively.
Any test point, s, that falls on the root locus path will have to meet the angle
condition. If we place the test point very close to the pole or zero in question, then
the angle that the test point is relative to pole or zero and that meets the angle
condition is the departure or arrival angle. Graphically this may be shown as in
Figure 11. In the s-plane, we have poles at -2 ± 2j and -1 and a zero at -3. If the
pole in question (regarding the angle of departure from it) is at -2 + 2j, then the sum
of all the angles from all other poles and zeros, and from the test point near the pole,
must be an odd multiple of 180 degrees. This can be expressed as
Σ(i=1..m) φ_zi - Σ(i=1..n) φ_pi = 180(2k + 1) degrees,   k = 0, 1, 2, . . .
Since zeros are in the numerator, they contribute positively to the summation and
poles, and since they are in the denominator, contribute negatively. The way Figure
Analog Control System Performance 161
11 is labeled, f1 is the angle of departure relative to the real axis. To show how these
would sum, let us calculate f1 :
-φ1 - φ2 - φ3 + φ4 = 180 degrees (2k + 1)
-φ1 - 90 degrees - (90 degrees + tan⁻¹ 0.5) + (90 degrees - tan⁻¹ 0.5)
    = 180 degrees (2k + 1)
-φ1 - 143 degrees = -180 degrees
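This bookkeeping is easy to automate. Below is a hedged Python sketch (the function name is ours) that applies the angle condition to the pole/zero pattern of Figure 11, poles at -2 ± 2j and -1 with a zero at -3:

```python
from math import pi
from cmath import phase

def departure_angle(pole, other_poles, zeros, k=0):
    """Departure angle from a complex pole via the angle condition:
    (sum of zero angles) - (sum of pole angles) = 180(2k + 1) degrees."""
    deg = lambda a, b: phase(a - b) * 180.0 / pi
    zero_sum = sum(deg(pole, z) for z in zeros)
    pole_sum = sum(deg(pole, p) for p in other_poles)
    phi = zero_sum - pole_sum - 180.0 * (2 * k + 1)
    return (phi + 180.0) % 360.0 - 180.0  # wrap into [-180, 180)

print(departure_angle(-2 + 2j, [-2 - 2j, -1.0], [-3.0]))  # about 36.9 degrees
```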
value of), then we can also use the methods from this step to find the gain K where
the system goes unstable, as step 9 did. Remember that our magnitude condition was
stated earlier as
K (|s - z1| |s - z2| ... |s - zm|) / (|s - p1| |s - p2| |s - p3| ... |s - pn|) = 1
Graphically, the magnitude condition means that if we calculate the distances from
each pole and zero to the desired point on the root locus path, then K is chosen such
that the total quantity of the factor becomes 1. For example, the distance |s - z1|, as
shown in the magnitude condition, is the distance between the zero at z1 and the
desired location (pole) along our path.
Analytically, we may also solve for K by setting the s in each term equal to the
desired pole location and multiplying each term out. The magnitude can then be
calculated by taking the square root of the sum of the real terms squared and the
imaginary terms squared.
For both methods, if there are no zeros in our system the numerator is equal to
1. This allows us to cross multiply and K is found using the simplified equation
below:
K = |s - p1| |s - p2| |s - p3| ... |s - pn|
We can apply the magnitude condition any time we wish to know the gain required
for any location on our root locus plots. This is the system gain; to calculate the
controller gain we need to take into account any additional gain that is factored
out in front of G(s)H(s) when the transfer function is factored.
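As an informal Python sketch of the graphical procedure (the helper name is ours), the gain is just the product of distances to the poles divided by the product of distances to the zeros:

```python
from math import prod  # Python 3.8+

def gain_at(s, poles, zeros):
    """System gain K from the magnitude condition |K G(s)H(s)| = 1."""
    return prod(abs(s - p) for p in poles) / prod(abs(s - z) for z in zeros)

# Illustrative values: poles at -1 and -3, no zeros, desired pole at s = -2 + 2j
print(gain_at(-2 + 2j, [-1.0, -3.0], []))  # sqrt(5) * sqrt(5) = 5
```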
To illustrate the effectiveness of root locus plots, we will work several different
examples in the remainder of this section. The examples will begin with a simple
system and progress to slightly more complex systems as we learn the basics of
developing root locus plots using the steps above.
EXAMPLE 4.6
Develop the root locus plot for the simple first-order system represented by the open
loop transfer function below:
G(s)H(s) = 2/(s + 3)
Step 1: The transfer function is already factored; there are no zeros and one pole.
The pole is at s = -3.
Step 2: Locate the pole on the s-plane (using an x since it is a pole) as in Figure 12:
Step 3: There are no zeros.
Step 4: Since we have one pole, no zeros, then n - m = 1 - 0 = 1, and we have one
asymptote. For one asymptote, the angle with the positive real axis is 180 degrees
and the asymptote is the negative real axis, all the way out to -∞.
Step 5: There is no intersection point with only one asymptote. It is on the real axis.
Step 6: With only one pole, all sections of the real axis to the left of the pole are valid.
For our first-order system this coincides with the asymptote.
Figure 12 Example: locating poles and zeros in the s-plane (first-order system).
When K equals 1/2, we end up with the closed loop transfer function as:
C(s)/R(s) = 1/(s + 4)
This is exactly the same controller gain solved for using the root locus plot and
applying the magnitude condition when the desired pole was at s = -4. For first- or
second-order systems the analytical solution is quite simple and can be used in place
of or to verify the root locus plot. For example, if we take the closed loop transfer
function above with K still a variable in the denominator, it is easy to see that when
K = 0 the pole is at -3, our starting point, and as K is increased the pole moves
farther and farther to the left. As K approaches infinity, the pole moves out to -∞.
Thus both our beginning point (the open loop poles) and our asymptotes are verified
as we increase the gain K. Finally, since the roots of a second-order system are also
easily solved for as K varies (quadratic equation), this same method can be used (as
will be shown in the next example). Beyond second-order systems, the root locus
techniques are much easier.
EXAMPLE 4.7
Develop the root locus plot for the second-order system represented by the block
diagram in Figure 15.
Step 1: The transfer function is already factored; there are no zeros and two poles.
The poles are at s = -1 and s = -3.
Step 2: Locate the poles on the s-plane (using x's) as shown in Figure 16:
Step 4: Since we have two poles, no zeros, then n - m = 2 - 0 = 2, and we have two
asymptotes. For two asymptotes, the angles relative to the positive real axis are 90
degrees and 270 degrees.
Step 5: The intersection point is found by taking the sum of the poles minus the sum
of the zeros, all divided by the number of asymptotes. For this example,
(-3 - 1 - 0)/2 = -2
Step 6: With two poles, the section on the real axis between the two poles is the only
valid portion on the axis. For our example this is between -1 and -3. This section
also includes the intersection point of the asymptotes.
Step 7: There is one break-away point since the root locus paths begin on the real
axis and end along the asymptotes. To find the break-away point, solve the char-
acteristic equation for K and take the derivative with respect to s. The characteristic
equation is found by closing the loop and results in the following polynomial:
s² + 4s + 3 + 4K = 0 or K = -(s² + 4s + 3)/4
Taking the derivative with respect to s:
dK/ds = -(2s + 4)/4 = 0
s = -2
The break-away point for the second-order system coincides with the intersection
point for the asymptotes.
Step 8: The angles at which the locus paths leave the poles in our system are 180
degrees (pole at -1) and 0 degrees (pole at -3). This also coincides with the valid
Figure 16 Example: locating poles and zeros in the s-plane (second-order system).
section of the real axis as determined earlier. These directions can also be ascertained
from the earlier steps since the only valid section of real axis is between the two poles
and the break-away point also falls in this section. We can now plot our final root
locus plot as shown in Figure 17.
Step 9: The asymptotes never cross the imaginary axis and the system cannot become
unstable, even when the loop is closed.
Step 10: For this example let us suppose that design goals are to minimize the rise
time while keeping the percent overshoot less than 5%. The overshoot specification
means that we need a damping ratio of approximately 0.7 or greater. We will choose
0.707 since this corresponds to a radial line, making an angle of 45 degrees with the
negative real axis. Adding the line of constant damping ratio to the root locus plot
defines our desired pole locations where it crosses the root locus path. Our poles
should be placed at s = -2 ± 2j as shown in Figure 18.
Now we need to find the gain K required for placing the poles at this position.
Remember that the poles begin at -1 and -3 when K = 0, they both travel toward
-2 as K is increased, one breaks up, one breaks down, and they follow the asymp-
totes as K continues to increase. To solve for K, we apply the magnitude condition.
Since we do not have any zeros in this system, the numerator is equal to one and we
find K to be
K = |s - p1| |s - p2| = √(1² + 2²) · √(1² + 2²) = 5
Figure 18 Example: using root locus plot to locate desired response (second order).
s² + 4s + 3 + 4K = 0
If we solve for the roots using the quadratic equation, we can leave K as a variable
and check the various locus paths by varying K and plotting the resulting roots to
the equation.
s1,2 = (-4 ± √(4² - 4(3 + 4K)))/2 = -2 ± √(1 - 4K)
Let us check various points along our root locus paths by using several values of K.
So as in the previous example, we are able to analytically solve for the roots as a
function of K and verify our plot developed using the rules from this section. In fact,
it is quite easy to see from the quadratic equation that our poles start at our open
loop poles when K = 0, progress to the break-away point when the square root term
becomes zero, and then progress along the asymptotes as K approaches infinity.
Once we leave the real axis, the real term always remains at -2 and increasing K
only increases the imaginary component, exactly as the root locus plot illustrated.
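The check described above is easy to reproduce; a small Python sweep (ours, not the text's Matlab) of K over the characteristic equation s² + 4s + 3 + 4K = 0:

```python
import numpy as np

# Roots of s^2 + 4s + 3 + 4K = 0 for a few gains:
# K = 0    -> -1 and -3 (the open loop poles)
# K = 0.25 -> a double root at -2 (the break-away point)
# K = 1.25 -> -2 ± 2j (the design point from this example)
for K in [0.0, 0.25, 1.25, 5.0]:
    print(K, np.roots([1.0, 4.0, 3.0 + 4.0 * K]))
```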
From here the remaining examples will only use the root locus techniques since
beyond second-order, no easy closed form solution exists for determining the
roots of the characteristic equation. (Note: it does exist for third-order polynomials,
but it is a multistep process.)
EXAMPLE 4.8
Develop the root locus plot for the block diagram in Figure 19. This model was
already used in Example 4.5 for the Routh-Hurwitz method. Our conditions then
arise from:
K/(s(s + 1)(s + 3)) = -1
Figure 19 Example: block diagram for root locus plot (third order).
Figure 20 Example: locating poles and zeros in the s-plane (third-order system).
Step 1: The transfer function is already factored; there are no zeros and three poles.
The poles are at s = 0, s = -1, and s = -3.
Step 2: Locate the poles on the s-plane (using x's) as shown in Figure 20.
Step 4: Since we have three poles, no zeros, then n - m = 3 - 0 = 3, and we have three
asymptotes. For three asymptotes, the angles relative to the positive real axis are ±60
degrees and 180 degrees.
Step 5: The intersection point is found by taking the sum of the poles minus the sum
of the zeros, all divided by the number of asymptotes. This example is given in Figure
21.
(0 - 3 - 1 - 0)/3 = -4/3
Step 6: With three poles, the root locus sections on the real axis lie between the two
poles at 0 and -1, and to the left of the pole at -3. In this example the asymptotes'
intersection point does not fall in one of the valid regions.
Step 7: There is one break-away point since the root locus paths begin on the real
axis between 0 and -1 and end along the asymptotes. To find the break-away point,
solve the characteristic equation for K and take the derivative with respect to s. The
characteristic equation is found by closing the loop and results in the following
polynomial:
s³ + 4s² + 3s + K = 0 or K = -(s³ + 4s² + 3s)
dK/ds = -(3s² + 8s + 3) = 0
s = -0.45, s = -2.22
Only one break-away point coincides with the valid section of real axis; the root
locus paths will leave the real axis at -0.45 and start approaching the asymptotes.
Step 8: The angles at which the locus paths leave the poles in our system are clear by
looking at the valid sections of real axis and knowing that the paths leave the poles
along these sections. We can now plot our final root locus plot as shown in Figure 22.
Step 9: Knowing that the root loci paths follow the asymptotes as K increases means
that any time we have three or more asymptotes, the system is capable of becoming
unstable since at least some of the asymptotes head toward the right-hand plane
(RHP). To find where the asymptotes cross the imaginary axis for this example, we
can close the loop and apply the Routh-Hurwitz stability criterion; this was done for
this same system in Example 4.5. By visually examining the root locus plot, we could
also apply the magnitude condition using our desired pole locations on the ima-
ginary axis to solve for gain K where the system becomes unstable.
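A quick way to get that crossing without the full Routh table is to substitute s = jω into the characteristic equation and separate real and imaginary parts; a Python sketch of the arithmetic:

```python
from math import sqrt
import numpy as np

# s^3 + 4s^2 + 3s + K = 0 with s = j*w:
#   imaginary part: 3*w - w**3 = 0  ->  w = sqrt(3)
#   real part:      K - 4*w**2 = 0  ->  K = 12
w = sqrt(3.0)
K = 4.0 * w**2
print(w, K)

# Cross-check: at K = 12 the closed loop poles are -4 and ±j*sqrt(3)
print(np.roots([1.0, 4.0, 3.0, K]))
```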
Step 10: From here we have several options. If we want to tune the system to have
the fastest possible response, we could choose the gain K where the two paths
between 0 and 1 just begin to leave the real axis. These types of techniques will
be covered in more detail in the next chapter when we discuss designing control
systems.
EXAMPLE 4.9
Develop the root locus plot for the block diagram in Figure 23.
Figure 23 Example: block diagram for root locus plot (fourth-order system).
Step 1: There is one zero and four poles. When the polynomial in the denominator is
factored, we find the pole locations to be at s = 0, s = -2, and s = -1 ± 1j. The next
two steps are to place the poles and zero in the s-plane.
Step 2: Locate the poles on the s-plane (using x's), shown in Figure 24.
Step 3: There is one zero at s = -3, shown in Figure 24.
Step 4: Since we have four poles, one zero, then n - m = 4 - 1 = 3, and we have
three asymptotes. For three asymptotes, the angles relative to the positive real axis
are ±60 degrees and 180 degrees.
Step 5: The intersection point is found by taking the sum of the poles minus the sum
of the zeros, all divided by the number of asymptotes. This example is shown in
Figure 25.
(0 - 2 + (-1 + 1j) + (-1 - 1j) - (-3))/3 = -1/3
Step 6: With four poles and a zero, the root locus sections on the real axis lie between
the two poles on the axis at 0 and -2, and to the left of the zero at -3. In this example
the asymptotes' intersection point does fall in one of the valid regions.
Step 7: There is one break-away point since the root locus paths begin on the real
axis between 0 and -2 and end along the asymptotes. For this example there is also a
break-in point since the zero lies on the real axis and must have one path approach it as
K goes to infinity. To the left of the zero is also part of the root locus plot (an
asymptote), and two paths must come together at this break-in point.
To find the points, solve the characteristic equation for K and take the deri-
vative with respect to s. The characteristic equation is found by closing the loop and
results in the following polynomial:
Figure 24 Example: locating poles and zeros in the s-plane (fourth-order system).
Figure 25 Example: location of asymptotes for fourth-order system with one zero.
s⁴ + 4s³ + 6s² + 4s + Ks + 3K = 0
K = -(s⁴ + 4s³ + 6s² + 4s)/(s + 3)
Taking the derivative with respect to s (intermediate math steps required),
φ1 = -63.4 degrees
Therefore, the angle at which the path leaves the pole is -63.4 degrees relative to
the real axis. We can use this information to see that the paths leaving the complex
poles head directly toward the ±60 degree asymptotes. This leaves the break-away
point on the real axis to wrap back around and rejoin the axis at s = -3.65. After
joining, one path progresses to the zero at -3 and the other path follows the
asymptote to infinity. We can now plot our final root locus plot as shown in
Figure 27.
Step 9: The system is capable of becoming unstable since at least some of the
asymptotes head toward the RHP (any time we have three or more asymptotes). To find
where the asymptotes cross the imaginary axis for this example, we use the char-
acteristic equation developed for step 7 and apply the Routh-Hurwitz stability cri-
terion. Usually, adequate resolution can be achieved by approximating where the
paths cross the axis and applying the magnitude condition using our desired pole
locations on the imaginary axis to solve for gain K where the system becomes
unstable. Since the Routh-Hurwitz method has already been demonstrated several
times, let us assume that our paths cross the imaginary axis at s = ±1j (from the
plot) and use the magnitude condition where |KG(s)H(s)| = 1.
K |s - z1| / (|s - p1| |s - p2| |s - p3| |s - p4|) = 1
K √(3² + 1²) / (√(0² + 1²) √(2² + 1²) √(1² + 0²) √(1² + 2²)) = K √10 / (√5 √5) = 1
K ≈ 1.58
Since there is no gain that can be factored out of G(s)H(s), this represents the
approximate gain to which the controller can be set before the system goes unstable.
Figure 27 Example: root locus plot for fourth-order system with one zero.
As a note regarding this procedure, we can include the open loop transfer function
gain in the magnitude equation during the process, in which case the K we solve for
will always be the desired proportional controller gain.
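The arithmetic above can be verified in a few lines of Python (a sketch, using the crossing point s = 1j assumed in the text):

```python
from math import prod

s = 1j
poles = [0.0, -2.0, -1 + 1j, -1 - 1j]
zeros = [-3.0]
# Magnitude condition: K = (product of pole distances) / (product of zero distances)
K = prod(abs(s - p) for p in poles) / prod(abs(s - z) for z in zeros)
print(round(K, 2))  # 1.58
```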
Step 10: From here we have several options. If we want to tune the system to have
the fastest possible response, we could choose the gain K where all poles are as far
left as possible. If one pole is close to the origin, the total response will still be slower.
No matter what gain we choose, this system will experience overshoot and oscillation
in response to a step input. At certain gains all four poles will be oscillatory,
although with the furthest left pair decaying more quickly than the pair
approaching the imaginary axis. As we progress to the next chapter, it is these
types of decisions regarding the design of our controller that we wish to study and
develop guidelines for.
EXAMPLE 4.10
The remaining example for this section will present the Matlab code required to
solve Examples 4.6 through 4.9. Matlab is used to generate root locus plots equivalent
to those developed manually in each example. The plots for each earlier example are
given in Figure 28.
For Example 4.6, G(s)H(s) = 2/(s + 3)
the frequency at which the phase angle is -180 degrees and measuring the distance
that the magnitude line is below 0 dB. The measurement of phase and gain margin is
shown in Figure 29.
If the magnitude plot is not below the 0 dB line when the system is at -180
degrees, the system is unstable. It can be thought of like this: If the system is 180
degrees out of phase, the previous cycle adds to the new one. This is similar to
pushing a child on a swing when each push is timed to add to the existing motion.
Thus, if the magnitude is above 0 dB at this point, each cycle adds to the previous
one and the oscillations begin to grow, making the system unstable. If the magnitude
ratio is less than 1, then even though each push is still 180 degrees out of
phase, the output amplitude does not grow larger. The phase margin complements
the gain margin but starts with a magnitude condition and checks the corresponding
phase angle. When the magnitude plot crosses the 0 dB line, the phase margin is
calculated as the distance the phase angle is above -180 degrees. At the point of
marginal stability, these two points are one and the same (for most systems; see later
in this section).
Magnitude = |d| / (|a| |b| |c|)
and the phase angle is
Phase = φd - φa - φb - φc
Just from this simple example, we can see the relationships between the two
plots:
For any positive ω, the pole at zero contributes a constant -90 degrees of
phase. Since a pole at zero is simply an integrator, this confirms how an
integrator adds in the frequency domain.
For each pole or zero not at the origin, the angle of contribution
starts at zero and progresses to ±90 degrees. As we saw in root locus plots,
poles add negatively and zeros add positively. As when developing Bode
plots, this is exactly the relationship attributed to first-order terms in the
numerator and denominator.
As o increases, so does the length of each vector connecting the poles and
zero to it, and the overall magnitude decreases. The more poles we have, the
quicker the denominator grows and the steeper the slope of the high frequency
asymptote becomes. Once again, this confirms our experience with Bode
plots.
Finally, since we need to reach -180 degrees of phase angle before the
system can become unstable, we need to have at least three more poles
than zeros. With the difference only being two, the maximum angle only
approaches -180 degrees as ω approaches infinity. This confirms what we
found when we looked at the root locus plots, where for any system with
n - m ≥ 3 the asymptotes will cross into the right-hand plane. Similarly, the
only way for a Bode plot to show an unstable system is if the order
difference is also three or greater.
To complete the picture, we must remember that a root locus plot is the result
of closing the loop and varying the gain K in the system; most Bode plots are found
using open loop input/output relationships, and these analogies from the s-plane to
the Bode plot were determined from the open loop poles. The question then becomes
how can closed loop system performance be determined from open loop Bode plots?
In most cases we can use the gain margin and phase margin defined above to
predict how the system would respond when the loop is closed. Gain and phase
margins are easy to measure, but the open loop system itself must be stable. As we
will see, this method does require some caution because some systems are not cor-
rectly diagnosed when using the gain margin to determine system stability. It is also
possible to use Nyquist and Nichols plots, and sometimes desirable, to determine the
closed loop system characteristics in frequency domain. Of course, we can also just
close the loop and construct another Bode plot to examine the closed loop response
characteristics. With much of the design work being done on computers, many
manual methods are finding less use.
4.4.3.3 Stability in the Frequency Domain
In this section we further examine the concept of system stability using the gain
margin and phase margin measurements as defined above. Recalling Figure 29,
where the margins are defined, we should recognize that if we were to increase the
gain K in the system, the magnitude plot shifts vertically and the phase angle
plot does not change at all. Since we know that gain margin is the distance below 0 dB
when the system is 180 degrees out of phase, then increasing the gain K an amount of
the gain margin will bring us to the point of marginal stability (0 dB gain margin). For
systems with less than 2 orders of difference between the denominator and numerator,
the phase angle never is greater than 180 degrees and the gain margin cannot be
measured. Of course, remembering this case in root locus plots, there were two
asymptotes and the system never became unstable as the gain went to infinity. For
systems where the phase angle becomes greater than 180 degrees, we are able to
increase K to where the system becomes unstable. For a Bode plot using a gain K
equal to the gain margin, this marginal stability condition is shown in Figure 31.
Since multiplying factors add linearly on Bode plots, the system becomes mar-
ginally stable when the existing system gain is multiplied by another gain of 1.3. In
this example, both the gain margin and phase margin approach zero at the same time
and in the same place. When this happens, both the phase and gain margin are good
indicators of system stability. One problem that may occur is shown in Figure 32
where the phase angle is the only indicator accurately telling us that the system is
unstable. The gain margin, in error, predicts that the system is stable.
Figure 32 Differences between gain and phase margin with increasing phase.
Therefore, we see that although the gain margin indicates a stable system, the
phase margin demonstrates that in fact the system is unstable. Even though the gain
margin is often described as the increase in gain possible before the system becomes
unstable, the phase margin is a much more reliable indicator of system stability. For
most systems the two measures of stability correlate well and can be confirmed by
examining the Bode plot. If we recall the section on nonminimum phase
systems, we saw how delays in the system change the phase angle and not the
magnitude lines on Bode plots. Another way to consider the phase margin, then,
is as a measure of how tolerant the stability is to delays in the system.
Containing the same information but plotted differently are Nyquist plots. In
fact, the same gain and phase measurements are made. Since the Nyquist plot com-
bines the magnitude and phase relationships, it becomes very easy to see whether or
not a system is stable. As long as the system is open loop stable (no poles in the RHP
when K = 0), the Nyquist stability theorem is easy to apply and use. If there are
possibly zeros or poles in the RHP, the mathematical proofs become much more
tedious and subject to many constraints. What follows here is a very cursory
and conceptual introduction to get us to the point where we at least understand what
a Nyquist plot tells us regarding the stability of the closed loop system using the open
loop frequency data. To begin with, let us revisit the s-plane as shown in Figure 33.
What we need to picture are the angles that the various poles and zeros will go
through as we move a test point s around the contour in the RHP. If we let the
contour begin at s = 0, progress up the imaginary axis to jω = +j∞, follow the
semicircle (also with a radius approaching ∞) around to the negative imaginary axis,
and back up to s = 0, then we mathematically have included the entire RHP. If a
pole or zero is in the RHP, the angle it
makes with the point s moving along the contour line will make a complete circle of
360 degrees. Any poles or zeros not in the right-hand plane contribute a net angle
rotation of zero. Finally, let our mapping be our characteristic equation,
F(s) = 1 + G(s)H(s)
Plot G(s)H(s), the open loop transfer function, on the Nyquist plot, and the point of
interest relative to the roots of our system occurs at the point -1:
G(s)H(s) = -1
The important result is this: When we developed our Nyquist plot earlier, we
let o begin at zero and increase until it approached infinity (taking the data from the
Bode plot in Sec. 3.5.2). Thus, we have just completed one half of the contour path.
The path of o from negative infinity is just the reverse, or mirror, image of our
existing Nyquist plot. Now, if we look at the point -1 on the Nyquist plot and count
the number of times it circles the point, we can draw conclusions about the stability
of our closed loop system. The concept of using the mapping of the RHP and
checking for the number of encirclements about the -1 point is derived from the
theorem known as Cauchy's principle of the argument. If there are no poles or zeros in
the RHP, the -1 point will never be encircled (including by the mirror image
of the common Nyquist plot). There are several potential problems. One problem
occurs because the angle contributions for poles and zeros are opposite, and if one
pole and one zero are in the RHP, the angles will cancel each other during the
mapping. The difference is in the direction, as the angle from a pole in the RHP
circles the -1 point in the counterclockwise direction and a zero in the clockwise
direction. A second problem is more mathematical in nature where if there are any
poles or zeros on the imaginary axis, the theorem reaches points of singularity at
these locations. The normal procedure is to make a small deviation around these
points. The Cauchy criterion can now be stated: the number of times that G(s)H(s)
encircles the -1 point is equal to the number of zeros minus the number of poles in the
contour (picked to be the entire RHP). Encirclements are counted positive when they
are in the same direction as the contour path. This allows us to write the Nyquist
stability criterion as follows:
A system is stable if Z = 0, where
Z = N + P
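For a system with no poles on the imaginary axis, the encirclement count can be approximated numerically by accumulating the angle of G(jω)H(jω) about the -1 point. A hedged Python sketch (the plant and function name are ours; it deliberately avoids an integrator, which would require the small detour around the origin mentioned above):

```python
import numpy as np

def encirclements(num, den, w_max=1e3, n=200001):
    """Net windings of G(jw)H(jw) about -1 for w from -w_max to +w_max.

    Counterclockwise is positive here, so the clockwise encirclements
    produced by closed loop RHP roots come out negative.
    """
    w = np.linspace(-w_max, w_max, n)
    G = np.polyval(num, 1j * w) / np.polyval(den, 1j * w)
    angles = np.unwrap(np.angle(G + 1.0))  # angle measured about the -1 point
    return (angles[-1] - angles[0]) / (2.0 * np.pi)

den = [1.0, 6.0, 11.0, 6.0]                 # (s+1)(s+2)(s+3), open loop stable
print(round(encirclements([6.0], den)))     # 0 -> closed loop stable
print(round(encirclements([120.0], den)))   # -2 -> two closed loop RHP poles
```

With no open loop RHP poles (P = 0), any nonzero count signals closed loop RHP roots, matching the Routh-Hurwitz result for this plant.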
Adding the mirror image to the Nyquist plot developed earlier, shown in
Figure 34, allows us to now apply the theorem and check for system stability. A
quick inspection reveals that the closed loop system will be stable since the plot never
encircles the -1 point. Since it never circles the point CW or CCW, there are neither
poles nor zeros in the RHP. Remember that in general the top half of the curve is not
shown and that including it may help you visualize the number of times the path
encircles -1.
To conclude this section, let us connect what we have learned with root locus
plots, Bode plots, and Nyquist plots to understand how the stability issues are
related. With the Bode plot we already defined and discussed the use of gain margin
and phase margin as measures of system stability. Moving to the Nyquist plot allows
the same measurements and comments to apply. If we consider the process of taking
a Bode plot and constructing a Nyquist plot, the gain and phase margin locations are
easily reasoned out. The radius of the Nyquist plot is the magnitude (no longer in
dB) and the angle from the origin is the phase shift. The gain margin then falls on the
negative real axis since the phase angle when the plot crosses it is -180 degrees. The
magnitude at this point cannot be greater than one if the system is stable, so the
distance that the plot is inside the -1 point is the gain margin. This also confirms the
Nyquist stability theorem just developed since if the plot is to the right of -1
(positive gain margin) the path never encircles the -1 point and the theorem also
confirms that the system is stable. Where the theorem sees extended use is when
multiple loops are found and the gain margin is less clear.
The phase margin on the Nyquist plot will occur when the distance that the
plot is from the origin is equal to one. This corresponds to the crossover frequency (0
dB) in the Bode plot. The amount by which the angle from the origin falls short of
-180 degrees is the phase margin. These measurements are shown for a portion of a
Nyquist plot in Figure 35. Since the phase equals -180 degrees on the negative
real axis, the phase margin, φ, is the angle between the line from the origin to the
point where the plot crosses inside the circle defined as having a radius of one and
the negative real axis. Remember that on the Bode plot this corresponds to the
frequency where the magnitude plot passes through 0 dB (the crossover frequency,
oc ).
The gain margin is represented linearly (not in dB) and can be found by the
ratio between lengths a and b where
K2/K1 = (a + b)/b = 1/b
GM(dB) = 20 log(K2/K1) = 20 log K2 - 20 log K1
Since the axes are now linear, the increase in gain is simply a ratio of the
lengths between the point at -1 and where the line crosses the real axis. In other
words, if a gain of K1 gets the line to cross as shown in the figure (a distance b from
the origin), then K2 is the gain required (allowed) to get us a distance |a + b| = 1
from the origin before the system goes unstable. Since this data is plotted linearly,
the ratio of the gains is equal to the ratio of the lengths. For example, if b = 1/2 and the
current gain K1 on the system is 5, then K2 can be twice that of K1 , or equal to 10,
before the system becomes unstable and the line is to the left of the -1 point. To report the
gain margin with units of decibels, we can take the log of the ratio or of the differ-
ence between the logs of the two gains and multiply by 20.
To summarize issues regarding stability in the frequency domain, it is better to
rely on phase margin than gain margin as a measure of stability. In most systems
they will both provide equivalent measures and converge to the same point on both
the Bode and Nyquist plots when the system becomes marginally stable. Under some
conditions this is not true and the gain margin may indicate a stable system when in
fact the system is unstable. Gain margin is often thought of as the amount of possible
increase in gain before the system becomes unstable. This is easy to visualize on Bode
plots since only the vertical position of the magnitude plot is changed. Phase margin
is commonly related to the amount of time delay possible in the system before it
becomes unstable. Time delays change the phase and not the magnitude of the
system, and the system is classified as a nonminimum phase system. Finally, both
Bode plots and Nyquist plots contain the same information but in different layouts.
The concepts of stability margins apply equally to both. In addition, Nyquist plots
can be extended even further using the Nyquist stability theorem to determine if
there are any poles or zeros in the right-hand plane. The next example seeks to
review the measures of stability used in the different system representations and
show that they all convey similar information, each with different strengths and
weaknesses.
EXAMPLE 4.11
For the system represented by the block diagram in Figure 36:
a. Develop the root locus, Bode, and Nyquist plots.
b. Determine the gain K where the system becomes unstable using
1. The root locus plot.
2. The gain margin from the Bode plot.
3. The gain ratio from the Nyquist plot.
c. Draw each plot again using the new gain.
Part A: To develop the root locus plot, we will follow the guidelines presented in the
previous section. The system has three poles (0, -2, and -4) and no zeros. Therefore
it will have three asymptotes with angles of ±60 and 180 degrees. The asymptotes
intersect the real axis at s = -2 and the break-away point is calculated to be at
s = -0.845. This matches well with the valid sections of real axis that include the
segment between the poles at 0 and -2 and to the left of the pole at -4. This allows
the root locus plot to be drawn as shown in Figure 37. Since the system being
examined has three more poles than zeros, two of the asymptotes enter the RHP
and the system will go unstable as K becomes large.
Plotting the different open loop factors found in the transfer function develops
the equivalent Bode plot. We have an integrator and two first-order factors, one with
τ = 0.5 seconds and one with τ = 0.25 seconds. This means that we have a low
frequency asymptote of -20 dB/dec, a break to -40 dB/dec at 2 rad/sec, and a
break to the high frequency asymptote of -60 dB/dec at 4 rad/sec. The phase
angle begins at -90 degrees and ends at -270 degrees. The resulting Bode plot
with the gain and phase margins labeled is shown in Figure 38.
Finally, we can develop the Nyquist plot from the data contained in the Bode
plot just developed. At very low frequencies, the denominator approaches zero and
the steady-state gain goes to infinity. The initial angle on the Nyquist plot begins at
-90 degrees with a final angle of -270 degrees. The distance from the origin is equal
to 1 at the crossover frequency (M = 0 dB), greater than 1 at lower frequencies, and
less than 1 at greater frequencies. The magnitude goes to zero as the frequency
approaches infinity. This can be represented as the Nyquist plot given in Figure 39.
Part B: With the three plots now completed, let us turn our attention to determining
from each plot where the system goes unstable. For the root locus plot the preferred
method is to apply the Routh-Hurwitz criterion and solve for the gain K where the
system crosses over into the RHP, thus becoming unstable. With the characteristic
closed loop equation equal to
CE = s^3 + 6s^2 + 8s + 8K = 0

the Routh array is

s^3 | 1              8
s^2 | 6              8K
s^1 | (48 - 8K)/6    0
s^0 | 8K
When K is greater than or equal to 6, the s^1 entry in the first column becomes zero
or negative and the system is no longer stable.
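Although the book works this by hand (and later in Matlab), the Routh test is easy to check numerically; a minimal Python sketch (illustrative only, not the book's method) of the first column for CE = s^3 + 6s^2 + 8s + 8K:

```python
# Illustrative sketch: first column of the Routh array for
# CE = s^3 + 6s^2 + 8s + 8K and the critical gain K = 6.

def routh_first_column(K):
    s3 = 1.0
    s2 = 6.0
    s1 = (6.0 * 8.0 - 1.0 * 8.0 * K) / 6.0   # (48 - 8K)/6
    s0 = 8.0 * K
    return [s3, s2, s1, s0]

def is_stable(K):
    # Stable only if every first-column entry is strictly positive.
    return all(entry > 0 for entry in routh_first_column(K))

assert is_stable(5.9)
assert not is_stable(6.0)   # the s^1 entry reaches zero: marginal stability
assert not is_stable(7.0)
```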
From the Bode plot, where the gain margin has been graphically determined as
15 dB, we see that if the magnitude plot is raised vertically by 15 dB the system
becomes unstable. For this system both the gain margin and phase margin go to zero
at the same point. We can find what increase in gain is allowable by solving for the
gain resulting in 15 dB of increase (gains multiply linearly but add on the Bode plot
due to the dB scale):
20 log K = 15 dB
K = 10^(15/20) ≈ 5.6
Since the Bode plot uses approximate straight line asymptotes, the gain K varies
slightly from the gain solved for with the root locus plot and using the Routh-
Hurwitz criterion.
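The exact gain margin can also be checked numerically from the open loop transfer function of this example, G(s) = 8/(s(s+2)(s+4)); a short Python sketch (illustrative, not the book's Matlab):

```python
# Sketch: exact gain margin of G(s) = 8/(s(s+2)(s+4)), free of the
# straight-line asymptote approximation used on the Bode plot.
import math

def G(w):
    s = 1j * w
    return 8.0 / (s * (s + 2.0) * (s + 4.0))

# Phase crossover: bisect on Im{G(jw)}, which changes sign where the
# phase passes through -180 degrees.
lo, hi = 1.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if G(mid).imag < 0:
        lo = mid
    else:
        hi = mid
w_pc = 0.5 * (lo + hi)

gm_db = 20.0 * math.log10(1.0 / abs(G(w_pc)))
assert abs(w_pc - math.sqrt(8.0)) < 1e-6    # 2.8284 rad/sec
assert abs(gm_db - 15.563) < 0.01           # vs. the 15 dB graphical estimate
assert abs(10.0 ** (gm_db / 20.0) - 6.0) < 1e-6
```

The exact margin of 15.563 dB corresponds to a gain of 6, matching the Routh-Hurwitz result; the 15 dB read from the asymptotic plot gives the slightly low estimate of 5.6.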
The allowable gain determined from the Nyquist plot is found by taking the
reciprocal of the distance between the origin and the point where the plot crosses
the negative real axis. Using the gain approximated from the Bode plot implies
that this distance should be about 1/5 of the total length between 0 and -1 on
the negative real axis.
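Evaluating G(jω) at the phase crossover frequency confirms where the Nyquist plot crosses the negative real axis (Python sketch, illustrative only):

```python
# Sketch: the Nyquist plot of G(s) = 8/(s(s+2)(s+4)) crosses the
# negative real axis at -1/6, so the gain may grow by a factor of 6
# before the plot reaches the critical point at -1.
import math

w = math.sqrt(8.0)                       # phase crossover frequency
s = complex(0.0, w)
G = 8.0 / (s * (s + 2.0) * (s + 4.0))

assert abs(G.imag) < 1e-12               # the point lies on the real axis
assert abs(G.real + 1.0 / 6.0) < 1e-12   # crossing at -1/6
assert abs(1.0 / abs(G.real) - 6.0) < 1e-9
```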
Part C: To complete this example, let us redraw the plots after the total gain in the
system is multiplied by 6. Only the Bode plot and the Nyquist plot need to be
updated since the gain is already varied when creating the root locus plot, which
contains the condition where the system goes unstable. In other words, we move
along the root locus paths by changing the gain K, while the Bode and Nyquist plots
will be different for any unique point along the path. In this part of the example, our
goal is to plot the Bode and Nyquist plot corresponding to the point where the root
locus plot crosses into the RHP. The root locus plot, included for comparison, is
shown in Figure 40 with the Bode and Nyquist plots at marginal stability conditions,
K = 6.
To conclude this section, let us work the same example except that we will
answer the questions using Matlab to confirm and plot our results.
Analog Control System Performance 185
Figure 40 Example: Comparison of marginal stability plot conditions: (a) root locus plot;
(b) Nyquist plot; (c) Bode plot.
EXAMPLE 4.12
For the system represented by the block diagram in Figure 41, use Matlab to answer
the questions posed in Example 4.11.
Part A: To generate the plots using Matlab, we can define the system once and use
the command sequence shown to generate each plot. Each command used has many
more options associated with it. To see the various input/output options, type
>> help command
and Matlab will show the comments associated with each command.
%Define System
num=8;
den=[1 6 8 0];
sys=tf(num,den);
rlocus(sys)
The rlocus command returns the root locus plot shown in Figure 42.
Using the rlocfind command brings up the current root locus plot and allows us
to place the cursor on any point of interest along the root locus plot and find the
associated gain K at that point. Placing it where the paths cross into the RHP returns
K = 6, verifying our analytical solution; it also returns the pole locations that you
clicked on.
The bode command generates the following Bode plot for our system, and
when followed by margin, will calculate and label the gain and phase margins for
the system, shown in Figure 43. Here we see that the gain margin equals 15.563 dB,
close to our approximation of 15 dB. The phase margin is 53.4 degrees at a frequency
of 0.8915 rad/sec. If we calculate the gain K required to shift the magnitude plot up
by 15.563 dB, we get
20 log K = 15.563 dB
K = 10^(15.563/20) = 6
This gain of K = 6 agrees with the root locus plot from earlier.
The nyquist command is used to generate our final plot, as shown in Figure 44.
To illustrate the condition of stability around the point -1, the axes have to be set to
zoom in on the area of interest. Remember that the plot begins with an infinite
magnitude at -90 degrees.
Applying the Nyquist stability criterion confirms that the system is stable.
There is no encirclement of the point at (-1, 0). The gain margin is also verified
as the plot crosses the negative real axis approximately 1/6 of the way between 0 and
Figure 43 Matlab Bode plot with stability margins (GM = 15.563 dB [at 2.8284 rad/sec],
PM = 53.411 deg [at 0.8915 rad/sec]).
-1 on the negative real axis. This means that we can increase the gain in our system
six times before the plot moves to the left of the point at -1.
Finally, if we increase the numerator of our system from 8 to 48 (increasing the
gain by a multiple of K 6), then we can use Matlab to redraw the Bode (see Figure
45) and Nyquist plots. When generating the Nyquist plot in Figure 46, we can show
one close-up section to verify stability and one overview plot giving the general
shape. With the new gain in the system we see that on the Bode plot the gain margin
and the phase margin both went to zero and the system is marginally stable. On the
Nyquist plot we see that the path goes directly through (-1, 0), also confirming that
our system is marginally stable. Any further increase in gain and the plot will encircle
the point at -1, telling us that we have an unstable system.
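The marginal stability claim can be checked directly on the closed loop characteristic equation (a Python sketch, not the book's Matlab):

```python
# Sketch: with the numerator raised from 8 to 48 (K = 6), the closed
# loop characteristic equation s^3 + 6s^2 + 8s + 48 factors as
# (s + 6)(s^2 + 8): a pole pair sits on the imaginary axis at
# +/- j*sqrt(8), i.e. sustained oscillation at 2.8284 rad/sec.
import math

def ce(s):
    return s**3 + 6.0 * s**2 + 8.0 * s + 48.0

for root in (complex(-6.0), 1j * math.sqrt(8.0), -1j * math.sqrt(8.0)):
    assert abs(ce(root)) < 1e-9

# The oscillation frequency equals the Bode phase crossover frequency.
assert abs(math.sqrt(8.0) - 2.8284) < 1e-4
```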
By now we have a better understanding of how different representations
can be used to determine system stability. Hopefully the comparisons have convinced
us that the same information is simply conveyed using different representations. With
the same physical system (as in the examples) we certainly would expect each method
to find the same stability conditions. Different methods have different strengths and
weaknesses. Many times the choice of which one to use is determined by the infor-
mation available about the system and what form it is in. Different computer
packages also have different capabilities. As long as we understand how they relate,
we should be able to design using any one of the methods presented.
Figure 45 Example: Matlab Bode plot at marginal stability (GM = 0 dB, PM = 0 [unstable
closed loop]).
4.4.3.4 Closed Loop Responses from Open Loop Data in the Frequency
Domain
As we discussed earlier, most frequency domain methods are done using the open
loop transfer function for the system. Ultimately, it is the goal that we close the loop
to modify and enhance the performance of the system. Since we have the information
already, albeit representing the open loop characteristics, we would like to directly
use this information and infer what the expected results are when we close the loop.
Of course we can always close the loop and redraw the plots for the closed loop
transfer function, but that duplicates some of the work already completed. This is
one advantage of the root locus plot in that the closed loop response is determined
from the open loop system transfer function and the complete range of possible
responses is quickly understood.
If we have a unity feedback system with H(s) = 1, then we can see the relation-
ship between the open loop and closed loop system response by using the Nyquist
diagram, as illustrated in Figure 47. If we have an open loop system represented by
G(s) and unity feedback, then the closed loop system is given as

C(s)/R(s) = G(s)/(1 + G(s))
Figure 47 Open loop versus closed loop response with Nyquist plot.
The magnitude of the denominator, |1 + G(s)|, can also be found on the Nyquist plot as the
distance from the point (-1, 0) to the point on the plot. Now we know both the
numerator and denominator on the Nyquist plot and if we calculate various values
around the plot, we can construct our closed loop frequency response. In the same
way our closed loop phase angle can be found as

phi_CL = phi_OL - beta

where beta is the angle of the vector from (-1, 0) to the point on the plot (see Figure 47).
Instead of having to perform these calculations for each point, it is common to use a
Nichols chart where circles of constant magnitude and phase angle are plotted on the
graph paper. After we plot our open loop response (as done for a Nyquist plot), we mark
each point where our plot crosses the constant magnitude and phase lines for the
closed loop. All that remains is to simply record each intersecting point and con-
struct the closed loop response.
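What the Nichols chart does graphically can be sketched numerically; assuming unity feedback and reusing the earlier example plant G(s) = 8/(s(s+2)(s+4)) purely for illustration:

```python
# Sketch: closed loop frequency response built point by point from
# open loop values, the construction a Nichols chart encodes:
# M_cl = |G| / |1 + G| and phi_cl = phi_ol - beta, where beta is the
# angle of the vector 1 + G (from the -1 point to G(jw)).
import cmath, math

def G(w):
    s = 1j * w                              # example open loop plant
    return 8.0 / (s * (s + 2.0) * (s + 4.0))

def closed_loop(w):
    return G(w) / (1.0 + G(w))              # C/R = G/(1 + G)

for w in (0.5, 1.0, 2.0, 4.0):
    g = G(w)
    t = closed_loop(w)
    # Magnitude: |G| divided by the distance from (-1, 0) to G(jw).
    assert abs(abs(t) - abs(g) / abs(1.0 + g)) < 1e-12
    # Phase: open loop phase minus beta (compared modulo 2*pi).
    beta = cmath.phase(1.0 + g)
    diff = cmath.phase(t) - (cmath.phase(g) - beta)
    assert abs((diff + math.pi) % (2.0 * math.pi) - math.pi) < 1e-12
```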
Perhaps the most common parameters specified for open loop frequency plots
(Bode and Nyquist) are the gain and phase margins, as defined and used throughout
this section as a measure of stability. If we have dominant second-order poles, then
we can also use the gain and phase margins as indicators of closed loop transient
responses in the time domain. As we will see, the phase margin directly relates to the
closed loop system damping ratio for second-order systems given by the open loop form

G(s) = ωn^2 / (s(s + 2ζωn))

When we close the loop, we get the common form of our second-order transfer
function:

C(s)/R(s) = ωn^2 / (s^2 + 2ζωn s + ωn^2)
The process of relating our closed loop transfer function to the phase margin is as
follows. If we solve for the frequency where |G(s)| is equal to 1 by letting s = jω, then
we have located the point where our phase angle is to be measured. We now sub-
stitute this frequency where the magnitude is one into the phase relationship and
solve for the phase margin as a function of the damping ratio. The result is

Phase margin: φ = tan^-1 [ 2ζ / sqrt( sqrt(1 + 4ζ^4) - 2ζ^2 ) ]
This result gives the closed loop response in terms of the system's natural frequency
and damping ratio. These parameters have been discussed and applied frequently in
previous sections.
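The relation is easy to evaluate; a Python sketch (illustrative), which also recovers the familiar rule of thumb that the phase margin in degrees is roughly 100ζ for light damping:

```python
# Sketch: phase margin versus damping ratio for the dominant
# second-order form,
# phi = atan( 2*zeta / sqrt( sqrt(1 + 4*zeta^4) - 2*zeta^2 ) ).
import math

def phase_margin_deg(zeta):
    inner = math.sqrt(1.0 + 4.0 * zeta**4) - 2.0 * zeta**2
    return math.degrees(math.atan(2.0 * zeta / math.sqrt(inner)))

assert abs(phase_margin_deg(0.5) - 51.8) < 0.1
# Near-linear for small damping: PM (degrees) is roughly 100*zeta.
for z in (0.1, 0.2, 0.3, 0.4):
    assert abs(phase_margin_deg(z) - 100.0 * z) < 5.0
```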
A word of caution is in order. Remember that these figures are for systems that
are well represented as second order (dominant second-order poles). For general
systems of higher orders and systems including zeros, our better alternatives are to
close the loop and redo our analysis, simulate the system on a computer, or develop a
transfer function and use a technique like root locus. If these conditions are not met,
the approximations become less certain and the system is worthy of a more thorough
analysis.
4.5 PROBLEMS
4.1 What are the three primary (fundamental) characteristics that are used to
evaluate the performance of control systems?
4.2 What external factors determine, in part, whether or not we should consider
using an open loop or closed loop controller?
4.3 Given the physical system model in Figure 50, answer the following questions
(see also Example 6.1).
a. Construct the block diagram representing the hydraulic control system.
b. Can the system ever go unstable?
c. To decrease the steady-state error, you should increase the magnitude of
which of the following variables? [ M, B, Kv, K, Ap, a, b ]
4.4 List a possible form that a disturbance might take in each of the following
components of a control system (i.e., electrical noise, parameter fluctuation, etc.).
a. Amplifier
b. Actuator
c. Physical system
d. Sensor/transducer
4.5 A system transfer function is given below. What is the final steady-state value
of the system output in response to a step input with a magnitude of 10?
G(s) = (5s^2 + 1) / (23s^3 + 11s + 10)
4.6 Using the system in Figure 51, what is the initial system output at the time a
unit step input is applied, the steady-state output value, and steady-state error?
4.7 A controller is added to a machine modeled as shown in the block diagram in
Figure 52.
a. Determine the transfer function between C and R. What is the steady-state
error from a unit step input at R, as a function of K?
b. Determine the transfer function between C and D. What is the steady-state
error from a unit step input at D, as a function of K?
4.8 The transfer function for a unity feedback [H(s) = 1] control system is given
below. Determine
a. The open loop transfer function.
b. The steady-state error from a unit step input.
a. Develop the Routh array for the polynomial, leaving K as a variable in the
array and determine the range of K for which the system is stable.
b. At the maximum value of K, what are the values of the poles on the ima-
ginary axis and what is the type of response?
4.14 Given the block diagram in Figure 56, use root locus techniques to answer the
following questions.
a. Sketch the root locus plot.
b. Use the magnitude condition and your root locus plot to determine the
required gain K for a damping ratio of 0.866. Show your work.
c. Letting K = 2 for this question, what is the steady-state error due to a unit
step input at R?
4.15 Develop the root locus plot and required parameters for the following open
loop transfer function.
GH = (s + 4) / (s(s + 3)(s^2 + 2s + 4))
4.16 Develop the root locus and required parameters for the following system.
1 + KG(s)H(s) = 1 + K / (s^4 + 12s^3 + 64s^2 + 128s)
              = 1 + K / (s(s + 4)(s + 4 - j4)(s + 4 + j4)) = 0
a. List the results that are obtained from each root locus guideline.
b. Give a brief sentence describing why or why not the dominant poles assump-
tion is valid for this system.
4.17 Develop the root locus plot and required parameters for the following open
loop transfer function.
GH(s) = s^2 / ((s^2 + 3s + 3)(s^2 + 8s + 12))
4.18 Develop the root locus plot and required parameters for the following open
loop transfer function.
GH(s) = (s^2 + 6s + 8) / ((s + 7/2)(s + 4)(s + 3)(s^2 + 2s + 10))
After the plot is completed, describe the range of system behavior as K is increased.
4.19 Develop the root locus plot and required parameters for the following open
loop transfer function. Use only those calculations that are required for obtaining
the approximate loci paths.
GH = (s + 3)(s + 4)(s + 1.5 + 0.5j)(s + 1.5 - 0.5j) / ((s + 2)(s + 1)(s + 0.5)(s + 1))
After the plot is completed, describe the range of system behavior as K is increased.
4.20 Given the following system, draw the asymptotic Bode plot (open loop) and
answer the following questions. Clearly show the final resulting plot.
GH(s) = 1 / (s^2 + 2s + 1)
a. What is the phase margin φm?
b. What is the gain margin in decibels?
c. Is the system stable?
d. Sketch the Nyquist plot.
4.21 Using the Bode plot given in Figure 57, answer the following questions.
a. What is the open loop transfer function?
b. What is the phase margin?
c. Sketch the Nyquist plot.
5.1 OBJECTIVES
Provide an overview of analog control system design.
Design and evaluate PID controllers.
Develop root locus methods for the design of analog control systems.
Develop frequency response methods for the design of analog control sys-
tems.
Design and evaluate phase lag and phase lead controllers.
Design a proportional feedback controller using state space matrices.
5.2 INTRODUCTION
Analog controllers may take many forms, as this chapter shows. However, the
analysis and design procedures, once the transfer functions are obtained, are nearly
identical. A proportional controller might utilize a transducer, operational amplifier,
and amplifier/actuator and yet perform the same control action as a system utilizing
a set of mechanical feedback linkages. The trend has certainly been toward fully
electronic controllers since they have several advantages over their mechanical coun-
terparts. Transducers are relatively cheap, computing power continues to experience
exponential growth, electrical controllers consume very little power, and the cost of
upgrading controller algorithms or changing parameter gains is only the cost of
design time that all updated systems would require regardless. Once the move is
made to digital, a new algorithm is installed by simply downloading it to the corre-
sponding microcontroller.
The algorithms presented here are the mainstay of many control projects today
and are capable of solving most of the problems they encounter. Advanced control
algorithms certainly have many advantages, but the basic controllers continue to
make up the majority of controllers in most applications.
components are not as capable of handling larger power levels (i.e., microproces-
sors).
Parallel compensation, since it occurs in the feedback path, can sometimes
require fewer components and amplifiers since the available power levels are often
larger. For example, as we see in the next chapter dealing with implementation of
control systems, mechanical controllers can operate without any electronics and
directly utilize the physical input from the operator to control the system. The
way these systems are implemented places the compensator (mechanical linkages
in some cases) in the feedback path.
Remember that although classified as two distinct configurations, more com-
plex systems with nested feedback loops may contain elements of both. The impor-
tant thing to note is that regardless of the layout of the system, the design tools and
procedures (i.e., closing the loop, placing system poles, etc.) remain the same. The
exception is that in some systems where the gain is not found in the characteristic
equation as a direct multiplier [1 + KG(s)H(s)], as required for using root locus
design techniques, some extra manipulation may be required to use these tools.
Finally, in keeping with the layout of this text, both s-plane and frequency
methods are discussed together when discussing the design of various controllers. To
be effective designers, we should be comfortable with both and able to see the relation-
ships that exist between the different representations. Ultimately, our compensator
should effectively modify our physical system response to give us the desired
response in the time domain, as in the time domain we see, hear, and work with
the system. Whether it is designed in using root locus or Bode plots, our design
criteria almost always are related back to a time response. As a last comment, it
would be wrong to leave this chapter (and text) with the impression that we always
need to compensate our system to achieve our desired
response. Implementing a control system is not meant to replace the goal of design-
ing a good physical system with the desired characteristics inherent in the physical
system itself. Although many poorly designed physical systems are improved
through the use of feedback controls, even better performance can be achieved
when the physical system is properly designed. In other words, a poor design of
an airplane may result in an airplane that is physically unstable. Even though we can
probably stabilize the aircraft through the use of feedback controls (and time,
money, weight, etc.), it does not hide the fact that the original design is quite flawed
and should be the first improvement made. It is generally assumed when discussing
control systems that the physical system is constrained to be the way it is and our
task is to make it behave in a more desirable manner.
physics, with little change in parameters during operation, and that are fairly linear
over the operating range, the PID algorithm handles the job as capably as most
others. The different modes of PID (proportional, integral, derivative) give us
options to modify all the steady-state and transient characteristics examined in
Chapter 4.
The basic block diagram representing a PID controller is shown in Figure 3.
The output of the summing junction represents the total controller output that
provides the input to the physical system. Summing the three control actions results
in the following transfer function and time equivalent equations:
U(s)/E(s) = Kp + Ki/s + Kd s        u(t) = Kp e(t) + Ki ∫ e(t) dt + Kd de(t)/dt
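In discrete time the same three terms appear as a sum, an accumulated sum, and a difference; a minimal Python sketch (hypothetical gains, illustrative rather than a tuned controller):

```python
# Sketch: a minimal discrete-time PID,
# u = Kp*e + Ki*integral(e) + Kd*de/dt, using rectangular integration
# and a backward difference for the derivative.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=2.0, ki=1.0, kd=0.5, dt=0.1)
u1 = pid.update(1.0)   # P: 2.0, I: 0.1, D: 0.5*(1-0)/0.1 = 5.0
assert abs(u1 - 7.1) < 1e-12
u2 = pid.update(1.0)   # derivative term vanishes on constant error
assert abs(u2 - 2.2) < 1e-12
```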
the error in the integral term to zero. Many times it is possible to determine whether
the oscillations are from integral windup or excessive proportional gain by noticing
the frequency of oscillation. The proportional gain, when too large, causes the
system to oscillate near its natural frequency, while the integral windup frequency
is commonly much lower and less aggressive. The general tuning procedure is to
use the proportional gain to handle the large transient errors and the integral gain to
eliminate the steady-state errors with only minor effects on the stability.
Implementing the integral portion of a controller is common and generally proves
to be quite effective.
Derivative gains, of the three discussed, are capable of simultaneously helping
and hurting the most. A derivative control action can never be used alone since it
only has an output when the system is changing. Hence, a derivative controller has
no information about the absolute error in the system; you could be a mile from your
desired position and as long as you do not move, the derivative output is zero and
you remain a mile from where you want to be. Therefore, it must always be used in
conjunction with proportional controllers. The benefit is that a derivative gain, since
it adds a zero to the system, can be used to attract errant loci paths and thus
contribute to the stability of the system. It anticipates large errors and attempts to
correct them before they fully develop, whereas proportional and integral gains are
reactive and only respond after an error has developed. In many cases
it acts as effective damping in the system, simulating damping effects without the
energy losses associated with adding a damper.
This is the advantage; the disadvantage is how in practice it tends to saturate
actuators and amplify noisy signals. If the system experiences a step input, then by
definition the output of the derivative controller is infinite and will saturate all the
controllers currently available. Second, since the output is the derivative of the error,
or the slope of the error signal, the derivative output can have severely large swings
between positive and negative values and cause the system to experience chatter. The
net benefit is lost when the derivative term, although stabilizing the overall system,
injects enough amplified signal noise into the entire system that it chatters.
The effect is shown in Figure 6 and is why a low-pass filter is commonly used with
derivative controllers.
Even though the trend of the error signal always has a positive slope, the
derivative of the actual error signal has large positive and negative signal swings.
The low-pass filter should be chosen to allow the shape of the overall signal to
remain the same while the higher frequency noise is filtered and removed. This allows
the actual derivative output to more closely approach the desired.
Several variations of the PID controller are used to overcome the problems
with the derivative term. Even if we could remove all the noise from the signal, we
would still saturate the amplifier/actuator any time a step input occurs on the com-
mand. Since the physical system does not immediately respond, the error input to the
control also becomes a step function. The derivative term, in response to the step
input, attempts to inject an impulse in the system. When we switch to different set
points, this resulting impulse into the system is sometimes termed set-point-kick. If
our actual feedback signal is noise free, then we can counteract the step input
saturation problem using approximate derivative schemes to modify the derivative
term, Kd s, to become
Kd s  →  Kd s / ((1/N)s + 1)
where the value of N can be adjusted to control the effects. A common value is
N = 10. With this modification a step input causes not an infinite output but a decay-
ing pulse, as shown in Figure 7, where a step response of the modified approximate
derivative term is plotted for several values of N. When Kd = 1, as shown in Figure
7, the effects of N are easily seen since the peak value is simply equal to the value of
N for that plot. Notice though that the time scales are also shifted, and as N
increases the response decays more quickly (the time constant is 1/N). As
the value of N approaches infinity, the approximate derivative output approaches
that of a true derivative. The output of a true derivative would have an infinite
magnitude for an infinitesimal amount of time.
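The step response of the approximate derivative has the closed form Kd N e^(-Nt), which makes the peak and decay behavior just described easy to verify (Python sketch, illustrative):

```python
# Sketch: step response of the approximate derivative Kd*s/((1/N)s + 1),
# which is u(t) = Kd*N*exp(-N*t): peak Kd*N at t = 0, time constant 1/N.
import math

def approx_derivative_step(t, kd, n):
    return kd * n * math.exp(-n * t)

kd = 1.0
for n in (5.0, 10.0, 20.0):
    # Peak equals N when Kd = 1, as in the figure.
    assert abs(approx_derivative_step(0.0, kd, n) - n) < 1e-12
    # After one time constant (t = 1/N) the pulse decays to about 37%.
    ratio = approx_derivative_step(1.0 / n, kd, n) / n
    assert abs(ratio - math.exp(-1.0)) < 1e-12

# Larger N decays faster, approaching the ideal (impulse-like) derivative.
assert approx_derivative_step(0.1, kd, 20.0) < approx_derivative_step(0.1, kd, 5.0)
```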
Other methods can be used to deal with both set-point-kick (step inputs) and
noise in the signals. These methods, however, some of which are summarized here,
require a change to the structure of our system and additional components. One
alternative to deal with the problem of differentiating a step input is to move the
derivative term to the feedback path. Since the output of the physical system will
never respond as abruptly as a step, the derivative term is less likely to saturate the
components in the system. In other words, the output of the system will not have a
slope equal to infinity as a true step function does. This modification, PI-D, and
resulting block diagram is shown in Figure 8.
Now what we see is that the error is still fed forward so that it directly multi-
plies by Kp and is integrated by Ki/s, while the derivative term only adds the effects from
the rate of change of the physical system output, not of the error signal. To even
further reduce abrupt changes in the signal that the controller sends to the system,
we might choose to also include the proportional term in feedback loop as shown in
Figure 9. The I-PD is similar to the PI-D except that now only the integral term
directly responds to the change in error. Even if a step input is introduced into the
system, the integral of the step is a ramp and relatively easy for the system to respond
to without saturating. The proportional term is fed directly through from the feed-
back along with the derivative of the feedback.
The one problem we still have with these alternatives is noise in the feedback
signal itself. If we have a noisy signal, the derivative term will still amplify the
noise and inject it back into the system. An alternative to reduce both step input
and noise-related derivative problems is to use a velocity sensor. When velocity
sensors are used (assuming position is controlled) neither the error nor feedback
signal themselves are differentiated and the velocity signal acts as the derivative of
the position error, as shown in Figure 10. When the closed loop transfer function is
derived, the derivative gain, Kd , adds to the system damping and allows us to
stabilize the system. Since the feedback signal is from the velocity sensor and
not obtained by differentiating the position signal, the problems with noise ampli-
fication are minimized. There are many variations to this model depending on
components, access to signals, and system configurations. The remaining sections
help us design these controllers.
our plot to achieve our desired response. Also, if we recognize that we add two zeros
and only one pole, then our compensated system will have one less asymptote than
our open loop system. This will change our asymptote angles and the corresponding
loci paths. It is this ability to quickly and visually (graphically) see the effects of
adding the compensator that makes the root locus technique so powerful, not only
for PID compensators but also for most others.
Another method for tuning multiple gains is through the use of contour plots. If
we develop a root locus plot varying Kp for various values of integral and/or deri-
vative gains, we can map out the combination that suits our needs the best. This will
be seen in an example at the end of this section. This allows us to partially overcome
one limitation of the graphical rules for plotting the root loci, namely that the rules
require the varied parameter to be the gain on the system. Here we still vary
the gain, but we do so multiple times, changing either the integral or derivative gain
between the plots. When we are done, we get families of curves showing the effects of
each gain. We pick the curve that best approaches our desired locations for the poles
of the system.
Finally, at times it is possible to close the loop and analytically determine the
gains that will place the poles at their desired locations. By comparing coefficients of
the characteristic equations arising from closing the loop with gain variables and the
coefficients from the desired polynomial, we can determine the necessary gains. All
these methods are limited to the accuracy of the model being used, and at times the
best approach is to use the knowledge in a general sense as giving guidelines for
tuning through the typical trial and error approaches. Methods are presented in
later chapters for cases where this approach results in more unknowns than equa-
tions.
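As a small illustration of coefficient matching (using the plant of Example 5.2, G(s) = 4/((s+1)(s+5)), with proportional control; the desired ωn below is an arbitrary choice):

```python
# Sketch: matching coefficients of the closed loop characteristic
# equation s^2 + 6s + (5 + 4*Kp) against a desired polynomial
# s^2 + 2*zeta*wn*s + wn^2. With one gain we can match only one
# coefficient; here the s^1 term is fixed, so choosing wn sets zeta.
import math

def solve_kp(wn):
    # 5 + 4*Kp = wn^2  ->  Kp = (wn^2 - 5) / 4
    return (wn**2 - 5.0) / 4.0

wn = 10.0                     # arbitrary desired natural frequency
kp = solve_kp(wn)
zeta = 6.0 / (2.0 * wn)       # forced by the fixed s^1 coefficient
assert abs(kp - 23.75) < 1e-12
assert abs(zeta - 0.3) < 1e-12

# Check: the complex pole pair of s^2 + 6s + (5 + 4*Kp) has |pole| = wn.
assert 6.0**2 - 4.0 * (5.0 + 4.0 * kp) < 0          # complex poles
assert abs(math.sqrt(5.0 + 4.0 * kp) - wn) < 1e-12  # |pole| = wn
```

This also illustrates the caveat in the text: with only one gain there are more desired coefficients than unknowns, so full pole placement needs additional controller terms.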
With the computing power now available to most users, methods exist to easily
plot the root locus plots as functions of any changing parameter in our system,
whether it be an electronic gain or the physical size or mass of a component in
the system. These methods are presented more fully in Chapter 11. In this section,
however, we limit our discussion to the design of PID controllers using root locus
techniques.
To illustrate the principles of designing PID controllers using root locus tech-
niques, the remainder of this section consists of examples that are worked for various
designs and design goals.
EXAMPLE 5.1
A machine tool is not allowed any overshoot or steady-state errors to a step input.
The system is represented by the open loop transfer function given below.
G(s) = K (s + 6) / (s(s + 4)(s + 5))
Part B. Develop the root locus plot. Summarizing the rules, we have three poles
at 0, -4, and -5 and one zero at -6. Therefore, we have two asymptotes with angles
of ±90 degrees. The asymptote intersection point will occur at s = -1.5. The valid
sections of real axis are between 0 and -4 and between -5 and -6. The break-away
point occurs at -1.85. Now we are ready to draw the root locus plot in Figure 12.
Part C. Solve the gain K where we have the minimum settling time without any
overshoot. The minimum settling time will occur when we have placed all roots as far
to the left as possible. Remember that since all individual responses add for linear
systems, the slowest response will also determine the settling time for the system. To
avoid any overshoot, we must keep all poles on the real axis. The point that meets
both conditions is the break-away point at 1.85. To find our controller gain for this
point, we apply the magnitude condition.
K |s + z1| / (|s + p1||s + p2||s + p3|) = K |6 - 1.85| / (|-1.85||4 - 1.85||5 - 1.85|)
                                       = K (4.15 / 12.53) = 1

K ≈ 3
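The magnitude condition arithmetic is easy to verify (Python sketch of this example's numbers):

```python
# Sketch: magnitude condition at the break-away point s = -1.85 for
# G(s) = K(s + 6)/(s(s + 4)(s + 5)); |KG(s)| = 1 gives K of about 3.
s = -1.85
num = abs(s + 6.0)                          # |s + z1| = 4.15
den = abs(s) * abs(s + 4.0) * abs(s + 5.0)  # 1.85 * 2.15 * 3.15
K = den / num

assert abs(num - 4.15) < 1e-12
assert abs(den - 12.53) < 1e-2
assert abs(K - 3.0) < 0.05
```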
Now we have achieved our design goals for the system using a proportional
controller. The steady-state error from a step input is zero because it is a type 1
system, and with a gain of 3 on the proportional controller all the system poles are
negative and real and the system should never experience overshoot (it does become
possible if errors exist in the model). The system settling time can be found by
knowing that within four time constants (of the slowest pole) the response is within
2% of the final value. The time constant is the inverse of 1.85 and the settling time is
then calculated as 4/1.85 ≈ 2.2 seconds.
If we wish to have our system settle more quickly, we realize that we will not be
able to achieve this using only a proportional controller. With a proportional
controller we can only choose points along the root locus plot; we cannot change the
shape of the plot. Later examples will illustrate how we can move the poles to more
desirable locations.
EXAMPLE 5.2
Using the system transfer function below, determine
a. The block diagram for a unity feedback control system;
b. The steady-state error from a step input as a function of Kp using a
proportional controller;
c. The root locus plot using a PI controller;
d. Descriptions of the response characteristics available with the PI control-
ler.
System transfer function:
G(s) = 4 / [(s + 1)(s + 5)]
Part A. Block diagram for a unity feedback control system is given in Figure 13:
Part B. To find the error from a step input using only the proportional gain, we
set Ki to zero and find the closed loop transfer function:
C(s)/R(s) = 4Kp / (s^2 + 6s + 5 + 4Kp)

We apply the final value theorem (FVT) and let R(s) = 1/s to find the error as
e(t) = r(t) - c(t):

c_ss = 4Kp / (5 + 4Kp)  and  e_ss = 1 - c_ss = 5 / (5 + 4Kp)
So for example, if Kp = 5, our steady-state error to a step input would be 0.2.
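As a quick numerical check of this expression (a Python sketch; the helper name is ours):

```python
# Steady-state step error for the unity feedback loop around 4Kp/((s+1)(s+5)),
# from the final value theorem result e_ss = 5/(5 + 4*Kp).
def ess(Kp):
    return 5.0 / (5.0 + 4.0 * Kp)

print(ess(5))   # Kp = 5 reproduces the value quoted in the text
```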
Part C. To demonstrate the effects of going from P to PI, let us draw both plots
on the same s-plane. With proportional only, the root locus plot is very simple:
it falls on the real axis between -1 and -5, with a break-away and asymptote
intersection point at -3. There are two asymptotes at ±90 degrees and the system
never becomes unstable.
Moving to the PI controller, we see that we add a pole at zero, but we also add a
zero, and so we still have two asymptotes at ±90 degrees. We can place the zero
by how we choose our gains Kp and Ki. To illustrate the effects of different zero
locations, we will draw two plots corresponding to two different choices. Since our
overall goal is to eliminate our steady-state error and, if possible, increase the
dynamic response, we will compare both results with the initial P controller. For both
cases we will keep the math simple by choosing zero locations that cancel out a pole.
This is not necessary and in some cases not desired (as when trying to cancel a pole in
the right-hand plane [RHP]).
Case 1: Let Ki/Kp = 1; then the zero is at -1 and cancels the pole at -1. This leaves us
with poles at 0 and -5. The valid section of real axis is between these poles. The
asymptote intersection point and the break-away point are then both at -2.5.
Case 2: Let Ki/Kp = 5; then the zero is at -5 and cancels the pole at -5. This leaves us
with poles at 0 and -1. The valid section of real axis is between these poles. The
asymptote intersection point and the break-away point are both at -0.5.
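The break-away points for both cases can also be located numerically. The following Python sketch (ours, not from the text) sweeps the valid real-axis segment between poles at 0 and -p2 and finds where the locus gain K(sigma) = -sigma(sigma + p2) is maximized:

```python
# Numerical break-away search for a two-pole locus with poles at 0 and -p2:
# along the valid segment, the break-away point maximizes the required gain.
def breakaway(p2, n=100000):
    best_sigma, best_k = 0.0, -1.0
    for i in range(1, n):
        sigma = -p2 * i / n                 # sweep the segment (-p2, 0)
        k = -sigma * (sigma + p2)           # gain needed to place a pole here
        if k > best_k:
            best_k, best_sigma = k, sigma
    return best_sigma

print(breakaway(5.0), breakaway(1.0))       # Case 1 and Case 2 above
```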
To compare the effects, let us now plot all three cases on the s plane as shown in
Figure 14.
Part D. To summarize this example we will comment on both the steady-state
and transient response characteristics for each controller. With the proportional
control we had a type 0 system, and when we closed the loop it verified that the error
is proportional to the inverse of the gain. With the asymptotes at -3,
it had a time constant of 1/3 second and therefore a settling time of 4/3 seconds.
When we added an integral gain, the system became a type 1 system and the
steady-state errors due to a step input become zero. This is true regardless of where
we place the zero using the ratio of Ki =Kp . The transient responses, however, varied.
When we place the zero at -5, our asymptotes intersect at -1/2 and we have a slow
system with a settling time of 8 seconds (2% criterion). Moving the zero in to -1
places our asymptote at -2.5 and our settling time decreases back down to 1.6
seconds, much closer to the original proportional controller. In both cases, however,
adding the integral gain tended to destabilize the system, as we would expect when a
pole is placed at the origin. So we see that in fact the integral gain does drive our
steady-state error to zero for a step input but also hurts the transient response to
different degrees, depending on where we place the zero.
Finally, we must mention the effects of using a zero to cancel a pole. Although in
theory this is easy to do, as just shown, in practice this is nearly impossible. It relies on
having accurate models, linear systems, and no parameter changes. If we are just a
little off (say a zero at -1.1 or -0.9 in the first case), our root locus plot is completely
different, as shown in Figure 15. Even though the basic shape is the same, we now have a
much slower pole that never gets to the left of the zero. If the zero is slightly to the left
of the pole at -1, we have much the same effect but with another break-away (and
break-in) point around the pole at -1. There still remains a pole much closer to the
origin. It is for these reasons that it is not generally wise to try and cancel out an
unstable pole with a zero but is better to use the zero to draw the pole back into the
left-hand plane (LHP). If we do design for pole-zero cancellation, we should always
check what the ramifications are if the zero does not exactly cancel the pole.
EXAMPLE 5.3
Using the block diagram below where the open loop system is unstable, design the
appropriate PD controller that stabilizes the system and provides less than 5%
overshoot in the system.
The block diagram for the PD unity feedback control system is given in Figure
16. With only a proportional controller there is no way to stabilize the open loop
system. We have an open loop pole in the RHP and the asymptote intersection point
and break-away points also fall in the RHP using a proportional controller. This
means that not only will one open loop pole be unstable, but as the gain is increased
the other pole also becomes unstable.
We will now add derivative compensation to the controller which allows us to
use a zero to pull the system back into the LHP. Once we add a zero, we decrease the
original number of asymptotes (2) by one and now have only one falling on the
negative real axis. If the zero is to the left of -1, the loci will break away from the
real axis between -1 and +5 and break in to the axis somewhere to the left of the zero,
thus giving us the response that we desire. For this example let us place the zero at
-5 and draw the root locus plot. Solving the characteristic equation for the gain K
and taking the derivative with respect to s allows us to calculate the break-away and
break-in points as 1.325 and -11.325, respectively. This leads to the root locus plot in
Figure 17.
Since our only performance specification calls for less than 5% overshoot, we
can pick the break-in point for our desired pole locations. This gives us the fastest
settling without any overshoot. Any more or less gain moves at least one pole further
to the right. To solve for the gain required (knowing that Kp/Kd = 5), we apply the
magnitude condition for our desired pole location at s = -11.325. For this example
we will include the system gain of 5 in the magnitude condition, making the K that
we solve for equal to our desired derivative gain.

5K|s - z1| / (|s - p1||s - p2|) = 5K|-11.325 + 5| / (|-11.325 + 1||-11.325 - 5|) = 5K(6.325)/168.556 = 1

Kd = K ≈ 5.3

and

Kp = 5Kd ≈ 26.6
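The break points and the magnitude-condition arithmetic can be checked with a short Python sketch (an illustration; the zero at -5 and the plant 5/((s + 1)(s - 5)) are assumed from this example, and the break points solve s^2 + 10s - 15 = 0):

```python
import math

# Break-away/break-in points: setting dK/ds = 0 for
# K(s) = -(s+1)(s-5)/(5(s+5)) reduces to s^2 + 10s - 15 = 0.
r = math.sqrt(10.0 ** 2 + 4.0 * 15.0)
s_break_away = (-10.0 + r) / 2.0        # between the poles -1 and +5
s_break_in = (-10.0 - r) / 2.0          # left of the zero at -5

# Magnitude condition at the break-in point, plant gain of 5 included;
# the varied locus gain is 5*Kd since Gc*G = 5*Kd*(s+5)/((s+1)(s-5)).
s = s_break_in
Kd = abs(s + 1.0) * abs(s - 5.0) / (5.0 * abs(s + 5.0))
Kp = 5.0 * Kd                           # zero at -Kp/Kd = -5
print(round(s_break_in, 3), round(Kd, 2), round(Kp, 1))
```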
So we have achieved the effect of stabilizing the system without any overshoot
by adding the derivative portion of the compensator. It is important to remember the
practical issues associated with implementing the derivative controller due to the
noise amplification problems. A good controller design on paper may actually
harm the system when implemented if the issues described earlier in this section
are not considered and addressed.
EXAMPLE 5.4
Using the system represented by the block diagram in Figure 18, find the required
gains with a PID controller to give the system a natural frequency of 10 rad/sec and
damping ratio of 0.7. First, let us see how the system currently responds with only a
proportional controller and what has to be changed. With a proportional controller
there are two asymptotes and the intersection point is at -4. The system gain can be
varied to produce anywhere from an overdamped to underdamped response, but the
root locus path does not go through the desired points.
The desired pole locations can be found directly from the performance speci-
fications since the natural frequency is the radius and damping ratio is the cosine of
the angle (a line with an angle of 45 degrees from the negative real axis). Thus, the
desired locations are calculated at
Real component: -ζωn = -(0.7)(10) = -7
Imaginary component: ωd = (10^2 - 7^2)^0.5 ≈ 7, so the desired poles are s = -7 ± 7j
Without the I and D gains the loci paths follow the asymptote at -4. To
illustrate an alternative method, we will first solve for the gains analytically and
verify them using root locus plot. To solve for the gains we first close the loop
and derive the characteristic equation
CE: s^3 + (8 + 10Kd)s^2 + (15 + 10Kp)s + 10Ki = 0
To equate the coefficients we still need to place the third pole to write the desired
characteristic equation. Let us make it slightly faster than the complex poles and
place it at -10. Our desired characteristic equation becomes

CE_desired: (s + 10)(s^2 + 14s + 100) = s^3 + 24s^2 + 240s + 1000 = 0

Equating coefficients gives Kd = 1.6, Kp = 22.5, and Ki = 100. When the root locus
plot is drawn with these gains, two of the loci paths end at the controller
zeros, while the third path just leaves the pole at -5 and follows the asymptote to
the left. The resulting plot is drawn in Figure 20.
After developing the root locus plot, we see that finding the desired character-
istic equation and using the gains to make the coefficients match results in the poles
traveling through the desired locations. This method will not work for all systems
depending on the number of equations and unknowns. An alternative method using
contour lines is presented in a later example.
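Placing the third pole at -10 as described above, the coefficient matching reduces to simple arithmetic, sketched here in Python:

```python
# Desired CE: (s + 10)(s^2 + 14s + 100) = s^3 + a2*s^2 + a1*s + a0,
# matched against s^3 + (8 + 10Kd)s^2 + (15 + 10Kp)s + 10Ki.
a2 = 14.0 + 10.0            # 24
a1 = 100.0 + 14.0 * 10.0    # 240
a0 = 100.0 * 10.0           # 1000

Kd = (a2 - 8.0) / 10.0      # from 8 + 10*Kd = a2
Kp = (a1 - 15.0) / 10.0     # from 15 + 10*Kp = a1
Ki = a0 / 10.0              # from 10*Ki = a0
print(Kd, Kp, Ki)
```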
A good approach to take when we have multiple gains (or parameters that
change) is just placing the poles and zeros that we have control over in locations that
we know cause the desired changes. We know how the valid sections of real axis,
asymptotes, poles ending at zeros, etc., will all affect our plot and we can use our
knowledge when we design our controller.
Finally, since we added an integral compensator, we made the system go from
type 0 to type 1 and we will not have steady-state errors when the input function is a
step (or any function with a constant final value).
EXAMPLE 5.5
To illustrate the use of computer tools, we work the previous examples using Matlab
to generate the root locus plots and solve the desired gains, taking the system from
Example 5.1 as shown here in Figure 21.
To solve for the proportional gain where the system does not experience over-
shoot and has the fastest possible settling time, we will define the system in Matlab,
generate the root locus plot, and use rlocfind to locate the gain. The commands in m-
file format are as follows:
%Matlab commands to generate Root Locus Plot and find desired gain K
sys=tf([1 6],conv([1 0],conv([1 4],[1 5]))); %Define the open loop transfer function
rlocus(sys) %Generate Root Locus Plot
k=rlocfind(sys) %Find the gain at the selected pole location
Executing these commands then produces the root locus plot in Figure 22 and allows
us to place the cross hairs at our desired pole location and click. After doing so we
will see back in the workspace area the gain K that moves us to that location.
After clicking on the break-away point we find that the gain must be K = 3, or
exactly what we found earlier when solving this problem manually. From this point
we can easily modify the numerator and multiply it by 3, make a new transfer
function (linear time invariant [LTI] variable) and simulate the step or impulse
response of the system. In addition, using rltool we can interactively add poles
and zeros in both the forward and feedback loops and observe their effects in real
time. We can drag the poles or zeros around the s plane and watch the root locus
plots as we do so. As was mentioned earlier, most computer packages designed for
control systems have similar abilities.
EXAMPLE 5.6
In this example we again take a system from earlier and use Matlab to design a PI
controller to remove the steady-state error (from a step input) and then choose the
fastest possible settling time. The system block diagram with the controller added is
shown in Figure 23.
In Matlab we define three transfer functions: the plant, the controller with
Ki/Kp = 1, and the controller with Ki/Kp = 5.
%Matlab commands to generate Root Locus Plot and find desired gains Kp and Ki
sysp=tf([4],[1 6 5]); %Define the plant transfer function
z1=1; z2=5; %Define ratios of Ki/Kp = 1 and 5
syspi1=tf([1 z1],[1 0]); %Controller transfer function with Ki/Kp=1
syspi5=tf([1 z2],[1 0]); %Controller transfer function with Ki/Kp=5
subplot(1,2,1);
rlocus(syspi1*sysp) %Generate Root Locus Plot
subplot(1,2,2);
rlocus(syspi5*sysp) %Generate Root Locus Plot
When we execute these commands, it produces one plot window with two subplots
using the subplot command, shown in Figure 24. We see that the results correspond
well with the results from the earlier example.
When Ki/Kp = 1 the system settles much faster since the poles are further to
the left. This relies on canceling a pole by using a zero, and if we are slightly off, large
or small, the results become as shown in Figure 25. Here we see that even if our zeros
are only slightly away from the pole that was intended to be canceled, the root locus
plot changes and a very different response is obtained. In general it is not considered
good practice to rely on mathematically canceling poles with zeros. Remember that
our system is only a linear approximation to begin with, let alone additional errors
that may be introduced which will cause the poles to shift.
To finish this example, let us close the loop using the original system with only
a proportional controller and Kp = 2, followed by our PI controller where Kp = 2
and Ki = 2. We will use Matlab to generate a step response for both systems and
compare the steady-state errors. The responses are given in Figure 26.
As expected from previous discussions, the steady-state error went to zero
when we added the PI controller and the system became type 1. Also, as noted
from the root locus plots, adding a pole at the origin, as the integrator does, hurts
our transient response and lengthens the settling time. If overshoot is allowable, we
could also make the P controller more attractive by increasing the gain and allowing
slightly more overshoot.

Figure 25 Example: Matlab root locus plots with PI controllers and errors.
EXAMPLE 5.7
Using the block diagram in Figure 27, use Matlab to design a PD controller that
stabilizes the system and limits the overshoot to less than 5%.
In this system we see that the plant transfer function is unstable due to the pole
in the RHP. To solve this using Matlab, we will generate the root locus plot using a
simple proportional controller (basic root locus plot) and using a PD controller to
stabilize the system. The commands used are as follows:
sysp=tf(5,[1 -4 -5]); %Define the unstable plant transfer function
syspd=tf([1 5],1); %PD controller with zero at -5
subplot(1,2,1);
rlocus(sysp) %Generate Root Locus Plot w/ P controller
subplot(1,2,2);
rlocus(syspd*sysp) %Generate Root Locus Plot w/ PD controller
k=rlocfind(syspd*sysp);
sys1=tf(5,[1 -4 -5]); % Compare the step responses using P and PD
sys2=tf(5*5*[1 5],[1 -4 -5]);
sys1cl=feedback(sys1,1) %Close the loop
sys2cl=feedback(sys2,1)
figure; %Create a new figure window
step(sys1cl); hold; %Create step response for P controller
step(sys2cl); %Create step response for PD controller
When we plot the step responses it is easy to see that the system controlled
only with a proportional gain quickly goes unstable, while the PD controller
stabilizes the system and minimizes the overshoot. The step responses are given
in Figure 29.
The interesting point that should be made is that the proportional gain chosen
for the PD controller was intended to be at the repeated roots intersection and yet
the step response clearly shows an overshoot of the final value. What we must
remember is that the second-order system is no longer a true second-order system
but includes a first-order term in the numerator (the zero from the controller). This
alters the response as shown in Figure 29 to where the system now does experience
slight overshoot.
The effect of adding the derivative compensator clearly demonstrates the added
stability it provides. The disadvantage, as discussed earlier, is that the system
becomes much more susceptible to noise amplification because of the derivative
portion.
EXAMPLE 5.8
Recalling that we solved for the required PID gains in Example 5.4, we now will use
Matlab to verify the root locus plot and also check the step response of the com-
pensated system. We tuned the system using analytical methods to have a damping
ratio equal to 0.7 and a natural frequency equal to 10 rad/sec. Using the gains from
earlier allows us to use the system block diagram as given in Figure 30 and simulate
it using Matlab.
To achieve the damping ratio and natural frequency, we know that the
poles should go through the points s = -7 ± 7j. Using the Matlab commands
given below, we can generate the root locus plot and the step response for the
compensated system. The sgrid command allows us to place lines of constant
damping ratio (0.7) and natural frequency (10 rad/sec) on the s-plane. This
command, in conjunction with rlocfind, allows us to find the gain at the desired
location.
sysp=tf(10,conv([1 3],[1 5])); %Define the plant transfer function
syspid=tf([1.6 22.5 100],[1 0]); %PID controller (Kd, Kp, Ki from Example 5.4)
rlocus(syspid*sysp) %Generate Root Locus Plot
sgrid(0.7,10); %Lines of constant damping ratio and natural frequency
syscl=feedback(syspid*sysp,1)
figure; %Create a new figure window
step(syscl); %Create step response for the compensated system
The gains solved for earlier provide the correct compensation, and the root
locus paths go directly through the intersection of the lines representing our desired
damping ratio and natural frequency. Once again, we can verify the results by using
Matlab to generate a step response of the compensated system as given in Figure 32.
As we see, the compensated system behaves as desired and the response quickly
settles to the desired value. In the next example we see how to generate contours
to see the results of not only varying the proportional gain (assumed to be varied in
generating the plot), but also the effects from varying additional gains. The result is a
family of plots called contours.
EXAMPLE 5.9
In this example we wish to design a P and PD controller for the system block
diagram shown in Figure 33 with a unity feedback loop and compare the results.
Let us now use Matlab to tune the system to have a damping ratio of 0.707 and
to solve for the gain that makes the system go unstable. The Matlab commands used
to generate the root locus plot in Figure 34 and to solve for the gains are given
below.
%Program commands to generate Root Locus Plot
% and find various gains, K
num=1; %Define numerator
den=conv([1 0],conv([1 1],[1 3])); %Define denominator from poles
sys1=tf(num,den) %Make LTI System variable
When we examine this plot, we see that we can achieve any damping ratio but that
the natural frequency is then defined once the damping ratio is chosen. Using the
rlocfind command we can find the gain where the system has a damping ratio equal
to 0.707 and where the system goes unstable.
The system becomes marginally stable at Kp = 12 and has our desired damping
ratio when Kp = 1. The corresponding natural frequency is equal to 0.6 rad/sec. If
we want to change the shape of the root locus paths, we need to have something
more than the proportional gain; the PD controller is shown in Figure 35.

Figure 35 Example: block diagram for PD contours on root locus plot (Matlab).

GH = (Kp + Kd·s) / [s(s + 1)(s + 3)] = Kp(1 + (Kd/Kp)s) / [s(s + 1)(s + 3)]
Remember that Matlab varies a general gain in front of the transfer function, and
thus GH must be written as above with Kp factored out. The contours are actually
plotted as functions of Kd/Kp. To generate the plot, we use the following commands:
%Program commands to generate Root Locus Contours
% and find various gains, Kp and Kd
den=conv([1 0],conv([1 1],[1 3]));
Kd=0; %First value of Kd
num=[Kd 1]; %Define numerator
%Define denominator from poles
sys2=tf(num,den) %Make LTI System variable
rlocus(sys2) %Generate Root Locus Plot
sgrid(0.707,2); %place lines of constant damping
%ratio (0.707) and w (2 rad/s)
hold;
Kd=0.3; num=[Kd 1]; sys2=tf(num,den);
rlocus(sys2)
Kd=1; num=[Kd 1]; sys2=tf(num,den);
rlocus(sys2)
Kd=10; num=[Kd 1]; sys2=tf(num,den);
rlocus(sys2)
hold; %Releases plot
The root locus plot developed in Matlab is given in Figure 36, where we see
that using the hold command allows us to choose different controller gains and plot
several plots on the same s-plane. Here we see the advantage of derivative control
and how we can use it to move the root locus to more desirable regions. Even using
the damping ratio criterion from the P controller we are able to achieve faster
responses by moving all the poles further to the left. If we are having problems
with stability and can control/filter the noise in the system, then it will be beneficial
to add derivative control.
In working through this section and examples, we should see how the tools
developed in the first three chapters all become a key component in designing stable
control systems. The usefulness of our controller design is directly related to the
accuracy of our model; if our model is poor, so likely will be our result.
Understanding where we want to place the poles and zeros is also of critical impor-
tance. Since many computer packages will develop root locus plots, we need to know
how to interpret the results and make correct design decisions.
phase margin. To begin our discussion of designing PID controllers using frequency
domain techniques, let us first define the Bode plots of the individual controller
terms.
The proportional gain, as in root locus, does not allow us to change the shape
of the Bode plot or the phase angle, but rather allows us only to vary the height of
the magnitude plot. Thus, we can use the proportional gain to adjust the gain margin
and phase margin, but only as is possible by raising or lowering the magnitude plot.
Remember that when we adjust the gain margin and phase margin for our open loop
system, we are indirectly changing the closed loop response. The phase margin
relates the closed loop damping ratio and the crossover frequency to the closed
loop natural frequency. It is easy to determine the proportional controller gain K
that will cause the system to go unstable by measuring the gain margin. If K is
increased by the amount of the gain margin, the system becomes marginally stable.
Assuming the gain margin, GM, is measured in decibels, then K is found as
K = 10^(GM_dB / 20)
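As a small illustration of this relation (a Python sketch; the helper name is ours):

```python
# Converting a gain margin in decibels into the multiplicative gain that
# would make the loop marginally stable: K = 10**(GM_dB/20).
def critical_gain(gm_db):
    return 10.0 ** (gm_db / 20.0)

print(critical_gain(20.0))   # a 20 dB margin leaves room for a factor of 10
```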
The integral gain has the same effects in the frequency domain as in the s-domain
where we saw it eliminate the steady-state error (in many cases, not all) and also tend
to destabilize the system (moves the loci paths closer to the origin). In the frequency
domain it adds a constant slope of -20 dB/decade (dec) to the low frequency
asymptote, which gives our system infinite steady-state gain as the frequency
approaches zero. The side effect concerns stability, where we also see that the integral
term adds a constant -90 degrees of phase angle to the system. This tends to
destabilize the system by decreasing the phase margin in the system. Some of the
phase margin might be reclaimed if after adding the integral gain we can use a lower
proportional gain since the steady-state performance is determined more by the
integral gain than the proportional gain. In other words, if we had a large propor-
tional gain for the reason of decreasing the steady-state error (previously to adding
the integral gain), while allowing more overshoot because of the high proportional
gain, then after adding the integral gain to help our steady-state performance, we
may be able to reduce the proportional gain to help stabilize (reduce overshoot) our
overall system. This effect is easily seen in the frequency domain, where we achieve
high steady-state gains in our system by adding the low frequency slope of -20 dB/dec
and at the same time lower the overall system magnitude plot by reducing the
proportional gain K in our system.
Finally, the same comparisons between the s-domain and frequency domain
hold true when discussing the derivative gain. In the s-domain we used the derivative
gain to attract root loci paths to more desirable locations by placing the zero from
the controller where we wanted it on the s plane. In the frequency domain we
stabilize the system by adding phase angle into our system. A pure derivative term
adds 90 degrees of phase angle into our system, thus increasing our phase margin
(and corresponding damping ratio).
The advantage of Bode plots is seen where with knowledge of controller
shapes we can properly pick the correct controller to shape our overall plot to
the desired performance levels. To design our controller, we determine the amount of
magnitude and phase lacking in the open loop system and pick our controller (type
and gains) to add the desired magnitude and phase components to the open loop
228 Chapter 5
plot. This is quite simple using the PI, PD, and PID Bode plots shown in Figures 37,
38, and 39, respectively. As is clear in the frequency domain contributions shown in
the figures, PID plots combine the features of PI and PD. Any time that we design a
controller, we must remember that we still need the instrumentation and components
capable of implementing the controller.
In summary, to design a P, PI, PD, or PID controller in the frequency domain,
simply draw the open loop Bode plot of our system and find out what needs to be
added to achieve our performance goals. Use the proportional gain to adjust the
height of the magnitude curve, the integral gain to give us infinite gain at steady-state
(as ω approaches 0, the -20 dB/dec slope goes to infinity), and the derivative gain to add
positive phase angle. Each controller factor can be added to the existing plot
using the same procedure as when developing the original plot.
EXAMPLE 5.10
Using the block diagram in Figure 40, design a proportional controller in the fre-
quency domain which provides a system damping ratio of approximately 0.45.
To find the equivalent open loop phase margin that will give us approximately
ζ = 0.45 when the loop is closed, we will use the approximation that PM ≈ 100ζ.
This results in us wanting to determine the gain K required to give us an open loop
phase margin of 45 degrees. To accomplish this we will draw the open loop uncom-
pensated Bode plot, determine the frequency where the phase angle is equal to -135
degrees (45 degrees above -180 degrees), and measure the corresponding magnitude
in dB. If we add the gain K required to make the magnitude plot cross the 0-dB line
at this frequency, then the phase margin becomes our desired 45 degrees (remember
that the phase plot is not affected by K). Our open loop Bode plot is given in Figure
41.
Examining Figure 41, we see that at 10 rad/sec the phase angle is equal to
-135 degrees and the corresponding magnitude is equal to -20 dB. Therefore, to
make the phase margin equal to 45 degrees we need to raise the magnitude plot
20 dB and make the crossover frequency equal to 10 rad/sec. We can calculate our
required gain as

K = 10^(GM_dB / 20) = 10^(20/20) = 10
When we add the proportional gain of 10 to the open loop uncompensated Bode
plot, the result is the compensated Bode plot in Figure 42.
Now we see that we have achieved the desired result, a phase margin equal to
45 degrees. This gives us an approximate damping ratio of 0.45 with the closed loop
system. As before with root locus plots, the proportional controller does not allow us
to change the shape, only the location. We must incorporate additional terms if we
wish to modify the shape of the Bode plot. Also, we will still experience steady-state
error from a step input. With the proportional gain we only have a type 0 system.
Since our low frequency asymptote is at 20 dB (compensated system), corresponding
to our gain of 10, we will have a steady-state error from a unit step input equal to
1/(K + 1), or 1/11.
For this example, let us close the loop manually and find the characteristic
equation to verify the results obtained in the frequency domain. When the loop is
closed, it results in the following characteristic equation:
CE: 0.1s^2 + 1.1s + 1 + K = 0
If we let K = 10, we can solve for the roots of the characteristic equation as

s1,2 = [-1.1 ± sqrt(1.1^2 - 4(0.1)(11))] / [2(0.1)] = -5.5 ± 9j
The damping ratio is calculated as

ζ = cos θ = cos[tan^-1(9/5.5)] = 0.52 ≈ 0.5
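The closed loop roots and damping ratio can be reproduced with a short Python sketch (same characteristic equation, K = 10):

```python
import cmath
import math

# Roots of 0.1*s^2 + 1.1*s + (1 + K) with K = 10, then the damping ratio
# from the pole angle, zeta = cos(theta).
a, b, c = 0.1, 1.1, 11.0
disc = cmath.sqrt(b * b - 4.0 * a * c)
s1 = (-b + disc) / (2.0 * a)

zeta = math.cos(math.atan2(abs(s1.imag), -s1.real))
print(round(s1.real, 2), round(abs(s1.imag), 2), round(zeta, 2))
```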
So we see that even with the straight line approximation and the open loop to
closed loop approximation, the design methods in the frequency domain quickly
gave us a value close to our desired damping ratio. The advantages of working in
the frequency domain are not so evident in this example, where we knew the system
model, but become much clearer when we are given Bode plots for our system and are
able to quickly design our controller directly from the existing plots. This bypasses several
intermediate steps and is also quite easy to do without complex analysis tools.
EXAMPLE 5.11
In this example we will further enhance the steady-state performance of the system
from the previous example (Figure 40) by adding a PI controller while maintaining
the same goal of a closed loop damping ratio equal to 0.45. The contributions of a PI
controller in the frequency domain were given in Figure 37 as:
TF_PI = Kp + Ki/s = Ki(1 + (Kp/Ki)s) / s
The contributions can be added as three distinct factors: a gain Ki, a first-order lead
with a break frequency of Ki/Kp, and an integrator. If we choose Ki = 10 and Kp =
10, then we will add a low frequency asymptote of -20 dB/dec with no change in the
magnitude curve after 1 rad/sec. The phase angle is decreased an additional 90
degrees at 0.1 rad/sec, 45 degrees at 1 rad/sec (the break frequency of the
numerator), and is unchanged after 10 rad/sec. Since we have raised the overall magnitude by a
factor of 10 (Ki ) and have not altered the phase angle at 10 rad/sec, the resulting
phase margin is identical to the previous example and equal to 45 degrees at a
crossover frequency of 10 rad/sec. If we show the additional factors on a Bode plot
and add them to the open loop uncompensated system, the result is the compensated
Bode plot given in Figure 43.
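The exact PI phase contribution can be compared against the straight-line values quoted above (a Python sketch with Ki = Kp = 10 as chosen here; the exact angles are about -84, -45, and -5.7 degrees at 0.1, 1, and 10 rad/sec):

```python
import math

# Phase of the PI controller Kp + Ki/(j*w); for w > 0 this equals
# atan(w*Kp/Ki) - 90 degrees, approaching 0 well above the break frequency.
def pi_phase_deg(w, Kp=10.0, Ki=10.0):
    return math.degrees(math.atan(w * Kp / Ki)) - 90.0

print(round(pi_phase_deg(0.1), 1), round(pi_phase_deg(1.0), 1),
      round(pi_phase_deg(10.0), 1))
```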
So by proceeding from a simple proportional compensator to a proportional
+ integral compensator, we have achieved the damping ratio of approximately 0.45
while eliminating our steady-state error from step inputs. There are different ways
this can be designed while still achieving the design goals since we have two gains and
one goal. If the crossover frequency was also specified, it would likely require dif-
ferent gains to optimize the design relative to meeting both goals.
EXAMPLE 5.12
The final example demonstrating design techniques in the frequency domain will be
to stabilize an open loop marginally stable system using a PD controller. The open
loop system is G(s) = 1/s^2. A stabilizing controller is required (i.e., PD) since the
system is open loop marginally stable. The performance specifications for the open
loop compensated system are
Phase margin of approximately 45 degrees.
Crossover frequency of approximately 1 rad/sec.
Therefore, using our open loop to closed loop approximations, we would
expect the closed loop controlled system to have a damping ratio of 0.45 and a
natural frequency of 1.2 rad/sec (see Figure 49 of Chapter 4: at ζ = 0.45,
ωc/ωn = 0.82). We start by drawing the open loop Bode plot as shown in Figure 44.
We have a constant phase of -180 degrees, a crossover frequency of 1 rad/sec,
and a constant slope of -40 dB/dec. The system is marginally stable and needs a
controller to stabilize it. The PD controller, to meet the specifications, must keep the
crossover frequency at its current location while adding 45 degrees of phase to the
system. The PD controller is a first-order system in the numerator given as
Kd
Gc s Kp 1 s
Kp
Therefore, it will add 45 degrees of phase at the break point, ω = Kp/Kd. Since we
want a phase margin equal to 45 degrees, making Kp/Kd = 1 meets the phase requirements.
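The phase bookkeeping at the crossover can be sketched in a few lines of Python (Kp/Kd = 1 as selected above):

```python
import math

# Phase margin at the 1 rad/s crossover: the plant 1/s^2 contributes a
# constant -180 degrees, and the PD term (1 + j*w*(Kd/Kp)) adds its angle.
w = 1.0                                        # crossover frequency, rad/s
plant_phase = -180.0
pd_phase = math.degrees(math.atan(w * 1.0))    # Kd/Kp = 1, so +45 deg here
pm = 180.0 + plant_phase + pd_phase
print(pm)
```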
A number of tuning methods are present, and the one covered here is the Ziegler-Nichols method. The guidelines are
mathematically derived to result in a 25% overshoot to a step input and a one-fourth
wave decay rate, meaning each successive peak will be one-fourth the magnitude of
the previous one. This is a good balance between quickly reaching the desired
command and settling down. In some cases this may be slightly too aggressive and
slightly less proportional gain could be used. Two variations exist, the first based on an open loop step response curve and the second on obtaining sustained oscillations in the proportional control mode. The step response method works well for type 0 systems with one dominant real pole (i.e., no overshoot). The ultimate
cycle method is based on oscillations and must therefore have a set of complex
conjugate dominant roots for the system to oscillate. To facilitate the procedure,
the following notation is introduced for PID controllers:
Gc = Kp(1 + 1/(Ti s) + Td s)
Instead of an integral gain and derivative gain, an integral time and derivative time
are used. Since we can switch between the two notations quite easily, it is simply a
matter of personal preference. Ti and Td tend to be more common when using the
Ziegler-Nichols method since when measuring signal output on an oscilloscope they
correspond directly and simplify the tuning process.
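Switching between the two notations is a matter of two substitutions, Ki = Kp/Ti and Kd = Kp·Td. A small sketch (Python, with hypothetical numbers) illustrates the conversion:

```python
# Hedged sketch of converting between the two PID notations:
#   Kp*(1 + 1/(Ti*s) + Td*s)  <=>  Kp + Ki/s + Kd*s
# with Ki = Kp/Ti and Kd = Kp*Td.
def gains_from_times(Kp, Ti, Td):
    """Return (Kp, Ki, Kd) for the given (Kp, Ti, Td)."""
    return Kp, Kp / Ti, Kp * Td

# Hypothetical values for illustration:
print(gains_from_times(2.0, 4.0, 0.5))  # (2.0, 0.5, 1.0)
```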
In the first case, the goal is to obtain an open loop system response to a unit
step input. This should look something like Figure 46. Simply measure the delay time
and rise time as shown and use Table 1 to calculate the controller gains depending on
which controller you have chosen.
If we examine the PID settings a little closer and substitute the tuning para-
meters into Gc in place of Kp , Ti , and Td , it results in a controller transfer function
defined as
Gc = 0.6T (s + 1/D)²/s
Here we see that this tuning method places the two zeros at −1/D on the real axis
along with the pole that is always placed at the origin (the integrator).
The second method, defining an ultimate cycle, Tu , is useful when the critical
gain can be found and oscillations sustained. To find the critical gain, use only the
proportional control action (turn off I and D) and increase the proportional gain
until the system begins to oscillate. Record the current gain and capture a segment of
time on an oscilloscope for analysis. The measurement of Tu can be made as shown
in Figure 47 to allow the gains to be calculated according to Table 2.
Once again, if we examine the PID settings more closely and substitute the
tuning parameters into Gc in place of Kp , Ti and Td , the controller transfer function
becomes
Gc = 0.075 Ku Tu (s + 4/Tu)²/s
Similar to before, we see that this tuning method places the two zeros on the real axis
at −4/Tu along with the pole that is always placed at the origin.
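The factored form above can be verified numerically; the sketch below (Python, with hypothetical Ku and Tu measurements) confirms that the ultimate-cycle PID settings expand to 0.075·Ku·Tu·(s + 4/Tu)²/s:

```python
# Hedged numeric check that Kp*(1 + 1/(Ti*s) + Td*s), with the
# ultimate-cycle settings Kp = 0.6*Ku, Ti = 0.5*Tu, Td = 0.125*Tu,
# equals 0.075*Ku*Tu*(s + 4/Tu)**2/s at an arbitrary test point.
Ku, Tu = 3.0, 2.0              # hypothetical critical gain and period
Kp, Ti, Td = 0.6 * Ku, 0.5 * Tu, 0.125 * Tu
s = 1.7 + 0.9j                 # arbitrary complex test point
parallel = Kp * (1 + 1 / (Ti * s) + Td * s)
factored = 0.075 * Ku * Tu * (s + 4 / Tu) ** 2 / s
print(abs(parallel - factored) < 1e-12)  # True
```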
It is also possible to use the Ziegler-Nichols method analytically by simulating
either the step response or determining the ultimate gain at marginal stability.
Although this might serve as a good starting point, more options can be explored
using the root locus and Bode plot techniques from the previous section.

Table 1  Ziegler-Nichols settings from the step response method

Controller type    Kp         Ti       Td
P                  T/D        ∞        0
PI                 0.9 T/D    D/0.3    0
PID                1.2 T/D    2D       0.5 D

For example, sometimes it is advantageous to place the two zeros with imaginary components
to better control the loci beginning at the dominant closed loop complex conjugate poles. Nonetheless, this is a common method used frequently on the job, and it provides at least a good starting point, if not a decent solution.
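For reference, the step-response tuning rules of Table 1 can be sketched as a small function (a Python illustration; T and D are the measured time constant and delay, and the Ti entry for pure P control is taken as infinite, i.e., no integral action):

```python
# Hedged sketch of the Ziegler-Nichols step-response rules (Table 1).
# T is the measured time constant and D the measured delay time.
def zn_step_tuning(T, D, kind="PID"):
    rules = {
        "P":   (T / D,       float("inf"), 0.0),
        "PI":  (0.9 * T / D, D / 0.3,      0.0),
        "PID": (1.2 * T / D, 2.0 * D,      0.5 * D),
    }
    return rules[kind]  # (Kp, Ti, Td)

# Hypothetical measurements: T = 2.0 sec, D = 0.5 sec
print(zn_step_tuning(2.0, 0.5))  # -> Kp ~ 4.8, Ti = 1.0, Td = 0.25
```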
Phase lag: Gc = K(T2 s + 1)/(T1 s + 1)

Phase lead: Gc = K(T1 s + 1)/(T2 s + 1)
Table 2  Ziegler-Nichols settings from the ultimate cycle method

Controller type    Kp         Ti        Td
P                  0.5 Ku     ∞         0
PI                 0.45 Ku    Tu/1.2    0
PID                0.6 Ku     0.5 Tu    0.125 Tu
where T1 > T2. As demonstrated in the following sections, the lag and lead portions may be designed separately and combined after both are completed.
In general, lag-lead and PID controllers are interchangeable, with minor differences, advantages, and disadvantages.
EXAMPLE 5.13
Using the block diagram of the system represented in Figure 49, find the propor-
tional gain K that results in a closed loop damping ratio of 0.707. Design a phase-lag
controller that maintains the same system damping ratio and results in a steady-state
error less than 5%. Predict the steady-state errors that result from a unit step input
for both cases.
Step 1: We begin by drawing the root locus plot for the uncompensated system
and calculating the gain K required for our system damping ratio of 0.707. This
damping ratio corresponds to the point where the loci paths cross the line extending
radially from the origin at an angle of 45 degrees from the negative real axis. The
root locus plot for this system is straightforward and consists of the section of real
axis between −3 and −5 with the break-away point at −4. The asymptotes and the paths both leave the real axis at angles of ±90 degrees as shown in Figure 50.
So for our desired damping ratio we want to place our poles at s = −4 ± 4j. To find the necessary proportional gain K we can apply the magnitude condition at one of our desired poles. Our gain is calculated as

K · 15/(|s + p1||s + p2|) = K · 15/(√(1² + 4²) · √(1² + 4²)) = K · 15/17 = 1

K = 1.13
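This gain calculation is easy to check numerically (a Python sketch, not from the text):

```python
# Hedged check of the magnitude condition for G(s) = 15/((s+3)(s+5))
# at the desired closed loop pole s = -4 + 4j: K*|G(s)| = 1.
s = complex(-4, 4)
G = 15 / ((s + 3) * (s + 5))
K = 1 / abs(G)
print(round(K, 2))  # 1.13
```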
Our total loop gain is 15K, or 17, and since we have a type 0 system our error from a step input is 15/(15K + 15), or 15/32. This is almost 50% and results in very poor
steady-state performance. To meet our steady-state error requirement, we need the
total gain greater than or equal to 19; this results in an error less than 1/20, or 5%.
Since the proportional gain already provides a gain of 1.13 as determined by our
transient response requirements, we need another gain factor equal to 20/1.13 (or
17.7). To exceed our requirement and demonstrate the effectiveness of phase-lag
compensation, we will set the gain contribution from our phase-lag factor equal to
20.
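The steady-state error arithmetic can likewise be verified (Python sketch; the factor of 20 is the DC gain the phase-lag term will contribute):

```python
# Hedged check of the type 0 step-input errors, ess = 1/(1 + K_dc).
K = 17 / 15                      # proportional gain from the magnitude condition
G0 = 15 / (3 * 5)                # plant DC gain, G(0) = 1
ess_prop = 1 / (1 + K * G0)      # proportional control only
ess_lag = 1 / (1 + 20 * K * G0)  # with the phase-lag DC gain of 20
print(round(ess_prop, 3))  # 0.469 (about 15/32)
print(round(ess_lag, 3))   # 0.042, below the 5% requirement
```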
Step 2: Now that we know what gain is required for the phase-lag term, we can
proceed to place our pole and zero. The pole must be closer to the origin to increase
our gain. This results in a slight negative phase angle introduced into the system since
the angle from the controller pole is always slightly greater than the angle from the
zero (as it is always further to the left). We minimize this effect by keeping the pole
and zero close. For this example let us place the pole at −0.02 and the zero at −0.4; this gives us our additional gain of 20 without significantly changing our original root
locus plot. Thus we can describe our phase-lag controller transfer function as
Gc(Phase lag) = (s + 0.4)/(s + 0.02) = 20(2.5s + 1)/(50s + 1)
Step 3: To verify our design, we can add the new pole and zero to the root locus
plot in Figure 50 and redraw it as given in Figure 51. So we see that the root locus
plot does not significantly change because the pole and zero added by the controller
almost cancel each other. If we wish to calculate the amount of phase angle that
was added to our system, we can approximate it by calculating the angle that the new
pole and zero each make with our desired operating point, s = −4 + 4j.

Angle from pole: φp = 180° − tan⁻¹(4/3.98) = 134.86 degrees
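The angle bookkeeping can be finished numerically; the Python sketch below (using the controller zero at −0.4 and pole at −0.02 chosen above) shows that the net phase the lag pair introduces at s = −4 + 4j is small:

```python
import cmath, math

# Hedged check: angles the phase-lag pole (-0.02) and zero (-0.4)
# make with the design point s = -4 + 4j, and the small net phase.
s = complex(-4, 4)
ang_pole = math.degrees(cmath.phase(s + 0.02))
ang_zero = math.degrees(cmath.phase(s + 0.4))
print(round(ang_pole, 2))             # 134.86, matching the text
print(round(ang_zero - ang_pole, 2))  # about -2.87 degrees net
```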
EXAMPLE 5.14
Verify the system and controller designed in Example 5.13 using Matlab. Use Matlab
to generate the uncompensated and compensated root locus plots, determine the
gain K for a damping ratio of 0.707, and compare the unit step responses of the
uncompensated and compensated system to verify that the steady-state error require-
ment is met when the phase-lag controller is added.
From Example 5.13 recall that the open loop system transfer function and the
resulting phase-lag controller were given as
G(s) = 15/((s + 3)(s + 5))          (open loop system)

Gc(Phase lag) = (s + 0.4)/(s + 0.02) = 20(2.5s + 1)/(50s + 1)
To verify this controller using Matlab, we can define the original system transfer
function, the phase-lag transfer function, and generate both the root locus and step
response plots where each plot includes both the uncompensated and compensated
system. The Matlab commands are given below.
%Program commands to generate Root Locus and Step Response Plots
%for Phase-lag example
clear;
K=1.13;
numc=[1 0.4]; %Place zero at -0.4
denc=[1 0.02]; %Place pole at -0.02
num=K*15; %Open loop system numerator
den=conv([1 3],[1 5]); %System denominator
sysc=tf(numc,denc); %Controller transfer function
sys1=tf(num,den) %System transfer function
sysall=sysc*sys1 %Overall system in series
rlocus(sys1); %Generate original Root Locus Plot
sgrid(0.707,0) %Draw line of constant damping ratio on plot
k=rlocfind(sys1); %Verify gain at zeta=0.707
hold;
rlocus(sysall); %Add compensated root loci to plot
hold;
figure; %Open a new figure window
step(feedback(sys1,1)); %Generate step response of CL uncompensated system
hold; %Hold the plot
step(feedback(sysall,1)); %Generate step response of CL compensated system
hold; %Release the plot (toggles)
When these commands are executed, we can verify the design from the previous
example. The first comparison is the uncompensated and compensated root locus
plot shown in Figure 52.
It is clear in Figure 52 that the only effect from adding the phase-lag compen-
sator is that the asymptotic root loci paths are slightly curved as we move further
Figure 52 Example: Matlab root locus plot for proportional and phase-lag systems.
from the real axis. When implementing and observing this system, the effects would not be noticeable relative to the root locus paths being discussed. What we must remember is that we now have a pole and zero near the origin, which, as we see for this example, becomes the dominant response. To compare the transient responses,
the feedback loop was closed for the proportional and phase-lag compensated sys-
tems and Matlab used to generate unit step responses for the systems. The two
responses are given in Figure 53. From the two plots our earlier analysis in
Example 5.13 is verified. The uncompensated (relative to phase-lag compensation)
Figure 53 Example: Matlab step responses for proportional and phase-lag compensated
systems.
system, when tuned for a damping ratio equal to 0.707, behaves as predicted with a
very slight overshoot and with a very large steady-state error.
The steady-state error, predicted to be just less than 50% in Example 5.13, is
found to be just less than 50% in the Matlab step response. When the phase-lag
compensator is added, the steady-state error is reduced to less than 5%, as desired,
and we also see the effects of adding the slower pole near the origin as it dominates
our overall response. Since we are allowed 5% overshoot, it would be appropriate to increase the gain in the compensated system, as shown in Figure 54, and improve the response during the transient phases while still meeting that requirement. By increasing the proportional gain by a factor of 4, the total response, as the sum of the second-order original and the compensator, has a shorter settling time, less overshoot, and better steady-state error performance.
Phase-lag controllers are easily implemented with common electrical compo-
nents (Chapter 6) and provide an alternative to the PI controller when reducing
steady-state errors in systems.
5.5.2.2 Outline for Designing Phase-Lead Controllers in the s-Domain
For a phase-lead controller the steps are slightly different: whereas the phase-lag controller was designed to leave the root locus paths essentially unchanged, the goal now is to move the existing loci to more stable or desirable locations. In fact, the beginning point of phase-lead design is to determine the points that we want the loci paths to pass through. The steps that help us design typical phase-lead controllers are listed below.
Step 1: Calculate the dominant complex conjugate pole locations from the
desired damping ratio and natural frequency for the controlled system. These values
might be chosen to meet performance specifications like peak overshoot and settling
time constraints. Once peak overshoot and settling time are chosen, we can convert
Figure 54 Example: Matlab step response of phase-lag compensated system (additional gain
of 4).
them to the equivalent damping ratio and natural frequency and finally into the
desired pole locations.
Step 2: Draw the uncompensated system poles and zeros and calculate the total
angle between the open loop poles and zeros and the desired poles. Remember that
the angle condition requires that the sum of the angles be an odd multiple of 180
degrees. The poles contribute negative phase and zeros contribute positive phase.
Use these properties of the angle condition to calculate the angle that the phase-lead
controller must contribute. These calculations are performed as demonstrated when
calculating the angle of departures/arrivals using the root locus plotting guidelines.
Step 3: Using the calculated phase angle required by the controller, proceed to
place the zero of the controller closer to the origin than the pole so that the net angle
contributed is positive, assuming phase angle must be added to the system to stabi-
lize it. Figure 55 illustrates how the phase-lead controller adds the phase angle.
φpole = tan⁻¹(2/(3 − 2)) = 63.4 degrees and φzero = 90° + tan⁻¹(2/(2 − 1)) = 153.4 degrees
Thus this configuration of the phase-lead controller will add a net of 90 degrees to
the system.
Step 4: Draw the new root locus including the phase-lead controller to verify
the design. If our calculations are correct, the new root locus paths should go directly
through our desired design points. Finish by applying the magnitude condition to
find the proportional gain K required for that position along the root locus paths.
Phase-lead controllers are commonly added to help stabilize the system, and
therefore the desired effect requires adding phase to the system. Since we are adding
a pole and a zero to the system, the net phase change is positive as long as our
controller zero is closer to the origin than our controller pole.
EXAMPLE 5.15
Design a controller to meet the following performance requirements for the system
shown in Figure 56. Note that this system is open loop marginally stable and needs a
controller to make it stable and usable.
A damping ratio of 0.5
A natural frequency of 2 rad/sec
Since this system is open loop marginally stable, we need to modify the root loci and move them further to the left. Thus a phase-lead controller is the appropriate choice.
Step 1: The desired poles must be located on a line directed outward from the
origin at an angle of 60 degrees (relative to the negative real axis) to achieve our
desired damping ratio of ζ = 0.5. The radius must be 2 to achieve our natural frequency of ωn = 2 rad/sec. Taking the tangent of 60 degrees means that our imaginary component must be 1.73 times the real component, while the radius criterion means that the square root of the sum of the squares of the real and imaginary components must equal 2 (Pythagorean theorem). An alternative, and simpler, method is to realize that the real component is just the cosine of the angle (or the damping ratio itself) multiplied by the radius and the imaginary component is simply the sine of the angle multiplied by the radius. The points that meet these requirements are

s1,2 = −1 ± 1.73j
Step 2: The total angle from all open loop poles and zeros must be an odd
multiple of 180 degrees, to meet the angle condition in the s plane. For our system
in this example we have two poles at the origin, each contributing −120 degrees, and one pole at −10 contributing −10.9 degrees. These add to −250.9 degrees, and if s = −1 + 1.73j is to be a valid point along our root locus plot we need to add an additional +70.9 degrees of phase angle to be back at −180 degrees and meet our angle condition. These calculations are shown graphically in Figure 57. Angle from OL poles: −120° − 120° − tan⁻¹(1.73/9) = −250.9°; angle required by controller: −180° − (−250.9°) = +70.9°.
Step 3: To add the +70.9 degrees of phase we need to add the zero and pole such that the zero is closer to the origin than the pole and where the angle from the zero (which contributes positively) is 70.9 degrees greater than the angle introduced by the pole (which contributes negatively). A solid first iteration would be to place the zero at s = −1, where it contributes exactly 90 degrees of phase angle to the system. Since we now know that the controller pole must contribute −19.1 degrees, we can place the pole of the phase-lead controller as shown in Figure 58. Placing a zero at −1 adds tan⁻¹(1.73/0) = +90 degrees. Then the pole must contribute −19.1 degrees, and tan⁻¹(1.73/d) = 19.1 degrees, or d = 5. So the pole must be placed at p = −6. Our final phase-lead controller becomes
Gc(Phase lead) = (s + 1)/(s + 6) = (1/6) · (s + 1)/((1/6)s + 1)
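Before drawing the new root locus, the angle condition can be rechecked with the controller in place; the sketch below (Python, assuming the plant 1/(s²(0.1s + 1)) given for this system) sums the angles at the design point:

```python
import cmath, math

# Hedged check of the angle condition at s = -1 + 1.73j for the loop
# Gc(s)*GH(s) = (s+1) / (s^2*(0.1s+1)*(s+6)): zero at -1, poles at
# 0 (double), -10, and -6.
s = complex(-1, 1.73)
total = math.degrees(
    cmath.phase(s + 1)        # controller zero: +90 deg
    - 2 * cmath.phase(s)      # double pole at the origin
    - cmath.phase(s + 10)     # plant pole at -10
    - cmath.phase(s + 6)      # controller pole at -6
)
print(round(total, 1))  # approximately -180
```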
Step 4: To verify our design we add the zero and pole from the phase-lead
controller to the s-plane and develop the modified root locus plot. We still must
include our two poles at the origin and the pole at 10 from the open loop system
transfer function.
For the root locus plot we have four poles and one zero, and thus three asymptotes at 180 degrees and ±60 degrees. The asymptote intersection point is at s = −5, and the valid sections of real axis fall between −1 and −6 and to the left of −10 (also the asymptote). This allows us to approximate the root locus plot as shown in Figure 59.
Without the phase-lead controller the two poles sitting at the origin immedi-
ately head into the RHP when the loop is closed and the gain is increased. Once the
controller is added, enough phase angle is contributed that it pulls the loci paths
into the LHP before ultimately following the asymptotes back into the RHP. As
designed, the phase-lead causes the paths to pass through our desired design points of s = −1 ± 1.73j. As many previous examples have shown, we can apply the magnitude condition to solve for the required gain K at these points.
EXAMPLE 5.16
Verify the phase lead compensator designed in Example 5.15 using Matlab. Recall
that our loop transfer function, as given in Figure 56, is
GH(s) = 1/(s²(0.1s + 1))
step(feedback(sysc*sysp,sysf),tsys); %Generate step response of CL compensated system
hold; %Release the plot (toggles)
When the commands are executed, the first result is the root locus plot for the
compensated and uncompensated system, given in Figure 60.
As desired, the unstable open loop poles are attracted to our desired loca-
tions when the phase-lead compensator is added to the system. The system still
proceeds to go unstable at higher gains. The complex poles dominate the response
of this system since the third pole is much further to the left (much faster). The
location of the zero, however, will affect the plot and our response is still not a true
second-order response. When the uncompensated and compensated systems are both
subjected to a unit step input, the results of the root locus plot become very clear as
the uncompensated system goes unstable. The two responses are given in Figure 61.
Using Matlab, we see that the phase-lead controller did indeed bring the new
loci through our desired design point and resulted in a stable system. The step
response of our compensated system is well behaved and quickly decays to the
desired steady-state value. For any controller to behave as simulated, we must
remember that it assumes linear amplifiers and actuators capable of moving the
system as predicted. The more aggressive the response we design for, the more powerful (and thus more costly) the actuators required when implementing the design. There are realistic constraints on how fast we actually want our system to be.
5.5.2.3 Outline for Designing Lag-Lead Controllers in the s-Domain
When both transient and steady-state performance need to be improved, we may combine the two previous compensators into what is commonly called a lag-lead controller.
Figure 60 Example: Matlab root locus plot for phase-lead compensated system.
the phase-lag portion by placing the pole and zero near the origin as described in
Section 5.5.2.1.
Step 3: Draw the new root locus including the complete lag-lead controller to
verify the design. If our calculations are correct, the new root locus paths should go
directly through our desired design points and our steady-state error requirements
should be satisfied. Simulating or measuring the response of our system to the
desired input easily determines the steady-state error.
EXAMPLE 5.17
Using the system represented by the block diagram in Figure 62, design a lag-lead
controller to achieve a closed loop system damping ratio equal to 0.5 and a natural
frequency equal to 5 rad/sec. The system also needs to have a steady-state error of
less than 1% while following a ramp input. To design this controller, we will use the
lead portion to place the poles and the lag portion to meet our steady-state error
requirements.
Step 1: Using the damping ratio and natural frequency requirements, we know
that our poles should be at a distance 5 from the origin on a line that makes an angle
of 60 degrees with the negative real axis. This results in desired poles of
s1,2 = −2.5 ± 4.3j
The total angle from all open loop poles and zeros must be an odd multiple of 180
degrees to meet the angle condition in the s-plane. For our system in this example we have a pole at the origin contributing −120 degrees and one pole at −1 contributing −109.1 degrees. These add to −229.1 degrees, and if s = −2.5 + 4.3j is to be a valid point along our root locus plot, we need to add an additional +49.1 degrees of phase angle to be back at −180 degrees and meet our angle condition. This can be achieved by placing our zero at −2.5 and our pole at −7.5. These calculations are shown graphically in Figure 63. Placing a zero at −2.5 adds tan⁻¹(4.33/0) = +90 degrees. Then the pole must contribute −40.9 degrees, and tan⁻¹(4.33/d) = 40.9 degrees, or d = 5. So the pole must be placed at p = −7.5.
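These angle calculations can be confirmed numerically (a Python sketch, not from the text):

```python
import cmath, math

# Hedged check of the angle condition at s = -2.5 + 4.33j with the
# open loop poles at 0 and -1 plus the lead zero (-2.5) and pole (-7.5).
s = complex(-2.5, 4.33)
total = math.degrees(
    cmath.phase(s + 2.5)    # lead zero: +90 deg
    - cmath.phase(s)        # pole at origin: about -120 deg
    - cmath.phase(s + 1)    # pole at -1: about -109.1 deg
    - cmath.phase(s + 7.5)  # lead pole: about -40.9 deg
)
print(round(total, 1))  # approximately -180
```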
The last task in this step is to calculate the proportional gain required to move
us to this location on our plot. To do so we apply the magnitude condition (from the
open loop poles at 0, −1, and −7.5 and zero at −2.5) using our desired pole location.

K · 5|s + z|/(|s + p1||s + p2||s + p3|) = K · (5)(4.33)/(√(2.5² + 4.33²) √(1.5² + 4.33²) √(5² + 4.33²)) = K · 21.65/(√25 √21 √43.75) = 1

K ≈ 7
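The magnitude condition above can be double-checked in one line (Python sketch, using the loop 5(s + 2.5)/(s(s + 1)(s + 7.5)) from this example):

```python
# Hedged check of the magnitude condition at s = -2.5 + 4.33j.
s = complex(-2.5, 4.33)
L = 5 * (s + 2.5) / (s * (s + 1) * (s + 7.5))
K = 1 / abs(L)
print(round(K, 1))  # 7.0
```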
EXAMPLE 5.18
Use Matlab to verify the root locus plot and time response of the controller and
system designed in Example 5.17. Recall that the goals of the system include a
damping ratio equal to 0.5, natural frequency of 5 rad/sec, and less than 1%
steady-state error to a ramp input.
To verify the design we will define the uncompensated and compensated system in
Matlab, generate a root-locus plot for the compensated system, and generate time
response (from a ramp input) plots of the uncompensated and compensated systems.
The commands used are included here.
These commands define the compensated and uncompensated system from Example
5.17 and proceed to draw the root locus plot given in Figure 64. We see that the lag-
lead controller does indeed move our locus paths to go through our desired design points of s1,2 = −2.5 ± 4.3j, giving us our desired damping ratio of 0.5 and natural frequency of 5 rad/sec. The uncompensated root locus plot follows the asymptotes at −0.5 and is not close to meeting our requirements.
To verify the steady-state error criterion we will use Matlab (commands
already given above) to generate the ramp response of the uncompensated and
compensated system. The results are given in Figure 65. Except for a very short
initial time, the compensated system follows the desired ramp input nearly exactly.
The uncompensated physical system response has a much larger settling time and
steady-state error.
This example, in conclusion, illustrates the use of phase-lead and phase-lag
compensation to modify the transient and steady-state behavior of our systems.
As discussed in the introduction to this section, these controllers share many attri-
butes with the common PID controller where the integral gain is used to control
steady-state errors and the derivative gain to modify transient system behavior.
Looking at the parallel design methods as they occur in the frequency domain will
conclude this section on phase-lag and phase-lead controllers.
compare the phase-lead and PD controllers, given in Figure 67, we see the same
parallels.
The corresponding magnitude and angle contributions for the phase-lead and
PD are as follows:
Phase lead: K(T1 s + 1)/(T2 s + 1),   φ = tan⁻¹(T1 ω) − tan⁻¹(T2 ω),   a = 20 log(T1/T2)

PD: Kp(1 + (Kd/Kp)s),   φ = tan⁻¹(ω/(Kp/Kd))
Whereas the phase-lag and PI controllers differed only at low frequencies, here
we see that phase-lead and PD controllers only vary at higher frequencies. A phase-
lead controller does not have infinite gain at high frequencies as compared to the PD
and only contributes positive phase angle over a range of frequencies. The PD
controller continues to contribute 90 degrees of phase angle at all frequencies greater
than Kp/Kd. It is likely that the phase-lead controller will handle noisy situations better than pure derivatives (i.e., PD) since it does not have as large a gain at very high frequencies.
To obtain the Bode plot for the lag-lead controller, since we are working in the
frequency domain and the lag and lead terms multiply, we simply add the two curves
together as shown in Figure 68. As before, the lag-lead compensation is compared
with its equivalence, the common PID. Since the lag-lead compensation is the sum-
mation of the separate phase-lag and phase-lead, the same comments apply. The lag-
lead controller has limited gains at both high and low frequencies, whereas the PID
has infinite gain at those frequencies. Similarly, the lag-lead controller only contributes to the phase angle plot over distinct frequency ranges, while the PID phase begins at −90 degrees and ends at +90 degrees.
EXAMPLE 5.19
Using the system represented by the block diagram in Figure 69, design a controller
that leaves the existing dynamic response alone but that achieves a steady-state error
from a step input of less than 2%. Without any controller (Gc = 1), we can close the loop and determine the current damping ratio and natural frequency.
C(s)/R(s) = 75/(s³ + 9s² + 25s + 100)
Thus, without compensation the system currently has a damping ratio equal to 0.21
and a natural frequency of 3.7 rad/sec.
We can calculate the unit step input steady-state error from either the block
diagram or closed loop transfer function. Using the block diagram, we see that we
have a type 0 system with a gain of 75/25, or 3. Since the steady-state error equals 1/(K + 1), we will have a steady-state error from a step input of one fourth, or 25%. This agrees with what we find by closing the loop and applying the final value theorem to see that the steady-state output becomes 75/100, or an error of 25%.
To achieve a steady-state error less than 2%, we need to have a total gain of 49 in our system (ess = 1/(K + 1)). Since we have a gain of 3, we need to add approximately another gain of 17 (giving a total of 51) to meet our error requirement. To do this we will add a phase-lag controller with a pole at −0.02 and a zero at −0.34, giving us an additional gain of 17 in our system. To begin with, let us examine our
open loop uncompensated Bode plot and our compensator Bode plot to see how this
is achieved. The uncompensated system plot is given in Figure 70 and the phase-lag
contribution is given in Figure 71.
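The error arithmetic for this example can be verified with a short sketch (Python; the DC gains come from the block diagram values quoted above):

```python
# Hedged check of the step-input steady-state errors, ess = 1/(1 + K_dc).
K0 = 75 / 25                # uncompensated DC loop gain = 3
Klag = 0.34 / 0.02          # DC gain of (s + 0.34)/(s + 0.02) = 17
ess_uncomp = 1 / (1 + K0)          # the predicted 25% error
ess_comp = 1 / (1 + K0 * Klag)     # with total gain 51
print(ess_uncomp)                  # 0.25
print(round(ess_comp, 4))          # 0.0192, under the 2% requirement
```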
The phase-lag compensator is

Gc(Phase lag) = (s + 0.34)/(s + 0.02)

Figure 70 Example: Bode plot of OL uncompensated system (GM = 8.52 dB at 5 rad/sec; PM = 39.72 deg at 3.01 rad/sec).
This gives us our additional gain of 17, or 24.6 dB (20 log 17), as shown in Figure 71.
We see in Figure 70 that the uncompensated open loop system has a gain
margin equal to 8.5 dB and a phase margin equal to 39.7 degrees (at a crossover
frequency of 3 rad/sec). Since we do not wish to significantly change the transient
response of the system, the margins should remain approximately the same even
after we add the phase-lag compensator.
In Figure 71, the Bode plot for the phase-lag compensator, we see that at low
frequencies approximately 25 dB of gain is added to the system, thereby decreasing
the steady-state error. The phase-lag term does not change our original system at
higher frequencies, as desired. Over a range of frequencies the phase-lag compensator does tend to destabilize the system since it contributes additional negative phase angle. If this occurs at low enough frequencies, it will not significantly change our original gain margin and phase margin.
When we apply the phase-lag controller to the system and generate the new
Bode plot, given in Figure 72, we can again calculate the margins to verify our
design. Recalling that our uncompensated system has a gain margin equal to 8.5
dB and a phase margin equal to 39.7 degrees (at a crossover frequency of 3 rad/sec),
the compensated system now has a gain margin equal to 7.5 dB and a phase margin
equal to 33.4 degrees (at a crossover frequency still at 3 rad/sec). So although they
are slightly different (tending slightly more towards marginal stability), they did not
change significantly and should not noticeably affect the transients. If we overlay all
three Bode plots as shown in Figure 73, it is easier to see where the phase-lag term
affects our system and where it does not.
When the effects from the phase-lag compensator are examined alongside the
original system, we see how it only modifies the original system at the lower fre-
quencies and not at higher frequencies. The phase-lag term raises the magnitude plot
enough at the lower frequencies to meet our steady-state error requirements and
adds some negative phase angle, but only over a range of lower frequencies. It is also
clear in Figure 73 how the uncompensated open loop (OL) system and the phase-lag
compensator add together and result in the final Bode plot.
Perhaps the clearest way of evaluating our design is by simulating the uncom-
pensated and compensated system as shown in Figure 74. The step responses clearly
show the reduction in the steady-state error that results from the addition of the
phase-lag controller. The uncompensated system reaches a steady-state value of 0.75,
Figure 72 Example: Bode plot of phase-lag compensated OL system (GM = 7.47 dB at 4.74 rad/sec; PM = 33.4 deg at 3.04 rad/sec).
as predicted earlier in this example using the FVT. When the phase-lag controller is
added, the response approaches the desired value of 1. The percent overshoot, rise
time, peak time, and settling time were not significantly changed after the compen-
sator was added (this was one of the original goals).
The Matlab commands used to calculate the margins and verify the system
design are included here for reference.
%Program commands to generate Bode plots
% for phase-lag exercise
clear;
EXAMPLE 5.20
Given the system represented by the block diagram in Figure 75 (see also Example
5.19), use Bode plots to design a controller that modifies the dynamic response to
achieve a damping ratio of 0.5 and a natural frequency of 8 rad/sec. Without any
controller (Gc = 1), we can close the loop and determine the current damping ratio and natural frequency.

C(s)/R(s) = 75/(s³ + 9s² + 25s + 100)
Thus, without compensation the system currently has a damping ratio equal to 0.21 and a natural frequency of 3.7 rad/sec, where the goals of the controller (ζ = 0.5 and ωn = 8 rad/sec) are to approximately double the damping ratio and natural frequency of the uncompensated system. These changes will reduce the overshoot and settling time of the system in response to a step input.
To begin with, we first need to relate our closed loop performance requirements to
our open loop Bode plots. Once we know what our desired Bode plot should be, we
can find where the uncompensated OL system is lacking and design the phase-lead
controller to add the required magnitude and phase angle to make up the differences.
Since we want our compensated system to have a damping ratio equal to 0.5, we
know that we need a phase margin approximately equal to 50 degrees (see Figure 48,
Chap. 4), which needs to occur at our crossover frequency. We find our crossover
frequency from the damping ratio and natural frequency requirements. Using Figure
49 from Chapter 4, we find that ωc/ωn = 0.78 at a damping ratio of 0.5. Knowing that
we want a closed loop natural frequency, ωn, equal to 8 rad/sec means that the open
loop crossover frequency should equal approximately 6.3 rad/sec. Now we can define
our requirements in terms of our compensated OL Bode plot measurements:
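The chart readings can be cross-checked with the exact second-order formulas behind Figures 48 and 49 (a Python sketch; the charts give ωc/ωn ≈ 0.78 and PM ≈ 50 degrees, while the closed-form expressions give 0.79 and about 52 degrees):

```python
# Translate the closed-loop specs (zeta = 0.5, wn = 8 rad/s) into open-loop
# Bode targets using the standard second-order relationships.
import math

zeta, wn = 0.5, 8.0

# Exact gain-crossover ratio: wc/wn = sqrt(sqrt(1 + 4*zeta^4) - 2*zeta^2)
ratio = math.sqrt(math.sqrt(1.0 + 4.0 * zeta**4) - 2.0 * zeta**2)
wc = ratio * wn

# Exact phase margin: PM = atan(2*zeta / (wc/wn))
pm = math.degrees(math.atan(2.0 * zeta / ratio))

print(round(ratio, 2), round(wc, 1), round(pm, 1))   # → 0.79 6.3 51.8
```

Either way, the design targets come out as a phase margin near 50 degrees occurring at a crossover frequency near 6.3 rad/sec.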
Our first break occurs at 1/T1, or 1.25 rad/sec, and the controller adds positive phase
angle and magnitude, resulting in Figure 76.
Examining the Bode plot we see that our goal of adding approximately 15 dB
of magnitude and 70 degrees of phase angle at our desired crossover frequency of
6.3 rad/sec is achieved. When this response is added to the uncompensated OL
system's response, the result should be a phase margin of 50 degrees at ωc = 6.3
rad/sec. To verify that this is indeed the case, let us generate the Bode plot for the
compensated system and measure the resulting phase margin. This plot is given in
Figure 77 where we see that when the two responses are added we achieve a phase
margin of 52 degrees at a crossover frequency of 6.7 rad/sec, slightly exceeding our
performance requirements.
It is easier to see how the phase-lead compensator adds to the original system
in Figure 78, where all the terms are plotted individually. We see that at low and high
frequencies there is little change to the original system, and that the phase-lead
Figure 76 Example: Bode plot of phase-lead compensator (phase lead, T1 = 0.8, T2 = 0.02).
compensator adds the desired phase angle and magnitude at the range of frequencies
determined by T1 and T2.
Finally, to verify the design in the time domain, let us examine the step
responses of the uncompensated and compensated systems, given in Figure 79. As
we expected, and hoped for, the compensated system exhibits less overshoot and
shorter response times (rise, peak, and settling times) than the uncompensated sys-
tem in response to unit step inputs. On a related note, however, we see that the
steady-state performance is not improved, as it was with the phase-lag controller. To
address this issue, we look at one final example where the phase-lag and phase-lead
Figure 77 Example: Bode plot of phase-lead compensated system (GM = 17.482 dB [at
20.053 rad/sec], PM = 52.101 deg. [at 6.738 rad/sec]).
terms are combined as a lag-lead controller. Also, it is worth remembering that there
is a price to pay for the increased performance. To implement the phase-lead con-
troller, we may need more expensive physical components (amplifiers and actuators)
capable of generating the response designed for. For example, if we were to plot the
power requirements for each response, we would find that the compensated system
demands a much higher peak power to achieve the faster response. The goal is not
usually to design the fastest controller but one that balances the economic and
engineering constraints.
The Matlab commands used to generate the plots and measure the margins are
given below.
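The margin computation performed by Matlab's margin command can also be sketched numerically: sweep the loop frequency response, locate the gain and phase crossovers, and read off PM and GM. Below is such a sketch (Python; it uses an assumed classic loop L(s) = 1/(s(s+1)(s+2)) purely for illustration, not the system of this example):

```python
# Minimal numeric stand-in for Matlab's margin(): sweep frequency, find the
# gain-crossover and phase-crossover points, report PM (deg) and GM (dB).
import cmath, math

def L(s):
    # Assumed example loop, not the system from this chapter's design.
    return 1.0 / (s * (s + 1.0) * (s + 2.0))

def margins(loop, w_lo=1e-3, w_hi=1e3, n=100000):
    pm = gm_db = None
    prev_mag = prev_ph = None
    for k in range(n + 1):
        w = w_lo * (w_hi / w_lo) ** (k / n)        # log-spaced frequency sweep
        h = loop(1j * w)
        mag = abs(h)
        ph = math.degrees(cmath.phase(h))
        if ph > 0.0:                               # unwrap into (-360, 0]
            ph -= 360.0
        if prev_mag is not None:
            if pm is None and prev_mag >= 1.0 > mag:      # gain crossover
                pm = 180.0 + ph
            if gm_db is None and prev_ph >= -180.0 > ph:  # phase crossover
                gm_db = -20.0 * math.log10(mag)
        prev_mag, prev_ph = mag, ph
    return pm, gm_db

pm, gm_db = margins(L)
print(round(pm, 1), round(gm_db, 1))   # → 53.4 15.6
```

For this assumed loop the sweep recovers the textbook values PM ≈ 53 deg and GM ≈ 15.6 dB; the same routine applied to a compensated loop transfer function reproduces the margin measurements quoted in the figures.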
Figure 79 Example: closed loop step responses of original and phase-lead compensated
systems.
EXAMPLE 5.21
Use Matlab to combine the phase-lag and phase-lead controllers from Examples 5.19
and 5.20, respectively, and verify that both the steady-state and transient require-
ments are met when the controllers are implemented together as a lag-lead. Generate
both the Bode plot (with stability margins) and step responses for the lag-lead
compensated OL system.
Recall that we were able to meet our steady-state error goal of less than a
2% error resulting from a step input using the phase-lag controller and our
transient response goals of a closed loop damping ratio equal to 0.5 and a
natural frequency equal to 8 rad/sec using the phase-lead controller. Combining
them should enable us to meet both requirements simultaneously. The final lag-
lead controller becomes
Lag-lead = ((s + 0.34)/(s + 0.02)) · ((0.8s + 1)/(0.02s + 1))
To verify the response we will use Matlab to generate the compensated OL Bode plot
and the resulting step response plot. The commands used to generate these plots are
as follows:
Figure 80 Example: Bode plot of lag-lead compensated system (GM = 17.073 dB [at 19.602
rad/sec], PM = 49.337 deg. [at 6.7435 rad/sec]).
Figure 81 Example: Bode plot of individual terms (lag, lead, original, and final systems).
controllable systems, all poles can be placed wherever we want them (assuming the
physics are possible). Controllability is determined by finding the rank of the con-
trollability matrix:

M = [B | AB | A²B | ... | Aⁿ⁻¹B]

If the rank equals the system order (the size of A), the system is controllable.
Essentially, the system is controllable when all of the column vectors are linearly
independent and each state is affected by at least one of the inputs. If one or more
of the states are not affected by an input, then no matter what controller we design,
we cannot guarantee that all the states will be well behaved, or controlled. Now we
can review the original state space matrices:
dx/dt = A·x + B·u
where we have an arbitrary input u. If we close the loop and introduce feedback into
our system, our additional input into the system becomes
u = -K·x
A modified system matrix, Ac , is formed which includes the control law. Since our
controller gains now appear in the system matrix, the eigenvalues (poles) of Ac can
be matched to the desired poles by adjusting the gains in the K vector. This method is
shown in the following example and assumes that all states are available for feed-
back.
EXAMPLE 5.22
Using the differential equation describing a unit mass under acceleration, determine
the state space model and design a state feedback controller utilizing a gain matrix to
place the poles at s = -1 ± 1j. Note that the open loop system is marginally stable
and the controller, if designed properly, will stabilize it and result in a system damp-
ing ratio equal to 0.707 and a natural frequency equal to 1.4 rad/sec. The differential
equation for a mass under acceleration, where c is the position of the mass, m, and r
is the force input on the system, is

d²c/dt² = (1/m)·r
First we must develop the state matrices. To do so, let the first state x1 be the
position and the second state x2 the velocity. Then x1 = c and the following matrices
are determined:
ẋ1 = x2 = ċ
ẋ2 = c̈ = (1/m)·r

[ẋ1; ẋ2] = [0 1; 0 0]·[x1; x2] + [0; 1/m]·r
To determine if it is controllable we take the rank of the controllability matrix M.
The first column is the B vector and the second column is the vector resulting from
A·B. Since the resulting M matrix is nonsingular and of rank 2, the system is
controllable.
M = [B | AB] = [0 1/m; 1/m 0];  rank(M) = 2;  controllable
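As a sanity check, the controllability computation above can be reproduced in a few lines (a Python sketch rather than the book's Matlab; m = 1 is assumed):

```python
# Controllability check for the unit-mass example, assuming m = 1: build
# M = [B | AB] by hand and test that its determinant is nonzero.
m = 1.0
A = [[0.0, 1.0],
     [0.0, 0.0]]
B = [0.0, 1.0 / m]

AB = [A[0][0] * B[0] + A[0][1] * B[1],
      A[1][0] * B[0] + A[1][1] * B[1]]

M = [[B[0], AB[0]],
     [B[1], AB[1]]]          # columns are B and A*B

detM = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(M, detM)               # determinant is -1 (nonzero) -> rank 2 -> controllable
```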
Using the controller form developed above:
dx/dt = A·x - B·K·x = (A - B·K)·x
The control law matrix, B·K, is

B·K = [0; 1/m]·[k1 k2] = [0 0; k1/m k2/m]
The characteristic determinant of the new system matrix is

|sI - (A - B·K)| = |[s 0; 0 s] - [0 1; 0 0] + [0 0; k1/m k2/m]| = |[s -1; k1/m s + k2/m]|
The characteristic equation becomes a function of the gains:
s·(s + k2/m) + k1/m = s² + (k2/m)·s + k1/m
To solve for the gains that are required to place our closed loop poles at s = -1 ± 1j,
we can multiply the two poles to get the desired characteristic equation and compare
it with the gain dependent characteristic equation. Our desired characteristic equa-
tion is
(s + 1 - 1j)·(s + 1 + 1j) = s² + 2s + 2
To place the poles using k1 and k2 we simply compare coefficients. Thus, by inspec-
tion, from the s⁰ term, k1/m = 2, and from the s¹ term, k2/m = 2. Therefore,

k1 = 2·m and k2 = 2·m
For example, if we have a unit mass equal to 1, then the desired gain matrix becomes
K = [2 2]
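The gain solution can be double-checked numerically: forming A - B·K with the gains just found and computing its characteristic polynomial should recover s² + 2s + 2 (a Python sketch; m = 1 assumed):

```python
# Verify the hand-solved gains: with K = [2, 2] and m = 1, the closed-loop
# matrix A - B*K should have characteristic polynomial s^2 + 2s + 2,
# i.e. poles at -1 +/- 1j.
m = 1.0
k1, k2 = 2.0 * m, 2.0 * m

# A - B*K for A = [[0,1],[0,0]], B = [0, 1/m], K = [k1, k2]
Acl = [[0.0, 1.0],
       [-k1 / m, -k2 / m]]

# For a 2x2 matrix the characteristic polynomial is s^2 - trace*s + det
trace = Acl[0][0] + Acl[1][1]
det = Acl[0][0] * Acl[1][1] - Acl[0][1] * Acl[1][0]
coeffs = [1.0, -trace, det]
print(coeffs)   # → [1.0, 2.0, 2.0], matching s^2 + 2s + 2
```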
As a reminder, although for controllable systems we can place the poles wherever we
wish, it is always dependent on having those states available as feedback, either as a
measured variable or through the use of estimators. In this example it means having
both position and velocity signals available to the controller.
For higher order systems it is advantageous to use the properties of linear
algebra to solve for the gain matrix. Ackermann's formula allows many computer-
based math programs to solve for the gain matrix even for large systems. Deferring
the proofs to other texts in the references, we can define the gain matrix, K, equal to
K = [0 0 0 ... 0 1]·M⁻¹·Ad
where K is the resulting gain matrix; M is the controllability matrix that, if the system
is controllable, is nonsingular (its inverse exists); and Ad is the matrix
containing the information about our desired poles. It is formed as shown below.
Desired characteristic equation:
sⁿ + a1·sⁿ⁻¹ + ... + a(n-1)·s + an = 0
And using the original A matrix:
Ad = Aⁿ + a1·Aⁿ⁻¹ + ... + a(n-1)·A + an·I
Many computer packages with control system tools have Ackermann's formula
available, and thus we would only have to supply the desired poles and the system
matrices A and B to have the gain matrix calculated for us. More examples using
state space techniques are presented in Chapter 11.
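For this second-order case, Ackermann's formula is small enough to code directly. The sketch below (Python, pure stdlib; the matrix helper names are illustrative, not a toolbox API) applies K = [0 1]·M⁻¹·Ad to the unit-mass example and recovers K = [2 2]:

```python
# A small sketch of Ackermann's formula for a 2x2 system:
# K = [0 1] * inv(M) * Ad, where M = [B | AB] and
# Ad = A^2 + a1*A + a2*I for the desired polynomial s^2 + a1*s + a2.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

def inv2(X):
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

def ackermann(A, B, a1, a2):
    AB = [A[0][0] * B[0] + A[0][1] * B[1],
          A[1][0] * B[0] + A[1][1] * B[1]]
    M = [[B[0], AB[0]], [B[1], AB[1]]]       # controllability matrix [B | AB]
    I = [[1.0, 0.0], [0.0, 1.0]]
    Ad = mat_add(mat_mul(A, A), mat_add(mat_scale(a1, A), mat_scale(a2, I)))
    T = mat_mul(inv2(M), Ad)                 # inv(M) * Ad
    return [T[1][0], T[1][1]]                # premultiplying by [0 1] selects row 2

# Unit mass example: desired poles -1 +/- 1j  ->  s^2 + 2s + 2
A = [[0.0, 1.0], [0.0, 0.0]]
B = [0.0, 1.0]
K = ackermann(A, B, 2.0, 2.0)
print(K)   # → [2.0, 2.0]
```

This agrees with both the hand calculation and the Matlab place result in the next example.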
EXAMPLE 5.23
Use Matlab to verify the state feedback controller designed in Example 5.22. Recall
that the physical system was open loop marginally stable, described by a force input
to a mass without damping or stiffness terms. The resulting state space matrices for
the system are
[ẋ1; ẋ2] = [0 1; 0 0]·[x1; x2] + [0; 1/m]·r
To design the controller we will first define the state matrices in Matlab along
with the desired pole locations. Then controllability and the resulting gain matrix
can be solved using available Matlab commands, shown below.
%Pole placement controller design for State Space Systems
m=1; %Define the mass in the system
A=[0 1;0 0]; %System matrix, A
B=[0;1/m]; %Input matrix, B
C=ctrb(A,B) %Check the controllability of the system
rank(C) %Check the rank of the controllability matrix
det(C) %Determinant must be nonzero for controllability
%Rank = system order for controllable systems
P=[-1+j,-1-j]; %Vector of desired pole location
K=place(A,B,P) %Calculate the gain matrix using the place command
When the commands are executed Matlab returns the A and B system matrices, the
controllability matrix, C, and the rank and determinant of C. Finally, the desired
pole locations are defined and the place command is used to determine the required
gain matrix K.
Output from Matlab gives the controllability matrix as
C =
0 1
1 0
with the rank of C equal to 2 and the determinant of C equal to -1. Either method
may be used to check for controllability, since the rank of a matrix is the size of the
largest square submatrix contained inside the original matrix that has a nonzero
determinant. Since the rank of C equals 2, which is the size of C, the determinant of
the complete matrix is nonzero, as given by the det command, showing that C is
indeed nonsingular.
Finally, after defining our desired pole locations as -1 ± 1j, the place com-
mand returns:
K =
2.0000 2.0000
This corresponds exactly with the gain matrix solved for manually in the previous
example. Since we are working with matrices, the Matlab code given in this example
is easily applied to larger systems. The only terms that must be changed are the A
and B system matrices and the vector P containing our desired pole locations.
To summarize this section on pole placement techniques with state space
matrices, we should recognize that the same effect of placing the poles for this system
could be achieved by using a PD controller algorithm. If we were to add velocity
feedback, as would be required for the state space design, the same design would
result. The reason this section is included is to introduce the topic as it relates to
similar design methods and systems (LTI, single-input single-output) already pre-
sented in this chapter. Where state space techniques become valuable is when
dealing with larger, nonlinear, and multiple-input multiple-output systems.
Chapter 11 introduces several state space design methods for applications such as
these.
5.7 PROBLEMS
5.1 Briefly describe the typical goals for each term in the common PID controller.
What is each term expected to achieve in terms of system performance?
5.2 Describe integral windup and briefly describe a possible solution.
5.3 Briefly describe an advantage and disadvantage of using derivative gains.
5.4 What is the reason for using an approximate derivative?
5.5 List three alternative configurations of PID algorithms and describe why they
are sometimes used.
5.6 What is the assumption made when it is said that the system has dominant
complex conjugate poles?
5.7 To design a system with a damping ratio equal to 0.6 and a natural frequency
equal to 7 rad/sec, where should the dominant poles be located in the s-plane?
5.8 To design a system that reaches 98% of its final value within 4 seconds, what
condition on the s-plane must be met?
5.9 A simple feedback control system is given in Figure 83. As a designer, you have
control over K and p. Select the gain K and pole location p that will give the fastest
possible response while keeping the percentage overshoot less than 5%. Also, the
desired settling time, Ts , should be less than 4 seconds.
5.10 For the system given in the block diagram in Figure 84, determine the K1 and
K2 gains necessary for a system damping ratio ζ = 0.7 and a natural frequency of 4
rad/sec.
5.11 The current system exhibits excessive overshoot. To reduce the overshoot in
response to a step input, we could add velocity feedback, as shown in the block
diagram in Figure 85. Determine a value for K that limits the percent overshoot to
10%.
5.12 Velocity feedback is added to the control loop to add effective damping to the system, as
shown in the block diagram in Figure 86. Determine a value for K that limits the
percent overshoot to 5%.
5.13 Using the plant model transfer function given, design a unity feedback control
system using a proportional controller.
a. Develop the root locus plot for the system.
b. Determine (from the root locus plot and using the appropriate root locus
conditions) the gain K required for a damping ratio of 0.2:

G(s) = 5/((s + 7)(s + 10))
5.14 Using the plant model transfer function, design a unity feedback control sys-
tem using first a proportional controller (K = 2) and then a PI controller
(K = 2, Ti = 1). Draw the block diagrams for both systems and determine the
steady-state error for both systems when subjected to step inputs with a magnitude
of 2:
G(s) = 5/(s + 5)
5.15 Use the system block diagram given in Figure 87 to answer the following
questions.
a. If Gc = K, what is the steady-state error due to a unit step input?
b. If Gc = K(1 + 1/(Ti·s)), what is the steady-state error due to a unit step
input?
c. Using the PI controller in part b, will the system ever go unstable for any
gains K>0 and Ti > 0? Use root locus techniques to justify your answer.
5.16 Given the block diagram model of a physical system in Figure 88:
a. Describe the open loop system response characteristics in a brief sentence
(no feedback or Gc ).
5.23 With the system in Figure 94 and using the frequency domain, design a PI
controller for the following system that exhibits the desired performance character-
istics. Calculate the steady-state error from a ramp input using your controller gains.
System requirements: φm = 52 degrees; ωc = 10 rad/sec.
5.24 Use the open loop transfer function and frequency domain techniques to
design a PD controller where the phase margin is equal to 40 degrees at a crossover
frequency of 10 rad/sec.
G(s) = 24/(s + 4)²
5.25 Using root locus techniques, design a phase-lead controller so that the system
in Figure 95 exhibits the desired performance characteristics. System requirements:
ζ = 0.35, Tsettling = 4 sec.
5.26 Given the system block diagram in Figure 96, design a controller (phase-lag/
lead) to achieve a closed loop damping ratio equal to 0.5 and a natural frequency
equal to 2 rad/sec. Use root locus techniques.
5.27 Using the system shown in the block diagram in Figure 97, design a phase-lag
compensator that does not significantly change the existing pole locations while
causing the steady-state error from a ramp input to be less than or equal to 2%.
5.28 With a third-order plant model and unity feedback control loop as in Figure
98,
a. Design a compensator to leave the existing root locus paths in similar loca-
tions while increasing the steady-state gain in the system by a factor of 25.
b. Where are the closed loop pole locations before and after adding the com-
pensator?
c. Verify the root locus and step response plots (compensated and uncompen-
sated) using Matlab.
5.29 Using the system shown in the block diagram in Figure 99, design a compen-
sator that does the following:
a. Places the closed loop poles at -2 ± 3.5j. Define both the required gain and
compensator pole and/or zero locations.
b. Results in a steady-state error from a ramp input that is less than or equal to
1.5%.
c. Verify your design (root locus and ramp response) using Matlab.
5.30 Given the open loop system transfer function, design a phase-lag controller to
increase the steady-state gain in the system by a factor of 10 while not significantly
decreasing the stability of the system. Include
a. The block diagram of the system with unity feedback.
b. The open loop uncompensated Bode plot, gain margin, and phase margin.
c. The transfer function of the phase-lag compensator.
d. The compensated open loop Bode plot, gain margin, and phase margin.
G(s) = 20/((s + 1)(s + 2)(s + 3))
5.31 Given the open loop system transfer function, design a phase-lead controller to
increase the system phase margin to at least 50 degrees and the gain margin to at
least 10 dB. Include
a. The block diagram of the system with unity feedback.
b. The uncompensated open loop Bode plot, gain margin, and phase margin.
c. The transfer function of the phase-lead compensator.
d. The compensated open loop Bode plot, gain margin, and phase margin.
G(s) = 1/(s² + s + 5)
5.32 Using the system shown in the block diagram in Figure 100, design a compen-
sator that does the following
a. Results in a phase margin of at least 50 degrees and a crossover frequency of
at least 8 rad/sec.
b. Results in a steady-state error from a step input that is less than or equal to
2%.
c. Verify your design (Bode plot and step response) using Matlab.
5.33 Given the differential equation describing a mass and spring system, determine
the state space model and design a state feedback controller utilizing a gain matrix to
place the poles at s = -1 ± 1j.
d²y/dt² + 2·(dy/dt) = r
d³y/dt³ + 4·(d²y/dt²) + 3·(dy/dt) + 2y = r
6.1 OBJECTIVES
Introduce the common components used when constructing analog control
systems.
Learn the characteristics of common control system components.
Develop the knowledge required to implement the controllers designed in
previous chapters.
6.2 INTRODUCTION
Until now little mention has been made about the actual process of, and limitations
in, constructing closed loop control systems. A paper design is just that: a design on
paper, with no physical results. This chapter introduces the basic components that we are likely
to need when we move from design to implementation and use. The fundamental
categories, shown in Figure 1, may be summarized as error detectors, control action
devices, amplifiers, actuators, and transducers.
The goal of this chapter is to introduce some common components in each
category and how they are typically used when constructing control systems. The
amplifiers and actuators tend to be somewhat specific to the type of system being
controlled. There are physical limitations associated with each type, and if the wrong
one is chosen, the system will not perform well no matter how our controller
attempts to control system behavior. Amplifiers, as the name implies, tend to simply
increase the available power level in the system. The actuators are then designed to
use the output of amplifiers to effect some change in the physical system. If our
actuator does not cause the output of the physical system to change (in some pre-
dictable manner), the control system will fail.
The control action devices provide the desired features discussed in previous
chapters. How do we actually implement the proportional-integral-derivative (PID),
or phase-lag, or phase-lead controller that works so well in our modeled system?
Two basic categories include electrical devices and mechanical devices. Electrical
devices will be limited to analog in this chapter and later expanded to include the
rapidly growing digital microprocessor-based controllers. For the most part the
operational amplifier is the analog control device of choice. It is supported by a
multiple array of filters, saturation limits, safety switches, etc. in the typical con-
troller. In fact, you may have to search the circuit board just to find the chips
performing the basic control action; the remaining components add the features,
safety, and flexibility required for satisfactory performance. The controller topolo-
gies presented in previous chapters can all be implemented quite easily using opera-
tional amplifiers.
Mechanical controllers utilize the feedback of the actual physical variable to
close the loop. Example variables include position (feedback linkages), speed (cen-
trifugal governor), and pressure. In these controllers a transducer is generally not
required, and they may operate independently of any electrical power. We are
obviously constrained by physics as to what mechanical controller feedback systems
are possible. There are many mechanical controllers still in use and providing reliable
and economical performance.
As the move is made to electronic controllers, the importance of transducers,
actuators, and amplifiers is increased. While actuators are still required in mechan-
ical feedback systems (i.e., hydraulic valve), transducers and amplifiers generally
include supporting electrical components. To have an electrical component repre-
senting the summing junction in the block diagram, we must be able to provide an
electrical command signal and feedback signal (proportional to the actual controlled
variable). The output of such controllers is very low in power (generally current
limited) and depends on linear amplifiers capable of causing a physical change in
the actuator and ultimately in the physical output of the system. Sometimes posing
an even greater problem is the transducer. The lack of suitable transducers has in
many cases limited the design of the perfect controller. For a system variable to be
controlled, it must be capable of being represented by an appropriate electrical signal
(i.e., a transducer). The goal of this chapter is to provide information on the basic
components found in the four categories (controller, transducer, actuator, and
amplifier).
could not construct a controller ourselves, most of the components are the additional
filters, amplifiers, and safety functions. The simple circuits presented here still per-
form quite well in some conditions. Manufactured controller cards have multiple
features, range switches for gains, robust filtering, and often include the final stage
amplification and thus appear much more complex than what is required for the
basic control actions themselves. The additional features are usually designed for the
specific product and in many cases make it desirable over building our own. Even
then, in many large control systems (i.e., an assembly line) this controller might only
be a subset of the overall system and we would still be responsible for the overall
system performance.
The circuits in Table 1 illustrate the basic OpAmp realizations of the com-
mon controller topologies examined and designed in Chapter 5. Each controller
utilizes the basics: i.e., inverting and noninverting amplification, summing, differ-
ence, integrating, and differentiating circuits to construct the proper transfer func-
tion for each controller. Additional information on OpAmps is given in Section
6.6.1. These circuits can be found in most electrical handbooks along with the
calculations for each circuit. Using capacitors in the OpAmp feedback loop inte-
grates the error and using capacitors in parallel with the error signal differentiates the
error. Picking different combinations of resistors chooses the pole and zero locations.
Potentiometers are commonly used to enable on-line tuning. Remember that final
drivers (i.e., power transistors or pulse-width-modulation [PWM] circuits) are
required when interfacing the circuit output with the physical actuator.
Many issues must be considered when using the circuits given in Table 1,
discussed here in terms of internal and external aspects. Internally, that is, between
the error input and controller output connections, several improvements are com-
monly made when implementing the circuits. In terms of controller design, there are
realistic constraints internal to the circuit as to where poles and zeros may feasibly be
placed. Since real resistors and capacitors are not ideal components, we have limited
values and combinations that work in practice. For example, to place a zero and/or
pole very close to the origin, as common in phase-lag designs, we would need to find
resistors and capacitors with very large values, a challenge for any designer.
Second, to avoid integral windup (collecting too much error and having to
overshoot the desired location to dissipate the error), we might consider adding diodes
to the circuit to clamp the output at acceptable levels. Even if it is beneficial to
accumulate more error using the integral term, we will always have limited output
available from each component before it saturates. Other times it is also common to
include an integral reset switch that discharges the capacitor under certain condi-
tions.
Finally, internal problems arise with building pure derivatives using OpAmps
because of resulting noise and saturation problems. As shown in Figure 2, it is
common to add another resistor in series with the capacitor that, when the equations
are developed, results in adding a first-order lag term to the denominator. The new
controller transfer function becomes
Controller output/Error = (R2·C·s)/(R1·C·s + 1)
The modified transfer function should be familiar as conceptually it was already
presented and discussed in Section 5.4.1 as the approximate derivative transfer function.

Table 1 Basic OpAmp controller circuits and their transfer functions

P:           (R4/R3)·(R2/R1)
PI:          (R4/R3)·(R2/R1)·(R2·C2·s + 1)/(R2·C2·s)
PD:          (R4/R3)·(R2/R1)·(R1·C1·s + 1)
PID:         (R4/R3)·(R2/R1)·(R1·C1·s + 1)(R2·C2·s + 1)/(R2·C2·s)
Lead or lag: (R4/R3)·(R2/R1)·(R1·C2·s + 1)/(R2·C2·s + 1)
Lag-lead:    (R4/R3)·(R2/R1)·[((R1 + R5)·C1·s + 1)/((R2 + R6)·C2·s + 1)]·[(R6·C2·s + 1)/(R5·C1·s + 1)]

Figure 7 of Chapter 5 presented the output of the approximate derivative term
in response to a step input. To add this function to the PD controller from Table 1,
we can modify the circuit and insert the extra resistor as shown in Figure 3.
Now when we develop the modified transfer function for the controller, we can
examine the overall effect of adding the resistor.
PD_APPR = (R4/R3)·(R2/(R1 + R5))·(R1·C·s + 1)/(((R1·R5)/(R1 + R5))·C·s + 1)
We still place our zero from the numerator, as before, but we also have added a pole
in the denominator, as accomplished in the approximate derivative transfer function.
The interesting result comes from comparing the modified PD with a phase-lead
controller. Although how we adjust them is slightly different, we find that both
algorithms place a zero and pole and functionally are the same controller. This
agrees with earlier observations made about the benefits of using phase-lead over
derivative compensation because of better noise attenuation at high frequencies.
Both the modified PD and phase-lead terms would have similar shapes to their
respective Bode plots.
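The noise-attenuation point can be seen numerically: evaluating |H(jω)| for H(s) = R2·C·s/(R1·C·s + 1) shows derivative action below the corner frequency and a bounded gain R2/R1 above it (a Python sketch with illustrative, assumed component values):

```python
# Frequency-response sketch of the approximate derivative term
# H(s) = R2*C*s / (R1*C*s + 1): differentiator gain below the corner
# 1/(R1*C), flat gain R2/R1 above it. Component values are illustrative.
import math

R1, R2, C = 1.0e3, 1.0e4, 1.0e-6    # corner at 1/(R1*C) = 1000 rad/s

def mag(w):
    num = R2 * C * w
    den = math.hypot(1.0, R1 * C * w)
    return num / den

low = mag(10.0)       # well below corner: ~R2*C*w (derivative action)
high = mag(1.0e6)     # well above corner: ~R2/R1 (gain bounded, noise limited)
print(round(low, 3), round(high, 2))   # → 0.1 10.0
```

A pure derivative would instead keep growing with frequency, amplifying high-frequency noise without bound; the added pole caps the gain at R2/R1.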
Moving on to several external aspects, it is important to realize that even if we
now get our internal circuits operating correctly in the lab, there are still external
issues to consider before implementation. Noise, load requirements, physical con-
straints, extra feature requirements, and signal compatibility should all be considered
regarding their influence on the controller. Noise may consist of actual signal noise
from the system transducers, connections, etc., but also may be electromagnetic
noise affecting unshielded portions of the circuit, and so forth. Noisy signals may
electronic controllers. Whereas the resistor in the OpAmp circuits passes current in
the range of microamps, the valves or dampers inserted into the mechanical control
circuit will have an associated energy drop and thus generate some additional heat in
the circuit. If the mechanical control elements are small, this might be an insignif-
icant amount of the total energy controlled by the system, but the advantages and
disadvantages must always be considered.
The good news is that whether the system is electrical or mechanical, the same
effects are present from each gain (P, I, and D). The concepts regarding design and
tuning are the same; only the implementation and actual adjustments tend to differ.
For this reason, and since most new controllers are now electronic, only a
brief introduction is presented here about how mechanical control systems can be
implemented.
EXAMPLE 6.1
Design a mechanical feedback system to control the position of a hydraulic cylinder.
Develop the block diagram, including an equivalent proportional controller, and the
necessary transfer functions using the model given in Figure 4. Make the following
assumptions to simplify the problem and keep it linear:
The mass of the piston and cylinder rod is negligible.
There is a constant supply pressure, Ps .
Flow through the valve is proportional to valve movement, x.
The coefficient Kv accounts for the pressure drop across both orifices (flow
paths in and out of the valve).
Flow equals the area of the piston times the piston velocity.
The fluid is incompressible.
Notation: r = input, y = output.
First, write the equation representing the input command to the valve, x, as a
function of the command input, r, and the system feedback, z. This should look
familiar as our summing junction (with scaling factors) whose output is the error
between the desired and actual positions.
x = (a/(a + b))·r - (b/(a + b))·z
Now, sum the forces on the mass and develop the transfer function between y and z:
ΣF = M·z̈ = (y - z)·K + (ẏ - ż)·B
Take the Laplace transform of the equation:
M·s²·Z(s) = K·Y(s) - K·Z(s) + B·s·Y(s) - B·s·Z(s)
And then write as the transfer function between Z(s) and Y(s):

Z(s)/Y(s) = (B·s + K)/(M·s² + B·s + K)
Finally, relate the piston movement to the linearized valve spool movement where
the flow rate through the valve is assumed to be proportional to the valve position.
This simplification does ignore the pressure-flow relationship that exists in the valve
(see Sec. 12.4). The law of continuity (assuming no leakage in the system) relates the
valve flow to the cylinder velocity.
Q = A·(dy/dt) = Kv·x
dy/dt = (Kv/A)·x
Take the Laplace transform and develop the transfer function between Y(s) and
X(s):

Y(s)/X(s) = (Kv/A)·(1/s)
Now the block diagram can be constructed as shown in Figure 5. Recognize that if
we desired to have Zs as our output, the block diagram could be rearranged to
make this the case and Ys would be an intermediate variable in the forward path.
To change the gain in such a system, we now must physically adjust pivot points,
valve opening sizes, piston areas, etc., to tune the system. In this particular example
the linkage lengths allow us to adjust the proportional gain in the system.
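To see the proportional action of the linkage numerically, the loop can be simulated under the simplifying assumption that the feedback link is rigid (z = y), ignoring the spring-damper stage; this reduces the servo to a first-order lag (a Python sketch with illustrative values, not the figures from this example):

```python
# Closed-loop behavior of the mechanical feedback servo, sketched under the
# simplifying assumption that the feedback link is rigid (z = y), so
# x = (a/(a+b))*r - (b/(a+b))*y and dy/dt = (Kv/A)*x. Values are illustrative.
a, b = 1.0, 1.0        # lever arm lengths (these set the proportional gain)
Kv_over_A = 1.0        # valve flow gain divided by piston area

r = 1.0                # step command
y = 0.0
dt = 1.0e-3
for _ in range(40000):              # simulate 40 s with Euler integration
    x = (a / (a + b)) * r - (b / (a + b)) * y
    y += dt * Kv_over_A * x

print(round(y, 3))     # → 1.0, i.e. y settles at (a/b)*r
```

Changing the ratio a/b rescales both the steady-state gain (a/b) and the loop speed, which is exactly the "tuning by pivot point" behavior described above.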
At this point the design tools presented in previous chapters can be used to
choose the desired gains that lead to the proper linkages, springs, and dampers.
Although the systems in general have a more limited tuning range, they are imper-
vious to electrical noise and interference, making them very attractive in some
industrial settings. They also do not depend on electrical power and provide for
additional mobility and reliability, especially in hazardous or harsh environments.
Among the disadvantages, we see that to change the type of controller, we must
actually change physical components in our system. Also, as mentioned earlier,
whereas with electrical controllers we can add effective damping without increasing
the actual energy losses, in mechanical (hydraulic, or pneumatic, also being consid-
ered mechanical) systems we actually increase the energy dissipation in the system
to increase the system damping.
6.4 TRANSDUCERS
Sensors are key elements for designing a successful control system and, in many
cases, the limiting component. If a sensor is either unavailable or too expensive,
the control of the desired variable becomes very difficult. Sensors, by definition,
produce an output signal relative to some physical phenomenon. The term is derived
from the Latin word sensus, as used to describe our senses or how we receive
information from our physical surroundings. Transducer, a term commonly used
interchangeably with sensor, is generally defined to cover a wider range of activities.
A transducer is used to convert a physical signal into a corresponding signal of
different form, usually to a form readily used by analog controllers. The Latin
word transducere simply means to transfer, or convert, something from one form
to another. Thus a sensor is also a transducer, but not vice versa. Some transducers
might just convert from one signal type to another, never sensing a physical
phenomenon. We will assume the transducers described here include a sensor to
obtain the original output change from a physical phenomenon change. Only trans-
ducers dealing with analog signals are presented here (see Sec. 10.7 for a similar
discussion on digital transducers).
Strain gages
Piezoelectric materials
Capacitive devices
Range The input range describes the acceptable values on the input, i.e.,
0–1000 psi, 0–10 cm, and so forth. The output range determines the
type and level of signal output. If your data acquisition system only
handles 0–5 V, then a transducer whose output is ±10 V would be
more difficult to interface. Current output signals are also becoming
more popular and are discussed more in later sections. Many
transducers and controller cards have user-selectable ranges.
Error These ratings are commonly broken into several categories. Sensitivity,
hysteresis, linearity, and repeatability are all components of error that
will degrade your accuracy. High precision means high repeatability
but not necessarily high accuracy.
Stability The amount of signal drift as a function of time. The drift may be
related to the transducer warming up and thus diminishing once the
temperature is stable.
Dynamics This should be specified as in earlier sections using terms like response
time, time constant, rise time, and/or settling time. These are important
if we are trying to control a relatively fast system where the transducer
might not be fast enough to measure our variable of interest.
They all do credible jobs and are readily available. Strain gage types measure the
strain (deflection) caused by the pressure acting on a plate in the transducer.
Piezoelectric devices use the pressure to deform the piezoelectric material, producing
a small electrical signal. Finally, capacitive devices measure the capacitance change
as the pressure forces two plates closer together. With each type, there are generally
three pressure ratings. The normal range where output is proportional to the input is
where the transducer should be used. Two failure ratings are also relevant. The first
failure point is where the measurement device is internally damaged (diaphragm is
deformed, etc.) and the transducer is no longer useful. The final failure point, and the
most severe, is the burst pressure rating. It is dangerous to exceed this rating.
Pressure transducers are common, and thus all types come in a variety of
voltage and current outputs. Common voltage ranges include 0–10, ±10, 0–1, and
0–5 V. The most common current output range is 4–20 mA and is discussed in
Section 6.6 relative to the noise rejection advantages of using current signals.
Many transducers now have the signal conditioning electronics mounted inside the
transducer for a compact unit that is easy to use and install. An example of this type
is shown in Figure 6. Signal conditioning is required for most transducers (not just
pressure) since the sensor output (i.e., strain gage) is very small and must be ampli-
fied. The sooner that this occurs, the better our signal-to-noise ratio is for the
remainder of the system.
Finally, response times of most pressure transducers are very fast relative to the
types of systems installed on cylinders/motors with large inertia. Response time
may be a concern when attempting to measure higher order dynamics (fluid
dynamics, etc.) in the system. Also, since the accuracy of most transducers is depen-
dent on the transducer range, sometimes it is necessary to use differential pressure
transducers. These transducers can measure small differences between two pressures
even though both pressures are very large. For example, it is hard to measure small
changes in a very large pressure using a transducer designed to output a linear signal
from low pressures all the way up to high pressures. The available output resolution
will be spread over a much larger range.
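A short sketch illustrates why the differential transducer helps; the transducer ranges and the 12-bit acquisition resolution below are assumed example values, not figures from the text:

```python
# Illustrative only: compare the pressure resolution of a 12-bit acquisition
# system reading a wide-range transducer versus a differential transducer.
counts = 2**12                      # 4096 discrete levels

full_range_psi = 5000.0             # 0-5000 psi absolute transducer (assumed)
diff_range_psi = 100.0              # 0-100 psi differential transducer (assumed)

res_full = full_range_psi / counts  # psi per count
res_diff = diff_range_psi / counts

print(round(res_full, 3))   # 1.221 psi per count
print(round(res_diff, 4))   # 0.0244 psi per count
```

The differential transducer resolves pressure differences roughly fifty times finer, even though both signals ride on the same large line pressure.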
6.4.2.2 Common Flow Meters
Flow meters have been the larger problem of the two, and accuracy and response
time are more often questionable. Flow is more difficult to measure for several
reasons. Turbulent and laminar flow regimes, a logarithmic dependence of viscosity
on temperature, and superimposed pressure effects all make the measurement more
difficult. Most precision flow meters are of the turbine type, where the fluid passes
through a turbine whose velocity is measured. An example is shown in Figure 7.
Once the turbulent regime is well established, many meters are fairly linear and
Once the turbulent regime is well established, many meters are fairly linear and
capable of ±0.1% accuracy. To take advantage of the transition regions, higher
order curve fits must be used, sometimes a different curve fit for each region of
operation. In addition, care must be taken when using in reverse since the calibration
factors are commonly quite different. As higher accuracies are required, temperature
and pressure corrections may also be required. For smaller flows and high precision
measurements, some positive displacement meters have been designed for use in
several specialty applications (medical, etc.).
Other flow meters include ultrasonic, laser, and electromagnetic devices; strain
gage devices; and orifice pressure differentials. Ultrasonic flow meters pass high
frequency sound waves through the fluid and measure the transit time; they do
require additional circuitry to process the signals. Laser Doppler devices may be
used to measure flow in transparent channels by measuring the scatter of the
laser beam. Electromagnetic devices place a magnet on two sides of
the channel and measure the voltage on the perpendicular sides. The voltage is
proportional to the rate at which the fields are cut and thus to the velocity of the
fluid. Strain gage devices are used to measure the deflection of a ram inserted into the
flow path to measure flow rate. Their main advantage is potentially better response
times relative to the other methods.
Finally, simply measuring the pressure of each side of a known orifice allows a
flow to be measured, as shown in Figure 8. It does tend to be quite nonlinear outside
of the calibrated ranges but is commonly used to sense flow in mechanical feedback
components such as flow control valves. It creates a design compromise between
resolution and allowable pressure drops.
There are many variations that have been developed for different applications.
Using Bernoulli's equation allows us to solve for the pressure drop as a function of
the flow, since we know the flow into the meter equals the flow out of the meter. In
general, the flow will be proportional to the square root of the pressure drop.
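As a sketch of this square-root relationship, the common orifice equation Q = C_d A √(2ΔP/ρ) can be evaluated; the discharge coefficient, orifice area, and fluid density below are assumed values:

```python
import math

def orifice_flow(delta_p, cd=0.62, area=1e-5, rho=850.0):
    """Flow through a sharp-edged orifice, Q = cd*area*sqrt(2*delta_p/rho).
    cd, area, and rho (a typical hydraulic oil density) are assumed values."""
    return cd * area * math.sqrt(2.0 * delta_p / rho)

# Quadrupling the pressure drop only doubles the flow, showing the
# square-root relationship noted in the text.
q1 = orifice_flow(1.0e5)   # 1 bar drop
q4 = orifice_flow(4.0e5)   # 4 bar drop
print(round(q4 / q1, 6))   # 2.0
```

This is the design compromise mentioned above: doubling resolution at the low end requires accepting four times the pressure drop at the high end.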
can be accomplished using a single OpAmp, capacitor, and resistor but will likely
require additional components to combat noise problems. The simplest analog linear
velocity sensor is accomplished by moving magnets past coils to generate a voltage
signal. Displacement ranges tend to be quite small, on the order of 50 mm.
Magnetostrictive technology also develops velocity signals through on-board
circuitry. This technology has been described above.
Many of the available rotary position sensors have digital output signals.
Optical angle encoders, Hall effect sensors, and photodiodes are examples. With
additional circuitry it is possible to convert some to compatible analog signals.
6.4.4.2 Rotary Velocity Transducers/Sensors
Magnetic pickup: Magnetic pickups are common, cheap, and easy to install.
Any ferrous material that passes by the magnet will produce a voltage in the
magnet's coil. Although the output is a sinusoidal wave varying in frequency
and amplitude, it is easily converted to an analog voltage proportional to
speed using an integrated circuit. The benefit of this signal is that the fre-
quency is directly proportional to shaft speed and fairly immune to noise (at
normal levels). There are several frequency to voltage converters containing
charge pumps and conditioning circuits integrated directly into single-chip
packages. If a direct readout is desired, any frequency meter can be used
without any additional circuitry (unless protective circuits are desired). The
disadvantage is at low speed where the signal gets too small to accurately
measure. Through the appropriate signal conditioning (Schmitt triggers,
etc.), a magnetic pickup may be used to provide a digital signal. These topics
are covered in greater detail in Chapter 10. Also, optical encoders and other
digital devices may be used in conjunction with the frequency to voltage
converter chip but with the same limitations as with the magnetic pickup.
D.C. tachometer/generator: Another common component used to measure
rotary velocity is the DC tachometer. It is simply a direct current generator
whose output voltage is proportional to the shaft speed. An advantage is that
it does not require any additional circuitry or external power to operate; a
simple voltage meter can be calibrated to rotary speed and little or no signal
conditioning is required. A disadvantage, however, is that it does require a
contact surface, for example, a contact wheel or drive belt, to operate, which
will add some additional friction to the system when installed.
Hall effect sensors: Many speed measurement devices use the Hall effect to measure a turbine blade passing
by. Hall effect devices have several advantages and disadvantages when
compared to magnetic pickups. Whereas magnetic pickups have a signal
that becomes very small at lower speeds (past the magnet), Hall effect devices
do not need a minimum speed to generate a signal; the presence of a mag-
netic field causes the output (voltage) to change. This allows them to be used
as proximity sensors and displacement transducers (quite nonlinear), in addition
to speed sensors. The disadvantages are that they require an external power
source, a magnet on the moving piece, and signal conditioning.
Strain gages: Already mentioned relative to their use embedded in other trans-
ducers, these are very common devices used to measure strain and then
calibrated for acceleration, force, pressure, etc. The resistance in the strain
gage changes by small amounts when the material is stretched or com-
pressed. Thus, the output voltage is very small and an amplifier (bridge
circuit) is required for normal use.
Temperature: Several common temperature transducers are bimetallic strips
(toaster ovens), resistance-temperature-detectors (RTDs), thermistors, and
thermocouples. The bimetallic strip simply bends when heated due to dissimilar
material expansion rates and can be used as a safety device or temperature-
dependent switch. RTDs use the fact that most metals will have
an increase in resistance when temperature is increased. They are stable
devices but require signal amplification for normal use. Thermistors have
a resistance that decreases nonlinearly with temperature but are very rugged,
small, and quick to respond to temperature changes. They exhibit larger
resistance changes but at the expense of being quite nonlinear.
Thermocouples are very common and can be chosen according to letter
codes. They produce a small voltage between two different metals based
on the temperature difference.
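The frequency-to-speed relationship of the magnetic pickup described above is easy to sketch; the gear tooth count below is an assumed example value:

```python
def shaft_speed_rpm(pulse_freq_hz, teeth):
    """Convert magnetic-pickup pulse frequency to shaft speed in rpm.
    One pulse per tooth is assumed; teeth is the gear tooth count."""
    revs_per_second = pulse_freq_hz / teeth
    return revs_per_second * 60.0

# Example: a 60-tooth gear conveniently gives 1 Hz per rpm, so any
# frequency meter reads shaft speed directly.
print(shaft_speed_rpm(1800.0, 60))  # 1800.0 rpm
```

The direct proportionality between frequency and speed is exactly why the signal is fairly immune to amplitude noise: only the zero crossings matter.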
6.5 ACTUATORS
Actuators are critical to system performance and must be carefully chosen to avoid
saturation while maintaining response time and limiting cost. Many specific actua-
tors are available in each field, and this section only serves to provide a quick over-
view of the common actuators used in a variety of systems. To emphasize an
underlying theme of this entire text, we must remember that no matter what our
controller output tells our system to do, unless we are physically capable of moving
the system as commanded, all is for naught. The performance limits (physics) of the
system are not going to be changed as a result of adding a controller. For this reason
the importance of choosing the correct amplifiers and actuators cannot be over-
stated.
It should also be noted that most actuators relate the generalized effort and
flow variables, defined in Chapter 2, to the corresponding input and output. For
example, cylinder force is proportional to pressure (the output and input efforts) and
the cylinder velocity is proportional to the volumetric flow rate (the output and input
flows). The same relationship is true for the hydraulic motors. An exception occurs
in solenoids and electric motors where the force is proportional to the current (out-
put effort relates to input flow).
generated is simply equal to pressure multiplied by area. Although the basic equa-
tions for force and flow rates with respect to cylinders are very common, they are
presented here for review. A basic cylinder can be described as in Figure 13. The bore
and rod diameters are often specified, allowing calculation of the respective areas. It
is helpful to define several items:
Dia_bore  Diameter of the cylinder bore
Dia_rod   Diameter of the cylinder rod
A_BE      Area of the bore (cap) end: A_BE = (π/4) Dia_bore²
A_RE      Area of the rod end: A_RE = (π/4)(Dia_bore² − Dia_rod²)
P_BE      Pressure acting on the bore end
P_RE      Pressure acting on the rod end
The flow and force equations are desired in the final modeling since the correspond-
ing valve characteristics correspond to them. Flow, Q, rates in and out of the cylin-
der are given by the following equations:
Q_BE = v A_BE + C_BE (dP_BE/dt)

Q_RE = v A_RE + C_RE (dP_RE/dt)
If compressibility, C, is ignored or only steady-state characteristics examined, the
capacitance terms are zero and the flow rate is simply the area times the velocity, v,
for each side of the cylinder. It is important to note that the flows are not equal with
single-ended cylinders as shown above. For many cases where the compressibility
flows are negligible, the flows are simply related through the inverse ratio of their
respective cross-sectional areas. In pneumatic systems the compressibility cannot be
ignored and constitutes a significant portion of the total flow. If compressibility is
ignored, the ratio is easily found by setting the two velocity terms equal, as they
share the same piston and rod movement.
Q_BE = (A_BE / A_RE) Q_RE
The cylinder force also plays an important role in system performance. In steady-
state operation, only the kinematics of the linkage will change the required force
since the acceleration on the cylinder is assumed to be zero. In many systems the
acceleration phase is quite short as compared to the total length of travel. The
force balance on the piston is

P_BE A_BE − P_RE A_RE − F_L = m (dv/dt) = m (d²x/dt²)
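The cylinder area, flow-ratio, and steady-state force relationships above can be collected in a brief sketch; the bore, rod, and pressure values are assumed for illustration:

```python
import math

# Example numbers (assumed): a 2 in bore, 1 in rod cylinder.
dia_bore = 2.0   # in
dia_rod = 1.0    # in

A_BE = math.pi / 4.0 * dia_bore**2                 # bore (cap) end area
A_RE = math.pi / 4.0 * (dia_bore**2 - dia_rod**2)  # rod end annulus area

# With compressibility neglected, both sides share the piston velocity,
# so the flows are related by the inverse area ratio: Q_BE = (A_BE/A_RE)*Q_RE.
flow_ratio = A_BE / A_RE

# Steady-state force balance (zero acceleration): F_L = P_BE*A_BE - P_RE*A_RE.
P_BE, P_RE = 2000.0, 100.0   # psi, assumed
F_L = P_BE * A_BE - P_RE * A_RE

print(round(flow_ratio, 4))  # 1.3333
print(round(F_L, 1))         # ~6047.6 lb
```

Note that even with a modest rod, the two flows differ by a third, which is why single-ended cylinders must be sized against both port flows.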
material will deform. Very small motions limit their usefulness but they may operate
at frequencies in the MHz range.
A constant is generally necessary depending on the units being used. For example, if
Q is in GPM, DM in in³/rev, N in rpm, T in ft-lb, and P in psi, then

Q_Ideal = D_M N / 231        T_Ideal = D_M P / (24π)
W_In = W_Out

The hydraulic input power is W_In = PQ. The power out is mechanical, W_Out = TN
(in consistent units). Unfortunately, losses do occur, and it is convenient to model the
resulting efficiencies in two basic forms, volumetric and mechanical. While still
resulting efficiencies in two basic forms, volumetric and mechanical. While still
remaining a simple model, the following efficiencies are defined:
Mechanical efficiency: η_m = Torque_Actual / Torque_Theoretical

Volumetric efficiency: η_v = Flow_Theoretical / Flow_Actual
Summarizing, the ideal hydraulic motor acts as if the output speed is propor-
tional to flow and torque proportional to pressure. In reality, more flow is required
and less torque obtained than what the equations predict. This can be simply mod-
eled using mechanical and volumetric efficiencies. In an ideal motor, the power in
equals the power out. In an actual system with losses, the product of the mechanical
and volumetric efficiencies provides the ratio of power out to power in for the motor
since the following is true:
Overall efficiency: η_oa = η_m η_v = (T D_M N)/(D_M P Q) = TN/(PQ) = W_Out/W_In
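A short sketch ties the ideal motor equations and the efficiency definitions together; the displacement, speed, pressure, and efficiency values below are assumed for illustration:

```python
import math

# Assumed example values for a hydraulic motor.
D_M = 2.0        # displacement, in^3/rev
N = 1200.0       # shaft speed, rpm
P = 2000.0       # pressure drop, psi

Q_ideal = D_M * N / 231.0             # gpm (231 in^3 per gallon)
T_ideal = D_M * P / (24.0 * math.pi)  # ft-lb

# Losses: more flow is required and less torque is delivered than ideal.
eta_m = 0.92     # mechanical efficiency (assumed)
eta_v = 0.95     # volumetric efficiency (assumed)

T_actual = eta_m * T_ideal
Q_actual = Q_ideal / eta_v
eta_overall = eta_m * eta_v

print(round(Q_ideal, 3))      # ~10.39 gpm
print(round(T_ideal, 2))      # ~53.05 ft-lb
print(round(eta_overall, 3))  # 0.874
```

As the text notes, the actual motor needs more flow (Q_ideal/η_v) and delivers less torque (η_m T_ideal), so the power ratio out-to-in is the product of the two efficiencies.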
Many systems use hydraulic motors as actuators. Conveyor belt drives and
hydrostatic transmissions are examples found in a variety of applications. Several
advantages are good controllability, reliability, heat removal and lubrication inher-
ent in the fluid, and the ability to stall the systems without damage.
6.5.2.2 DC Motors
DC motors are constructed using both permanent magnets and electromagnets,
which are further classified as series, combination, or shunt wound. In the typical
DC motor, coils of wire are mounted on the armature, which rotates because of the
magnetic field (whether from a permanent magnet or electromagnet). To achieve a
continuous rotation and to minimize torque peaks, multiple poles are used and a
commutator reverses the current in sequential coils as the motor rotates. The down-
side of such an arrangement is that we have sliding contacts prone to fail over time
and the brushes must be replaced at regular intervals. A typical DC motor is shown
in Figure 15. The field poles may be generated using either permanent magnets or
electromagnets. Permanent magnet motors do not require a separate voltage source
for the field voltage, resulting in higher efficiencies and less heat generation.
Motors that use separate windings to generate the magnetic field (electromag-
nets) provide more constant field excitation levels, allowing smoother control of
motor speed. In both cases the torque is generally proportional to current input
(and the magnetic flux, which is commonly assumed to be relatively constant over
the desired operating range). The back electromotive force (emf, or voltage drop) is
proportional to shaft speed.
Torque: T = K_t I

Voltage: V = R I + K_v ω

T is the output torque, I is the input current, V is the voltage drop across the motor,
R is the resistance in the windings, and ω is the output shaft speed. The constants K_t
and K_v are commonly referred to as the torque and voltage constants, respectively.
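Combining the torque and voltage equations gives the steady-state shaft speed for a given supply voltage and load torque; a minimal sketch with assumed motor constants:

```python
# Steady-state DC motor sketch using T = Kt*I and V = R*I + Kv*w.
# All parameter values are assumed for illustration (SI units).
Kt = 0.05   # torque constant, N*m/A
Kv = 0.05   # voltage (back-emf) constant, V*s/rad
R = 1.0     # winding resistance, ohms

def steady_state_speed(V, T_load):
    """Solve V = R*I + Kv*w with I = T_load/Kt for the shaft speed w (rad/s)."""
    I = T_load / Kt          # current needed to hold the load torque
    return (V - R * I) / Kv

# No-load speed (T_load = 0) is simply V/Kv; loading the shaft slows it.
print(round(steady_state_speed(12.0, 0.0), 6))  # 240.0 rad/s
print(round(steady_state_speed(12.0, 0.1), 6))  # 200.0 rad/s
```

The droop from 240 to 200 rad/s under load is the open-loop speed regulation problem that armature-current control (or closed-loop control) addresses.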
When using electromagnets to generate the fields, we have several options on
how to wire the armature and field, commonly termed shunt or series wound electric
motors. To compromise between the properties of both types, we may also use
combinations of shunt and series windings, giving performance curves as shown in
Figure 16.
Shunt motors, which have the armature and field coils connected in parallel,
are more widely used because of their lower no-load speeds and
good speed regulation regardless of load. The primary disadvantage is the lower
startup torque as compared to series wound motors. Series wound motors, although
they have a higher starting torque due to having the armature and field coils in series,
will decrease in speed as the load is increased, although this may be helpful in some
applications. Combination wound motors, which have some pairs of armature and field
coils in series and some in parallel, try to achieve a good startup torque and decent
speed regulation.
The speed of DC motors can be controlled by changing the armature current
(more common) or the field current. Armature current control provides good speed
regulation but requires that the full input power to the armature be controlled. One
method of interfacing with digital components is using pulse-width modulation to
control the current (see Sec. 10.9).
Stepper motors: Stepper motors are covered in more detail in the digital section
(see Sec. 10.8) since special digital signals and circuit components are
required to use them. They are readily available and can be operated open
loop if the forces are small enough to always allow a step to occur. The
output is discrete with the resolution depending on the type chosen.
6.5.2.3 AC Motors
AC motors have a significant advantage over DC motors since the AC current
provides the required field reversal for rotation. This allows them to be cheaper,
more reliable, and maintenance free. The primary disadvantage is that it fixes the
operating speeds unless additional (expensive) circuitry is added. Generally classified
as being one of two major types, single phase or multiphase, they are further classi-
fied within each category as either induction or synchronous types. Induction AC
motors have windings on the rotor but no external connections. When a magnetic
field is produced in the stator, it induces current in the rotor. Synchronous AC
motors use permanent magnets in the rotor, and the rotor follows the magnetic
field produced in the stator.
Single-phase induction motors do not require external connections to the rotor
and the AC current is used to automatically reverse the current in the stator wind-
ings. Because it is single phase it is not always self-starting and a variety of methods
is used to initially begin the rotation. Once started the motor rotates at a velocity
determined by the frequency of the AC signal. There is, however, some slip always
present, and the motor actually spins at speeds 1–3% less than the synchronous
speed.
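The synchronous speed itself follows the standard relationship N_sync = 120 f / p (stated here as background, not derived in the text); a sketch with the slip included:

```python
def synchronous_speed_rpm(line_freq_hz, poles):
    """Synchronous speed of an AC motor: 120 * f / poles (rpm)."""
    return 120.0 * line_freq_hz / poles

def induction_speed_rpm(line_freq_hz, poles, slip):
    """Induction motors run slightly below synchronous speed (slip of 1-3%)."""
    return synchronous_speed_rpm(line_freq_hz, poles) * (1.0 - slip)

# A 4-pole motor on 60 Hz power: 1800 rpm synchronous, ~1764 rpm at 2% slip.
print(synchronous_speed_rpm(60.0, 4))              # 1800.0
print(round(induction_speed_rpm(60.0, 4, 0.02), 1))  # 1764.0
```

This is also why good speed regulation of AC motors requires adjusting the supply frequency rather than the voltage, as discussed below.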
A three-phase induction motor is similar to the single-phase type except that
the stator has three windings, each 120 degrees apart. The motor now becomes self-
starting since there is never a part of the rotation where the net torque becomes zero
(as in the single phase motors). Another advantage is that the torque becomes much
smoother, similar in concept to adding more cylinders in an internal combustion (IC)
engine. A primary problem of induction motors is that they require a vector drive to
operate as servomotors and additional cooling and calibration are required for
satisfactory performance.
Synchronous motors have a very controlled speed but are not self-starting and
need special circuits to start them. Multiple-phase motors are usually chosen over
single phase when the power requirements are high.
To achieve good speed regulation in AC motors, we must now add special
electronics since the motor speed is related to the frequency of the AC signal.
Whereas we control the speed in DC motors by adjusting the voltage (current), in
AC motors we now must adjust the frequency of the input. A common method of
achieving this is to convert the AC power input to DC and then use a DC-to-AC
converter to output a variable frequency AC signal. Very good speed regulation is
achieved with this method (sometimes by closing the loop on speed), and although
the prices continually fall, it still remains more expensive.
6.6 AMPLIFIERS
Two main types of amplifiers are discussed in this section: signal amplifiers and
power amplifiers. Signal amplifiers, such as an OpAmp, are designed to amplify
the signal (i.e., voltage) level but not the power. Power amplifiers, on the other hand,
may or may not increase the signal level but are expected to significantly increase the
power level. Thus in many control systems there are both signal amplifiers and power
amplifiers, sometimes connected in series to accomplish the task of controlling the
system. Each type encounters unique problems, with signal amplifiers generally being
susceptible to noise and power amplifiers to heat generation (and thus required
dissipation). This section introduces several common methods as found in many
typical control systems.
Assuming a high input impedance and no current flow into the OpAmp leads
to a gain for the circuit of

Inverting gain: V_out / V_in = −R₂ / R₁
Remember that the output voltage is limited and only valid input ranges will exhibit
the desired gain. If a noninverting amplifier is required, we can use the circuit given
in Figure 19.
The gain of the noninverting OpAmp circuit is derived as

Noninverting gain: V_out / V_in = (R₁ + R₂) / R₁
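Both gain formulas, together with the output voltage limit mentioned above, can be sketched as follows; the ±12 V saturation level and the resistor values are assumed for illustration:

```python
def inverting_gain_output(v_in, r1, r2, v_sat=12.0):
    """Ideal inverting OpAmp: Vout = -(R2/R1)*Vin, clipped at the supply
    rails (v_sat is an assumed saturation limit)."""
    v_out = -(r2 / r1) * v_in
    return max(-v_sat, min(v_sat, v_out))

def noninverting_gain_output(v_in, r1, r2, v_sat=12.0):
    """Ideal noninverting OpAmp: Vout = (R1 + R2)/R1 * Vin, clipped."""
    v_out = (r1 + r2) / r1 * v_in
    return max(-v_sat, min(v_sat, v_out))

print(inverting_gain_output(1.0, 10e3, 47e3))     # -4.7
print(noninverting_gain_output(1.0, 10e3, 47e3))  # 5.7
print(inverting_gain_output(5.0, 10e3, 47e3))     # -12.0 (saturated)
```

The third line shows the point made in the text: only inputs within the valid range exhibit the desired gain, after which the output simply saturates.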
As mentioned, many additional functions are derived from these basic circuits. An
example list is given here:
Integrating amplifier: replace R₂ with a capacitor
Differentiating amplifier: replace R₁ with a capacitor
Summing amplifier: replace R₁ with a separate resistor in parallel for each
input
In addition to the summing junction (error detector) from Table 1, one other
circuit should be mentioned relative to constructing an analog controller, a com-
parator. A comparator simply has the two inputs each connected to a signal and no
feedback or resistors in the circuit. Such an arrangement has the property of satur-
ating high if the positive input is greater than the negative input or vice versa if
reversed. This allows the OpAmp to be used as a simple on-o controller, similar in
vibration, fairly efficient, small and light, and economical. Their primary disadvan-
tage is found in their sensitivity to temperature. This is the primary reason for using
switching techniques like PWM, since switching significantly minimizes the heat
generation in transistors. When a transistor is used as a linear amplifier, it must be
designed to dissipate much greater levels of internal heat generation. This is primarily
because it is asked to drop much larger voltage and current levels internally as
compared to when operated as a solid-state switching device. The design of practical
linear amplifiers is beyond the scope of this text, and many references more fully
address this topic. The design of solid-state switches is given in Section 10.9.
and still maintain good signal-to-noise ratios. In general, the devices that benefit from
compensation techniques will already have it included when we purchase them for our
use in control systems.
6.6.3.2 Filtering
Filtering is commonly added at various locations in a control system to remove
unwanted frequency components of our signal. Filters may be designed into the
amplifier or added by us as we design the system. Three common types of filters,
shown in Figure 20, are low pass, high pass, and band pass. Low-pass filters are
designed to allow only low frequency components of the signal to pass through. Any
higher frequency components are attenuated. High-pass filters are designed to allow
only high frequency components of the signal through, and band-pass filters only
allow a specified range of frequencies through.
When designing a filter we can apply the terminology learned with Bode plots
to design and describe the performance. From our Bode plot discussions we recall
that a first-order denominator attenuates high frequencies at a rate of −20 dB/dec-
ade. In filter terminology it is common to refer to the number of poles that the filter
has. Thus, if we have a four-pole filter, it will attenuate at a rate of −80 dB/decade.
Even with higher pole filters we do not achieve instantaneous attenuation of signals.
It is interesting to note that the filters illustrated in Figure 20 look similar to several
of the compensators designed earlier.
Designing basic filters using Bode plots is quite simple. For example, if we
connect a resistor in series and a capacitor in parallel with our signal, we have just
added a single-pole passive filter to the system. The analysis is identical to the
techniques learned earlier, where we found a time constant of τ = RC and a transfer
function with one pole in the denominator. The Bode plot then has a low frequency
horizontal asymptote, a break frequency at 1/τ, and a high frequency attenuation
slope of −20 dB/decade. Comparing this to Figure 20, we see that it is a simple low-
pass filter. To achieve sharper cut-off rates we would add more poles to the filter. A
band-pass filter can be designed following the same procedures except that we add a
first-order term (zero) in the numerator followed by two first-order terms in the
denominator defining the cut-off frequencies.
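The single-pole RC low-pass behavior described above can be checked numerically; the R and C values below are assumed examples chosen to put the break frequency near 1 kHz:

```python
import math

def rc_lowpass_gain_db(freq_hz, r_ohms, c_farads):
    """Magnitude response of a single-pole RC low-pass filter in dB:
    |G(jw)| = 1 / sqrt(1 + (w*tau)^2) with tau = R*C."""
    tau = r_ohms * c_farads
    w = 2.0 * math.pi * freq_hz
    return 20.0 * math.log10(1.0 / math.sqrt(1.0 + (w * tau) ** 2))

# Assumed values: R = 1.6 kohm, C = 0.1 uF.
R, C = 1.6e3, 0.1e-6
f_break = 1.0 / (2.0 * math.pi * R * C)
print(round(f_break, 1))                                 # ~994.7 Hz
print(round(rc_lowpass_gain_db(f_break, R, C), 2))       # -3.01 dB at the break
print(round(rc_lowpass_gain_db(10 * f_break, R, C), 1))  # ~-20.0 dB a decade up
```

The −3 dB point at the break frequency and the −20 dB/decade rolloff match the Bode plot asymptotes described in the text.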
Along with the performance descriptions above, we further distinguish filters as
active or passive. The simple RC filter is a passive filter since it requires no external
power and draws its power from the signal. This has the disadvantage of sometimes
changing the actual signal, especially as we implement passive filters with more poles.
To overcome this and add filters with high input impedances (thus drawing no
current from the signal), we use active filters. Active filters require a separate power
source but allow for greater performance. OpAmps are commonly used and provide
the high input impedance desired by filters. Also available are IC chips with active
filters designed on the integrated circuits.
6.6.3.3 Advantages of Current-Driven Signals
Most transducers and many controllers now have options allowing us to use current
signals instead of voltage signals. This section quickly discusses some of the advantages
of using current signals and how to interface them with standard components expect-
ing voltage inputs.
The primary advantage is easily illustrated using the effort and flow modeling
analogies from Chapter 2. Voltage is our electrical effort and current is our electrical
flow variable. Using the analogy of our garden hose, we know that if we have a fixed
flow rate entering at one end, then the same flow will exit at the other end, regardless of
what pressure drops occur along the length of the hose (assuming no leakage or
compressibility). Thus, our flow is not affected by imposed disturbances (effort
noises) acting on the system. In the same way, even if external noise is added to
our electrical current signal as voltage spikes, the current signal remains constant,
even though its voltage level picks up the noise. The advantage becomes even
more pronounced as we require longer wire lengths through noisy electrical loca-
tions. Although it is possible to also induce currents in our signal wires (a magnet
moving by a coil of wire), it is much more likely that the noise is seen as a voltage
change. Thus, our primary concern in using current signals is that our transducer (or
whatever is driving our current signal) is capable of producing a constant, well
regulated current signal in the presence of changing load impedances.
Even if our signal target requires voltage (i.e., an existing A/D converter chip),
we can still take advantage of the noise immunity of current signals by transfer-
ring our signal as a current and converting it to a voltage at the voltage input itself.
This is easily accomplished by dropping the current over a high precision resistor
placed across the voltage input terminals, as shown in Figure 21.
Only two wires are needed to implement the transducer, and if desired a
common ground can be used. Most transducers will give the allowable resistance
(impedance) range where it is able to regulate the current output. Recognize that
with a current signal we no longer will get negative voltage signals and, in fact, do
not reach zero voltage. The voltage measurement range is found by taking the lowest
and highest current outputs (usually 4–20 mA) and multiplying them by the resistance
value.
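A sketch of this current-to-voltage conversion; the 250 Ω sense resistor is a common choice but is an assumed value here:

```python
def loop_voltage(current_ma, r_ohms=250.0):
    """Voltage developed across the sense resistor by a current-loop signal.
    A 250-ohm precision resistor (assumed) maps 4-20 mA onto 1-5 V."""
    return current_ma * 1e-3 * r_ohms

# The live-zero (4 mA) gives 1 V, so a reading of 0 V indicates a fault
# such as a broken wire rather than a valid measurement.
print(loop_voltage(4.0))   # 1.0
print(loop_voltage(20.0))  # 5.0
```

This illustrates the point above: the measured voltage never reaches zero in normal operation, which is itself a useful diagnostic feature of 4–20 mA loops.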
6.7 PROBLEMS
6.1 Briefly describe the role of a typical amplifier in a control system.
6.2 Briefly describe the role of a typical actuator in a control system.
6.3 An actuator must be able to . . . (finish the statement).
6.4 What is the advantage of using an approximate derivative compensator?
6.5 List several possible sources of electrical noise affecting the control system
signals.
6.6 Describe an advantage and disadvantage of mechanical feedback control sys-
tems.
6.7 What is the importance of the transducer in a typical control system?
6.8 List two desirable characteristics of transducers and briey describe each one.
6.9 What are three important pressure ratings for pressure transducers?
6.10 List three types of transducers that may use a strain gage as the sensor.
6.11 Liquid ow meters are analogous to meters in electrical systems.
6.12 What are two types of noncontact linear position transducers?
6.13 Why is a velocity transducer desirable over manipulating the position feedback
signal to obtain a velocity signal?
6.14 List one advantage and disadvantage of the common magnetic pickup relative
to measuring angular velocity.
6.15 Hydraulic cylinders might be the linear actuator of choice when what charac-
teristics in an actuator are needed?
6.16 Locate an electrical solenoid in a product that you currently use and describe
its function in the system.
6.17 Name two methods of controlling the speed in a DC motor.
6.18 Brushless DC motors have what advantages over conventional DC motors?
6.19 All AC motors are self starting. True or False?
6.20 What are the advantages and disadvantages of AC motors as compared with
DC motors?
6.21 What are two major types of ampliers?
6.22 Why is high input impedance desirable for an amplier?
6.23 Name the common electrical component used in electrical power linear ampli-
ers.
6.24 Why is the signal to noise ratio an important consideration during the design of
a control system?
6.25 Passive lters require a separate power source. True or False?
6.26 Under what conditions will current signals perform much better than voltage
signals?
6.27 Construct a speed control system for the system in Figure 22. The system to be controlled is a conveyor belt carrying boxes from the filling station to the taping station. It must run at a constant speed regardless of the number and weight of boxes placed on it. Build the model in block diagram form, where each block represents a simple physical component. Details of each block are not required, just what component you are using (e.g., a block which requires an actuator might use a DC motor or a solenoid) and where. In addition, attach consistent units to each line connecting the blocks. Note: Label each block and line clearly. Include all required components; for example, certain components, such as some transducers, require power supplies. Number each block and define: category (transducer, actuator, or amplifier), type (LVDT, OpAmp, etc.), inputs, outputs, and additional support components (power supplies, converters, etc.).
6.28 Design a closed-loop position control system for the system in Figure 23. The system to be controlled is a single-axis welding robot. A high-force position actuator is required to move the heavy robot arm. Build the model in block diagram form, where each block represents a simple physical component. Attach consistent units to each line. Note: Label each block and line clearly. Include all required components; for example, certain components, such as some transducers, require power supplies. Number each block and define: category (transducer, actuator, or amplifier), type (LVDT, OpAmp, etc.), inputs, outputs, and additional support components (power supplies, converters, etc.).
7
Digital Control Systems
7.1 OBJECTIVES
Introduce the common configurations of digital control systems.
Compare analog and digital controllers.
Review digital control theory and its relationship to continuous systems.
Examine the effects and models of sampling.
Develop the skills to design digital controllers.
7.2 INTRODUCTION
It seems that every few years advances in computer processing power make past gains seem minor. With this cheap processing power available to engineers designing control systems, advanced controller algorithms have grown tremendously. The space shuttle, military jets, and commercial transport planes, along with the common automobile, have all benefited from these advancements. Modern controllers make possible things that were once thought impossible; the modern military fighter jet would be impossible to fly without the help of its onboard electronic control system.
In this chapter we begin to develop the skills necessary for designing and implementing advanced controllers. Since virtually all controllers at this level are implemented using digital microprocessors, we spend some time developing the models, typical configurations, and tools for analysis. When we compare analog and digital controllers, we notice two big differences: digital devices have limited knowledge of the system (data only at each sample time) and limited resolution when measuring changes in analog signals. There are many advantages, however, that tend to tip the scales in favor of digital controllers. An infinite number of designs, advanced (adaptive and learning) controller algorithms, better noise rejection with some digital signals, communication between controllers, and cheaper controllers are now all feasible options. To simulate and design digital controllers, we introduce a new transform, the z transform, allowing us to include the effects of digitizing and sampling our signals.
311
can be handled by the computer and used to control several processes. As Figures 2 and 3 illustrate, computer-based controllers may be configured in centralized or distributed control configurations. Many times combinations of the two are used for control of large complex systems.
In centralized schemes, the digital computer handles all of the inputs, processes all errors, and generates all of the outputs depending on those errors. This has some advantages: only one computer is needed, and because it monitors all signals, it is able to recognize and adapt to coupling between systems. Thus if one system changes, it might change its control algorithm for another system which exhibits coupling with the first. Also, simply reprogramming one computer may change the dynamic characteristics of several systems. The disadvantages include dependence on one processor, limited performance with large systems (since the processor is being used to operate many controllers), and more difficult component controller upgrades.
The distributed controller falls on the opposite end of the spectrum, where every subsystem has its own controller. The advantages are that it is easy to upgrade one specific controller, easier to include redundant systems, and lower-performance processors may be used. It may or may not cost more, depending on the system. Since simple, possibly analog, controllers can be used for some of the individual subsystems, both analog and digital controllers can coexist, and it is sometimes possible to save money. The primary computer is generally responsible for determining optimum operating points for each subsystem and sending the appropriate command signals to each controller. Depending on the stability of the individual controllers, the primary computer may or may not record and use the feedback from individual systems.
For many complex systems the best alternative becomes a combination of centralized and distributed controllers. If a subsystem has a well-developed and cost-effective solution, it is often better to offload that task from the primary controller to free it for others. If a complex or adaptive routine is required, such as dealing with coupling between systems, then the central computer might best serve those systems. In this way our systems can be optimized from both cost and performance perspectives. The acronym commonly used to describe these systems, SCADA, stands for Supervisory Control and Data Acquisition. A PC (or programmable logic controller with a processor) in this case provides the supervisory control of multiple distributed controllers through either half or full duplex modes. Half duplex means the supervisor initiates all requests and changes and the distributed components respond but do not initiate contact. An advantage of these systems is that the link may be through wires, radio waves (even satellite), the Internet, etc.
As we see in the next section, adding these capabilities and the input and output interfaces changes our model, and new techniques must be used. Fortunately, the new techniques can be understood in much the same way as the analog techniques, but with the addition of another variable, the sample time.
the modern microcomputer became available. The early 1970s saw prices up to $100,000 for complete systems. By 1980 the price had fallen to $500, with quantity prices as low as $50 for small processors. The 1990s saw prices fall to only a few dollars per microprocessor. Complete systems are affordable to companies of all sizes and have allowed digital controllers to become the standard. Virtually all automobile, aviation, home appliance, and heavy equipment controllers are microprocessor based and digital. Programmable logic controllers (PLCs), introduced in the 1970s, have become commonplace, and prices continue to fall. More PC-based applications are also found as prices have decreased significantly.
A danger in this is when we simply take existing analog control systems and implement them digitally without understanding the differences. Not only are we more likely to have problems and be unsatisfied with the results, but we also miss out on the new opportunities that are available once we switch to digital. This chapter, and the several following it, attempts to connect what we have learned about analog systems with what we can expect when moving toward digital implementations.
components to achieve our goals. The design methods presented in this text are all
based on this initial assumption.
Now, in addition to proper physical system design, we need to account for the digital components. The problem arises in modeling the interface between the digital and analog systems and its dependence on sample time. As we will see, we might design a wonderful controller based on one sample time, then have someone else also claim processor time, resulting in more processor tasks per sample and longer sampling periods. Consequently, we now have stability problems based on the longer sample time even though our controller itself never changed. An example is found in automobile microprocessors, where new features, not initially planned on, are continually added until the microprocessor can no longer achieve the performance the initial designer was planning on.
This leads to two basic approaches for designing digital controllers: we can convert (or design in the continuous domain and then convert) continuous-based controllers into approximate digital controllers, or we can begin immediately in the discrete domain and design our controller using tools developed for designing digital controllers. Both methods have several strengths and weaknesses, as discussed in the next section. Chapter 9 will present the actual methods and examples of each type.
7.3.2.1 Designing from Continuous System Methods
One common approach to designing digital controllers is to design the controller in the continuous domain, as taught in the previous chapters, and once the controller is designed, use one of several transformations to convert it to a digital controller. For example, a common proportional-integral-derivative (PID) controller can be designed in the continuous domain and approximated using finite differences for the integration and derivative functions. Additionally, the bilinear transformation (Tustin's method) or the impulse invariant method may be used to convert from the s-domain to the z-domain. The z-domain is introduced later as a digital alternative to the s-domain that allows us to use the skills we have already developed. Thus if you are familiar with controller design using classical techniques, with a little work you can design controllers in the digital domain. Finally, a simple technique of matching the poles and zeros of the continuous controller with equivalent poles and zeros in the z-domain may be used, aptly called the pole-zero matching method.
There are several advantages to beginning with (or using) the continuous system: it is in fact how our physical system responds; there are many tools, textbooks, and examples to choose from; and most people feel more comfortable working with real components. As the next section shows, there are also several disadvantages to this method.
7.3.2.2 Direct Design of Digital Controllers
One primary advantage of designing controllers directly in the digital domain is that it allows us to take advantage of features unavailable in the continuous domain. Since digital controllers are not subject to physical constraints (with respect to developing the controller output, as compared to using OpAmps, linkages, etc.), new methods are available with unique performance results. Direct design allows us to actually choose a desired response (it still must be a feasible one that the physics of the system are capable of) and design a controller for that response. There are fewer limitations since the controller is not physically constructed with real components. It
will become clear as we progress, however, that there still are some limitations, most being unique to digital controllers.
Also building upon our continuous system design skills are root locus techniques that have been developed for digital systems. The concepts are the same except that we now work in the z-domain, an offshoot of the s-domain. Root locus techniques in the z-domain may be used to directly design for certain response characteristics, just as learned for continuous systems. A primary difference is that now the sample time also plays a role in the type of response that we achieve. Also, dead-beat design can be used to make the closed-loop transfer function equal to an arbitrarily chosen value, since any algorithm is theoretically possible. Dead-beat designs settle to zero error after a specified number of sample times. Of course, as mentioned previously, physical limitations are still placed on the actuators and system components, and among the costs of aggressive performance specifications are high power requirements and more costly components. Finally, Bode plot methods may be used via the w transform.
sents that one distinct moment in time. We can think of it as being like a switch that is momentarily closed each time a computer clock sends a pulse. This idea is illustrated in Figure 4.
If the sample times are not constant, the vertical lines are no longer spaced evenly and modeling the sampling process becomes very difficult. In addition, it is possible in Figure 4 that the analog signal went below zero and returned to its normal amplitude in between samples; our reconstructed signal is unable to follow this. Remember in the switch analogy that the switch is only momentarily closed and has no knowledge of the signal between samples. Therefore, our reconstructed signal might look very nice but may be completely wrong. This commonly occurs with oscillating signals, where the sampling process creates additional frequencies, an effect called aliasing, as shown in Figure 5.
Several methods are used to minimize the effects of aliasing. To avoid aliasing up front, we can apply the Nyquist frequency criterion. The Nyquist frequency is defined as one half of the sample frequency and represents the maximum frequency that can be sampled before additional lower frequencies are created. Only those frequencies greater than one half of our sample frequency create additional lower-frequency (artificial) components.
That being said, however, higher frequencies are always created, called side bands, as a result of the sampling process. To reduce this problem, it is common to install antialiasing filters on the input to remove any frequencies greater than one half of the sample frequency, because once the signal is sampled it is impossible to separate the aliasing effects from the real data. A problem often occurs in that even though the highest frequency in our system might be 15 Hz, there might be noise signals at 60 Hz. Thus if our sample rate is less than 120 Hz we will experience aliasing effects from the noise components, even though our primary frequency is much lower.
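The frequency at which an aliased component appears can be computed directly: a sinusoid at f, sampled at fs, shows up at |f − fs·round(f/fs)|. A small Python sketch (the 60 Hz noise and the sample rates below are illustrative assumptions, not values from the text):

```python
# Apparent frequency of a sinusoid after sampling.  Components above the
# Nyquist frequency (fs/2) fold down to a lower artificial frequency.
def alias_freq(f_hz, fs_hz):
    """Frequency (Hz) at which a sampled sinusoid of f_hz appears."""
    return abs(f_hz - fs_hz * round(f_hz / fs_hz))

noise_alias = alias_freq(60.0, 100.0)  # 60 Hz noise, 100 Hz sampling: folds to 40 Hz
noise_clean = alias_freq(60.0, 130.0)  # fs/2 = 65 Hz > 60 Hz: no folding occurs
```

This mirrors the 15 Hz / 60 Hz discussion above: even if the signal of interest is slow, noise above half the sample rate folds into the band where the real data lives.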
The beat frequency, defined as the difference between one half the sample frequency and the highest frequency, might be very small (e.g., 0.1 Hz), which leads to aliasing effects that look like DC drift unless longer periods of time are plotted to reveal the very slow superimposed wave from the aliasing. This effect is seen in movies, where an airplane propeller or tire spokes seem to rotate much slower than the actual object does, or even in the reverse direction. Since the movie frames are updated on a regular basis (i.e., a sample time) as the object rotates at different speeds, the effects of aliasing are easily seen.
The best solution is a good low-pass filter with a cut-off frequency above the highest frequency in the signal and below the Nyquist frequency. Many options are available: passive filters range from simple RC circuits (see Sec. 6.6.3.2) to multipole Butterworth or Chebyshev filters. Passive filters inject the least amount of added noise into the signal but will always attenuate the signal to some extent. Active filters, a good all-around solution, can have sharper cut-offs and gains other than one. Several options are available off the shelf, including switched capacitor filters and linear active-filter chips. For best results, place the filters as close to the A/D converter input as possible and use good shielding and wiring practices from the beginning of the design.
EXAMPLE 7.1
Use difference equation approximations to solve for the sampled step response of a first-order system having a time constant, τ. Calculate the result using both a sample time equal to 1/2 of the system time constant and a sample time equal to 1/4 of the system time constant. Compare the approximate results (found at each sample time) to the theoretical results.
First, let us use the difference equation from earlier and substitute in our sample times. This leads to the following two difference equations.
For T = τ/2:  x(k) = (2/3) x(k-1) + (1/3) u(k)

For T = τ/4:  x(k) = (4/5) x(k-1) + (1/5) u(k)

Theoretical:  x(t) = 1 - e^(-t/τ)
We know from earlier discussions that at one time constant, and in response to a unit step input, we will have reached a value of 0.632 (63.2% of final value). To reach the time equal to one time constant, we need two samples for case one (T = τ/2) and four samples for case two (T = τ/4).
Finally, we can calculate the approximate values at each sample time and compare them with the theoretical value, as shown in Table 2. As we would expect, with shorter sample times we more accurately approximate the theoretical response of our system. The same behavior is found in numerical integration routines. As we will see, there are more accurate approximations available that give us better results at the same sample frequency.
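Example 7.1's recursion is easy to check numerically. A Python sketch (the book's own examples use Matlab; function and variable names here are ours):

```python
import math

# Backward-difference recursion for tau*dx/dt + x = u with a unit step:
# x(k) = (tau/(tau+T)) x(k-1) + (T/(tau+T)) u(k), u(k) = 1 for all k >= 0.
def euler_step_response(tau, T, n_samples):
    a = tau / (tau + T)
    b = T / (tau + T)
    x, history = 0.0, []
    for _ in range(n_samples):
        x = a * x + b * 1.0   # u(k) = 1 for the unit step
        history.append(x)
    return history

tau = 1.0
x_half = euler_step_response(tau, tau / 2, 2)      # two samples reach t = tau
x_quarter = euler_step_response(tau, tau / 4, 4)   # four samples reach t = tau
exact = 1.0 - math.exp(-1.0)                       # 0.632 at one time constant
```

As the text notes, the shorter sample time (T = τ/4) lands closer to the theoretical 0.632 at one time constant than T = τ/2 does.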
[Table 2: Sample No. | Actual value | Difference eq. (T = τ/2) | Actual value | Difference eq. (T = τ/4)]
Instead of numerically approximating the derivative, let us now integrate both sides to solve for the output x. The first-order model can be rearranged as

dx/dt = (1/τ) u(t) - (1/τ) x
Take the integral of both sides over one sample period:

∫[(k-1)T, kT] (dx/dt) dt = x(k) - x(k-1) = (1/τ) ∫[(k-1)T, kT] u dt - (1/τ) ∫[(k-1)T, kT] x dt
Now use the trapezoidal rule to approximate each integral with a difference equation:

x(k) - x(k-1) = -(T/2τ) [x(k) + x(k-1)] + (T/2τ) [u(k) + u(k-1)]
Finally, collect terms and simplify to express the solution as a difference equation:

x(k) = [(2τ - T)/(2τ + T)] x(k-1) + [T/(2τ + T)] [u(k) + u(k-1)]
Once again we have an approximate solution to the original first-order differential equation. The next example problem will compare this method with the results from the previous section. With these simple difference equations approximating integrals and derivatives, we can now develop simple digital control algorithms. In Section 9.3 we will see how these simple approximations can be used to implement digital versions of our common PID controller algorithm. This is one of the primary motivations for this discussion.
The trapezoidal rule, as shown here, is sometimes called the bilinear transform or Tustin's rule; it generally results in better accuracy with the same step size but requires more computational time each step. The next section outlines a method using z transforms, similar to Laplace transforms, to model the digital computer and provide us with another powerful tool to develop and program controller algorithms on digital computers.
EXAMPLE 7.2
Use numerical integration approximations to solve for the sampled step response of a first-order system having a time constant, τ. Calculate the result using both a sample time equal to 1/2 of the system time constant and a sample time equal to 1/4 of the system time constant. Compare the approximate results (found at each sample time) to the theoretical results.
First, let us use the difference equation from earlier and substitute in our sample times. This leads to the following two difference equations.
For T = τ/2:  x(k) = (3/5) x(k-1) + (1/5) [u(k) + u(k-1)]

For T = τ/4:  x(k) = (7/9) x(k-1) + (1/9) [u(k) + u(k-1)]

Theoretical:  x(t) = 1 - e^(-t/τ)
We know from earlier discussions that at one time constant, and in response to a unit step input, we will have reached a value of 0.632 (63.2% of final value). To reach the time equal to one time constant, we need two samples for case one (T = τ/2) and four samples for case two (T = τ/4).
Finally, we can calculate the approximate values at each sample time and compare them with the theoretical value, as shown in Table 3. When we compare these results with those in Table 2 there is a surprising difference between the two methods. Although numerical integration with the trapezoidal approximation requires additional computations, the results are much closer to the correct values, even at the lower sampling frequencies. At T = τ/4 we are within 0.002 of the correct answer at one system time constant.
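The trapezoidal recursion of Example 7.2 can be sketched the same way (a Python sketch with names of our choosing; we assume the unit step is already applied at k = 0, so u = 1 throughout, which reproduces the "within 0.002" figure quoted above):

```python
import math

# Trapezoidal (Tustin) recursion for tau*dx/dt + x = u with a unit step:
# x(k) = ((2*tau - T)/(2*tau + T)) x(k-1) + (T/(2*tau + T)) (u(k) + u(k-1)).
def tustin_step_response(tau, T, n_samples):
    a = (2 * tau - T) / (2 * tau + T)
    b = T / (2 * tau + T)
    x, history = 0.0, []
    for _ in range(n_samples):
        x = a * x + b * (1.0 + 1.0)   # u(k) + u(k-1) = 2 for the unit step
        history.append(x)
    return history

tau = 1.0
x_quarter = tustin_step_response(tau, tau / 4, 4)     # four samples to t = tau
err = abs(x_quarter[-1] - (1.0 - math.exp(-1.0)))     # error versus 0.632
```

At T = τ/4 the recursion gives about 0.634 after four samples, within 0.002 of the theoretical 0.632, matching the text's claim with far fewer samples than the backward-difference method would need for the same accuracy.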
While using the methods discussed here to simulate the response of physical systems is useful in and of itself, our primary benefit will be seen once we start designing and implementing digital control algorithms. Digital computers are very capable when it comes to working with difference equations, and by expressing our desired control strategy as a difference equation we can easily implement controllers using microprocessors. With the basic concepts introduced in this section we are already able to approximate the derivative and integral actions of the common PID controller. The proportional term is even simpler. While these methods result in difference equation approximations, we are still lacking the design tools analogous to those we learned in the s-domain. The next section introduces one of these common tools, the z transform, and concludes our discussion of techniques used to obtain digital algorithms by modeling the computer sampling effects and introducing the new transform.
7.4.3 z Transforms
The most common tool used to design and simulate digital systems is the z transform. We will see that although z transforms have many advantages, they are similar to Laplace transforms in that they only represent linear systems. Nonlinear systems must be modeled using difference equations. Remember from the beginning discussion that the computer instantaneously samples at each clock pulse, and thus the continuous signal is converted into a series of thin pulses with an amplitude equal to the amplitude of the analog signal at the time the pulse was sent. For analog inputs this type of model works well, since the computer uses each discrete data point as representing that one instant in time. In terms of analog outputs, however, when this signal is sent from the D/A converter (analog output), it is fairly useless to the physical world as a series of infinitely thin pulses. Before the physical system has time to respond, the pulse is already gone. To remedy this, a hold is applied that maintains the current pulse amplitude value on the output channel until the next sample is sent. This is seen in Figure 6, where the computer can now approximate a continuous analog signal by a continuous series of discrete output levels, as opposed to just pulses.
If we assume that the time to actually acquire the sample (i.e., latch and unlatch the switch) is very small, we can approximate the pulse train of values using the impulse function δ. At the time of the kth sample, the impulse function is infinitely high and thin with an area under the curve of 1. Although this obviously is not what the computer actually does, the method does approximate the outcome and, as we will see, allows us to model the sample and hold process. Using the impulse function allows us to write the sampled pulse train as a summation, with each pulse occurring at the kth sample:

f*(t) = Σ (k=0 to ∞) f(kT) δ(t - kT)

where δ(t - kT) acts only when t = kT and is 0 whenever t ≠ kT.
The benefit of using the impulse function is seen when we take the Laplace transform. Since the Laplace transform of an impulse function, δ, is 1, and the Laplace transform of a delay of length T is e^(-Ts), we can convert the sampled pulse equation into the s-domain, where

F*(s) = Σ (k=0 to ∞) f(kT) e^(-kTs)
Finally, if the signal is an output of our digital device, we must include in our model the fact that we want the signal to remain on the output port until the next commanded signal level is received. We can model this effect as the sum of two step inputs, the second occurring one sample later (and opposite in sign) to cancel out the first step. This is called a hold. If we recognize that with the hold applied each sampled value will remain until the next, we can model each pulse with the hold applied as a single pulse, with the total output being the sequence of pulses with width T as shown in Figure 6. To model the zero-order sample and hold, we can model the sequence of pulses as one step input followed by an equal and negative step input applied one sample time later, as shown in Figure 7.
So we see that a zero-order hold (ZOH) in the s-domain can be used to model the sampling effects of the analog-to-digital converter and that z^(-1) = e^(-Ts) will allow us to map our models from the s-domain into the z-domain (sampled domain) and vice versa. The important concept to remember is that when we take a continuous system model and develop its discrete (sampled) equivalent, we must also add a ZOH to model the effect of the components used to send the sampled data. The ZOH may be written in several forms using the identity z^(-1) = e^(-Ts):

ZOH = (1 - e^(-Ts))/s = (1 - z^(-1)) (1/s) = [(z - 1)/z] (1/s)
It is common to include the 1/s as part of the Laplace-to-z transform and include the additional (1 - z^(-1)) separately. Since the z transform has been derived from the Laplace transform, many of the same analysis procedures apply. For example, we can again develop transfer functions; talk about poles, zeros, and frequency responses; and analyze stability. However, we must remember that z itself contains information about the sampling rate of our system, since it is dependent on the sample period, T.
Since z acts as a shift operator, we can directly relate it to the concept of difference equations discussed earlier. A transfer function in the z-domain is easily converted into a difference equation using the equivalences:

If C(z)/R(z) = z^(-1),
then C(z) = z^(-1) R(z), or c(k) = r(k-1);
or z C(z) = R(z), or c(k+1) = r(k).
In general, C(z) = z^(-n) R(z) corresponds to c(k) = r(k-n).
As with Laplace transforms, tables have been developed that allow us to transform from time to z or s interchangeably. The inverse property also holds and presents us with yet another method of analyzing systems. To demonstrate the concept, let us use the table of z transforms in Appendix B to develop a difference equation for a first-order system and compare its sampled output with that obtained by the numerical approximations of the two previous sections.
First-order system:

τ dx/dt + x = u(t)

Take the Laplace transform and develop the transfer function:

X(s)/U(s) = 1/(τs + 1)
Now we can use the table in Appendix B for the z transform, but remember that we must first add our ZOH model to the continuous system transfer function, since we want the sampled output. The ZOH and the first-order transfer function become (the 1/s term is grouped with the continuous system transfer function):

X(z)/U(z) = [(z - 1)/z] Z{ (1/s) [1/(τs + 1)] }
Let a = 1/τ to match the tables:

X(z)/U(z) = [(z - 1)/z] Z{ a/[s(s + a)] }
Take the z transform:

X(z)/U(z) = [(z - 1)/z] · z(1 - e^(-aT)) / [(z - 1)(z - e^(-aT))]
After simplification, the result becomes a transfer function in the z-domain where the actual coefficients (zero and pole values) are a function of 1/τ, or a, and our sample time, T:

X(z)/U(z) = (1 - e^(-aT)) / (z - e^(-aT))
As we will see in subsequent chapters, our pole locations are still used to evaluate the stability and transient response of our system. The primary difference now, as compared with continuous systems, is that the pole locations also change as a function of our sample time, not just when physical parameters in our system undergo change.
Writing the transfer function in terms of z^(-1):

X(z)/U(z) = (1 - e^(-aT)) z^(-1) / (1 - e^(-aT) z^(-1))

Now cross-multiply:

X(z) (1 - e^(-aT) z^(-1)) = (1 - e^(-aT)) z^(-1) U(z)

Now we can use z^(-1) as a shift operator and write it as a difference equation:

x(k) = e^(-aT) x(k-1) + (1 - e^(-aT)) u(k-1)

As expected, the coefficients of the difference equation are dependent on the sample time.
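The exact-at-the-samples property of this ZOH-derived recursion is easy to verify numerically. A Python sketch (names are ours; we assume the unit step is present from k = 0, so u(k-1) = 1 for k ≥ 1):

```python
import math

# ZOH-based difference equation for the first-order system:
# x(k) = e^(-aT) x(k-1) + (1 - e^(-aT)) u(k-1), with a = 1/tau.
def zoh_step_response(tau, T, n_samples):
    p = math.exp(-T / tau)            # discrete pole location e^(-aT)
    x, history = 0.0, []
    for _ in range(n_samples):
        x = p * x + (1.0 - p) * 1.0   # u(k-1) = 1 for the unit step
        history.append(x)
    return history

tau = 1.0
x_half = zoh_step_response(tau, tau / 2, 2)   # two samples reach t = tau
exact = 1.0 - math.exp(-1.0)                  # continuous response at t = tau
# x_half[-1] equals 'exact' to machine precision: unlike the Euler and
# Tustin approximations, there is no discretization error at the samples.
```

After two samples of T = τ/2, the recursion gives (1 − p)(1 + p) = 1 − e^(−1), exactly the continuous response at t = τ.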
EXAMPLE 7.3
Use the z transform approach to solve for the sampled step response of a first-order system having a time constant, τ. Calculate the result using both a sample time equal to 1/2 of the system time constant and a sample time equal to 1/4 of the system time constant. Compare the approximate results (found at each sample time) to the theoretical results.
First, let us use the difference equation from earlier and substitute in our sample times. This leads to the following two difference equations.

For T = τ/2:  x(k) = e^(-1/2) x(k-1) + (1 - e^(-1/2)) u(k-1)

For T = τ/4:  x(k) = e^(-1/4) x(k-1) + (1 - e^(-1/4)) u(k-1)
We know from earlier discussions that at one time constant, and in response to a unit step input, we will have reached a value of 0.632 (63.2% of final value). To reach the time equal to one time constant we need two samples for case one (T = τ/2) and four samples for case two (T = τ/4).
Finally, we can calculate the approximate values at each sample time and compare them with the theoretical value, as shown in Table 4. When we compare these results with those in Tables 2 and 3 we see the advantage of using z transforms to model the computer hardware. Even at the longer sample times the results match the analytical solution exactly. In fact, when we examine the difference equations, we see that e^(-aT) as used in the difference equations corresponds to e^(-t/τ) in the continuous time domain response equation.
[Table 4: Sample No. | Actual value | Difference eq. (T = τ/2) | Actual value | Difference eq. (T = τ/4)]

EXAMPLE 7.4
Use Matlab to solve for the sampled step response of a first-order system having a time constant, τ. Calculate the result using both a sample time equal to 1/2 of the system time constant and a sample time equal to 1/4 of the system time constant. Plot the approximate results (found at each sample time) against the theoretical results.
Matlab can also be used to quickly generate z-domain transfer functions. A few commands in Matlab will generate our first-order transfer function, define the sample time, and convert the continuous system to a discrete system using the ZOH model. There are several models available in Matlab for approximating the continuous system as a discrete sampled system.
Using the step command allows us to compare the continuous system response and the discrete sampled response for each sample time, as shown in Figures 8 and 9. From the step response plots we can easily see that although both sample times are accurate at the sample instants, the shorter sample time leads to a much better approximation of the continuous system when reconstructed. As we will see in subsequent chapters, there are many additional tools in Matlab that can be used to design and simulate discrete systems.
EXAMPLE 7.5
For the sampled system, C(z), derive the sampled output using
1. Difference equations resulting from a transfer function representation;
2. Difference equations resulting from the output function in the z-domain;
3. Matlab.
We begin with the discrete transfer function of the system, defined as

G(z) = C(z)/R(z) = 1/(z^2 - 0.5z)
Recognizing that a step input in discrete form is given as

Unit step = z/(z - 1)    (1/s in the s-domain)

we can then write C(z) as a transfer function subjected to a step input, R(z):

C(z) = R(z) · [1/(z^2 - 0.5z)] = [z/(z - 1)] · [1/(z^2 - 0.5z)]
Cz 1 z2 z2
2
Rz z 0:5z z2 1 0:5z1
Cz 0:5z1 Cz z2 Rz
Cz 0:5z1 Cz z2 Rz
ck 0:5ck 1 rk 2
Assuming initial conditions equal to zero allows us to calculate the sampled output
(first eight samples) as

k = 0:  c(0) = 0 + 0 = 0
k = 1:  c(1) = 0 + 0 = 0
k = 2:  c(2) = 0 + 1 = 1        (the unit step input now enters through r(k-2))
k = 3:  c(3) = 0.5(1) + 1 = 1.5
k = 4:  c(4) = 0.5(1.5) + 1 = 1.75
k = 5:  c(5) = 0.5(1.75) + 1 = 1.875
k = 6:  c(6) = 0.5(1.875) + 1 = 1.9375
k = ∞:  c(∞) = 2.0
Notice that in this solution, once the step occurs in the difference equation, r(k-2)
always retains the value of the step input, in this case a unit step, equal to 1. This
differs from the second method, shown next.
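The recursion from part 1 is simple to iterate on a computer. A minimal Python sketch (the function name is illustrative, not from the text):

```python
def sampled_output(n):
    """Iterate c(k) = 0.5*c(k-1) + r(k-2) for a unit step, zero initial conditions."""
    c = [0.0] * n
    r = [1.0] * n                              # unit step input, r(k) = 1 for k >= 0
    for k in range(n):
        c_prev = c[k - 1] if k >= 1 else 0.0   # c(k-1), zero before the first sample
        r_old = r[k - 2] if k >= 2 else 0.0    # r(k-2), zero before the step arrives
        c[k] = 0.5 * c_prev + r_old
    return c

print(sampled_output(7))  # [0.0, 0.0, 1.0, 1.5, 1.75, 1.875, 1.9375]
```

The printed values reproduce the hand-calculated table and approach the steady-state value of 2.0.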
Solution 2: Using the Sampled Output That Includes the Input Effects in
the z-Domain Representation
The procedure used to develop the general difference equation remains the same: we
multiply the numerator and denominator by z^-3, cross-multiply, and write the
difference equation. Now the output sequence includes the effects from the step
input.
C(z) = z/[(z - 1)(z^2 - 0.5z)]

C(z) = z^-2/(1 - 1.5z^-1 + 0.5z^-2)
Now we can cross-multiply and develop the difference equation, recognizing that the
general input, r(k), does not appear:

c(k) = 1.5c(k-1) - 0.5c(k-2)

with the z^-2 numerator term producing c(2) = 1. Iterating this recursion gives
0, 0, 1, 1.5, 1.75, 1.875, 1.9375, . . . These are the same values calculated using the
transfer function representation in part 1.
Solution 3: Using Matlab to Simulate the Sampled Output
Many computer packages also enable us to quickly simulate the response of sampled
systems. We can define the discrete transfer function in Matlab and simulate it
directly. Once we can develop the difference equations for a system, it is very easy
to simulate the response on any digital computer. Also, now that we are able to
represent systems using discrete transfer functions in the z-domain, we can apply the
knowledge of stability, poles and zeros, and root locus plots to design systems
implemented digitally. Since we know how the s-domain maps into the z-domain, we
can easily define the desired pole/zero locations in the z-domain, with the key
difference that we can also vary the system response (pole locations) by changing
the sample time.
One method for moving from the s-domain into the sampled z-domain is to
simply map the poles and zeros from the continuous system transfer function into
the equivalent poles and zeros in the z-domain using the mapping z = e^sT. This
method is examined more in Chapter 9, where it is presented as a method for
converting analog controller transfer functions into the discrete representation,
allowing them to be implemented on a microprocessor.
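The mapping itself is a one-liner. A small Python sketch (the sample period T = 0.1 and the pole values are illustrative assumptions):

```python
import cmath

def map_pole(s, T):
    """Map an s-plane pole to the z-plane via z = e^(sT)."""
    return cmath.exp(s * T)

T = 0.1                                  # assumed sample period
z1 = map_pole(-2.0, T)                   # real pole s = -2
z2 = map_pole(complex(-2.0, 4.0), T)     # complex pole s = -2 + j4
# Stable s-plane poles (negative real part) land inside the unit circle;
# both magnitudes here equal e^(-0.2), about 0.819.
print(abs(z1), abs(z2))
```

Note that the magnitude of the mapped pole depends only on the real part of s and on T, which is why changing the sample time moves the poles in the z-plane.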
x(k+1) = K x(k) + L r(k)
c(k) = M x(k) + N r(k)

where x is the vector of states that are sampled; r is the vector of inputs; and c is the
vector (or scalar) of desired outputs. K, L, M, and N are the matrices containing the
coefficients of the difference equations describing the system.
Many of the same linear algebra properties still apply, only now the matrices
contain the coefficients of the difference equations. Instead of the first derivative of
a state variable being written as a function of all states, the next sampled value is
written as a linear function of all previously sampled values. The order of the system,
or size of the square matrix K, depends on the highest power of z in the transfer
function. There are many equivalent state space representations, and different forms
may be used depending on the intended use. One advantage of state space is that we
can use transformations to go from one form to another. For example, if we
diagonalize the system matrix, the values on the diagonal are the system eigenvalues.
There are several ways to get the discrete system matrices, although it generally
involves one of the previous methods used to write the system response as a
difference equation (or set of difference equations). If we already have a discrete
transfer function in the z-domain, we can write the difference equations and develop
the matrices as illustrated in the next example.
EXAMPLE 7.6
Convert the discrete transfer function of a physical system into the equivalent set of
discrete state space matrices.
G(z) = C(z)/R(z) = z/(z^2 - z - 2)

G(z) = C(z)/R(z) = z^-1/(1 - z^-1 - 2z^-2)
c(k) = c(k-1) + 2c(k-2) + r(k-1)
c(k+1) = c(k) + 2c(k-1) + r(k)
Since c(k+1) depends on two previous values, we will need two discrete states so
that each state equation is a function only of values at sample k. Therefore, let's
define our states as

x1(k) = c(k)
x2(k) = c(k-1)
Now substitute in and write the initial difference equation as two equations where
each state at sample k+1 is only a function of states and inputs at sample k:

x1(k+1) = c(k+1) = x1(k) + 2x2(k) + r(k)
x2(k+1) = c(k) = x1(k)
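As a check, the two state equations can be simulated directly and compared against the original difference equation. A Python sketch (NumPy is an assumption; the unit-step input and six samples are illustrative):

```python
import numpy as np

# State matrices read off from the two state equations above
K = np.array([[1.0, 2.0],
              [1.0, 0.0]])
L = np.array([[1.0],
              [0.0]])
M = np.array([[1.0, 0.0]])

# Simulate x(k+1) = K x(k) + L r(k), c(k) = M x(k) for a unit step input
x = np.zeros((2, 1))
c_state = []
for k in range(6):
    c_state.append((M @ x).item())
    x = K @ x + L * 1.0

# Iterate the original difference equation c(k) = c(k-1) + 2c(k-2) + r(k-1)
c_diff = [0.0] * 6
for k in range(6):
    cm1 = c_diff[k - 1] if k >= 1 else 0.0
    cm2 = c_diff[k - 2] if k >= 2 else 0.0
    rm1 = 1.0 if k >= 1 else 0.0
    c_diff[k] = cm1 + 2.0 * cm2 + rm1

print(c_state)  # [0.0, 1.0, 2.0, 5.0, 10.0, 21.0]
assert c_state == c_diff
```

Both formulations generate the same (unstable, as the pole at z = 2 suggests) output sequence.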
To analyze the discrete state space matrices, the required operations are similar
to those used to find the eigenvalues of the continuous system state space matrices.
Now, when we examine the left and right sides of the state equations, we see that
they are related through z^-1 instead of through s, as was the case with continuous
representations. Using this identity, the linear algebra operations remain the same
and we can solve for X(z) as
x(k+1) = K x(k) + L r(k)

X(z) = K X(z) z^-1 + L R(z) z^-1

X(z) - K X(z) z^-1 = L R(z) z^-1

(I - K z^-1) X(z) = L R(z) z^-1

Now we can premultiply both sides by the inverse of (I - K z^-1) and solve for X(z):

X(z) = (I - K z^-1)^-1 L R(z) z^-1
Finally, we can substitute X(z) into the output equation and solve for C(z):

C(z) = M (I - K z^-1)^-1 L R(z) z^-1 + N R(z)

or

C(z) = [M (I - K z^-1)^-1 L z^-1 + N] R(z)
As with the continuous system, we now have the methods to convert from discrete
state space matrices into a discrete transfer function representation. It still involves
taking the inverse of the system matrix and results in the poles and zeros of our
system.
EXAMPLE 7.7
Derive the discrete transfer function for the system represented by the discrete state
space matrices.
" # " #" # " #
x1 k 1 1 2 x1 k 1
xk 1 rk
x2 k 1 1 0 x2 k 0
" #
x1 k
ck 1 0 0 rk
x2 k
The relationship between discrete state space matrices and discrete transfer functions
has already been defined as

C(z) = [M (I - K z^-1)^-1 L z^-1 + N] R(z)
Combine and take the inverse of the inner matrix by using the adjoint and
determinant:

C(z) = [ 1  0 ] ( [ 1  0 ] - [ 1  2 ] z^-1 )^-1 [ 1 ] z^-1 R(z)
                ( [ 0  1 ]   [ 1  0 ]      )    [ 0 ]

The matrix to be inverted is

[ 1 - z^-1   -2z^-1 ]
[ -z^-1       1     ]

with determinant 1 - z^-1 - 2z^-2 and adjoint

[ 1       2z^-1    ]
[ z^-1    1 - z^-1 ]

so that

C(z) = [ 1  0 ] [ 1      2z^-1    ] [ 1 ] z^-1 / (1 - z^-1 - 2z^-2) R(z)
                [ z^-1   1 - z^-1 ] [ 0 ]
And finally, we can perform the final matrix multiplications, resulting in

C(z) = z^-1/(1 - z^-1 - 2z^-2) R(z)
Since we started this example using the discrete state space matrices from
Example 7.6, we can easily verify our solution. Recall that the original transfer
function from Example 7.6 was

G(z) = C(z)/R(z) = z/(z^2 - z - 2)
We see that we get the same result if we just multiply the top and bottom of our
transfer function by z^2 to put it in the same form. Thus the methods developed in
earlier chapters for continuous system state space matrices are very similar to the
methods we use for discrete state space matrices, as shown here.
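The same matrix-to-transfer-function conversion can be checked numerically. SciPy's `ss2tf` is purely algebraic (it evaluates M(zI - K)^-1 L + N), so it applies equally to discrete matrices; this sketch uses the Example 7.6/7.7 matrices:

```python
import numpy as np
from scipy.signal import ss2tf

K = np.array([[1.0, 2.0], [1.0, 0.0]])
L = np.array([[1.0], [0.0]])
M = np.array([[1.0, 0.0]])
N = np.array([[0.0]])

# Coefficients come back in descending powers of z
num, den = ss2tf(K, L, M, N)
print(num)  # numerator ~ z   (coefficients [0, 1, 0])
print(den)  # denominator ~ z^2 - z - 2
```

The recovered numerator z and denominator z^2 - z - 2 match the original transfer function from Example 7.6.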
The process of deriving the discrete state space matrices becomes more difficult
when the input spans several delays (i.e., r(k-1) and r(k-2)) and relies on first
getting difference equations or z transforms. To be more general, we would like
either to take existing differential equations (which may be nonlinear) or, if linear,
to convert directly from the A, B, C, and D matrices already developed. The next two
methods address these cases.
If we begin with the original differential equations describing the system, we
can simply write them as a set of first-order differential equations and approximate
the difference equations using either the backward, forward, or bilinear approximation
difference algorithms. Represent each first-order differential by the difference
equation, solve each one as a function of x(k+1) = f(x(k), x(k-1), . . . , r(k),
r(k-1)), and then, if linear, represent it in matrix form. Examples of three different
difference equation approximations are given in Table 5. The procedure is similar to
those presented in Section 7.4.2 and is just repeated for each state equation that we
have. An advantage of using this method is that nonlinear state equations are very
easy to work with; the only difference is that we cannot write the resulting nonlinear
difference equations in matrix form and use linear algebra techniques to
analyze them (as was done in Example 7.7). Of the three alternatives given in
Table 5, the bilinear transformation provides the best approximation but requires
slightly more work to perform the transformation.
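The three approximations can be compared directly on a first-order system, tau*dx/dt + x = r. A Python sketch (the time constant, sample period, and step input are illustrative assumptions; each recursion follows from substituting the corresponding rule into the differential equation):

```python
import math

tau, T, n = 2.0, 0.2, 30      # assumed time constant, sample period, sample count
r = [1.0] * n                 # unit step input

def backward():
    # tau*(x(k) - x(k-1))/T + x(k) = r(k)
    x = [0.0] * n
    for k in range(1, n):
        x[k] = (tau * x[k - 1] + T * r[k]) / (tau + T)
    return x

def forward():
    # tau*(x(k+1) - x(k))/T + x(k) = r(k)
    x = [0.0] * n
    for k in range(1, n):
        x[k] = x[k - 1] + (T / tau) * (r[k - 1] - x[k - 1])
    return x

def bilinear():
    # s -> (2/T)(1 - z^-1)/(1 + z^-1) applied to 1/(tau*s + 1)
    x = [0.0] * n
    for k in range(1, n):
        x[k] = ((2 * tau - T) * x[k - 1] + T * (r[k] + r[k - 1])) / (2 * tau + T)
    return x

exact = [1.0 - math.exp(-k * T / tau) for k in range(n)]
for name, approx in (("backward", backward()), ("forward", forward()), ("bilinear", bilinear())):
    err = max(abs(a - e) for a, e in zip(approx, exact))
    print(f"{name:9s} max error = {err:.4f}")
```

With these numbers the bilinear recursion tracks the exact step response roughly an order of magnitude more closely than either rectangular rule, consistent with the claim above.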
Finally, although only introduced here, it is possible to approximate the
transformation itself, z = e^sT, through a series expansion, allowing even better
approximations. Since computers can include many terms in the series, this provides good
Table 5 (fragment)  Backward rectangular rule: dx/dt at sample k is approximated by
[x(k) - x(k-1)]/T, equivalent to the substitution s = (z - 1)/(Tz).
results when implemented in programs like Matlab. The assumption used with this
method is that the inputs themselves remain constant during the sample period.
While obviously not the case, unless sample times are large, it does provide a
good approximation. This allows us to represent our discrete system matrix, K, as
defined previously, representing the outputs delayed one sample period relative to
our original system matrix, A, for the continuous system. Then we can include as
many of the series expansion terms as we wish:
K(kT) = e^AT = I + AT + (AT)^2/2! + (AT)^3/3! + . . .

where K is the discrete equivalent of our original system matrix A and T is the
sample period.
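The series can be truncated after a handful of terms and checked against a library matrix exponential. A Python sketch (the A matrix and T are illustrative assumptions; `scipy.linalg.expm` serves as the reference):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-8.0, -4.0]])   # assumed continuous system matrix (illustrative)
T = 0.1                        # sample period

# Partial sums of K = e^(AT) = I + AT + (AT)^2/2! + (AT)^3/3! + ...
K_series = np.eye(2)
term = np.eye(2)
for i in range(1, 10):         # keep terms through (AT)^9/9!
    term = term @ (A * T) / i
    K_series = K_series + term

K_exact = expm(A * T)          # reference computation
print(np.max(np.abs(K_series - K_exact)))
```

With a modest AT, ten terms already agree with the exact matrix exponential to well below 1e-6, which is why computer implementations of this approximation work so well.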
As with the continuous system matrices, A, B, C, and D, we can use the discrete
matrices derived in this section to check controllability, design observers and esti-
mators, etc. When we begin looking at discrete (sampled) MIMO systems later, this
will be the representation of choice.
For second-order systems, use the rise time as the period of time over which 4-10
samples are desired. Remember that these are minimums and, if possible, aim
for more frequent samples. In cases where we have a set of dominant closed
loop poles and thus a dominant natural frequency, we will find that sampling at
less than 10 times the natural frequency will no longer allow equivalence between
the continuous and sampled responses, and they diverge. In these cases direct
design of digital controllers is recommended. If we can sample at frequencies
greater than 20 times the natural frequency, we find that the digital controller
closely approximates the continuous equivalent. Since the system's natural
frequency is close to the bandwidth as measured on frequency response plots, the
same multipliers may be used with system bandwidth measurements. In most cases
where the sampling frequency is greater than 40 times that of the bandwidth or
natural frequency of our physical system, we can directly approximate our
continuous system controller with good results.
One additional advantage should also be mentioned with regard to sampling
frequency. Better disturbance rejection is found with shorter sampling times.
Physically this can be understood as limiting the amount of time a disturbance
input can act on the system before the controller detects it and appropriate action
is taken.
Finally, the real challenge for the designer is determining what the significant
frequencies in our system are. The guidelines above are easily followed but are all
based on the assumption that we know the properties of our physical system. Even if
we sample fast enough to exceed the recommendations relative to our primary
dynamics, it does not necessarily follow that we are sampling fast enough to control
all of our significant dynamics. A significant dynamic characteristic might be much
faster than the dominant system frequency, and yet if it contributes in such a way as
to significantly affect the final response of our system we may have problems.
7.6 PROBLEMS
7.1 List three advantages of a digital controller.
7.2 What are the primary components that must be added to implement a digital
controller?
7.3 List two advantages of using centralized controller configurations.
7.4 List two advantages of using distributed controller configurations.
7.5 List two primary distinctions of digital controllers (relative to analog controllers)
that must be accounted for during the design process.
7.6 Describe one advantage and one disadvantage of using analog controllers as
the basis for the design of digital controllers.
7.7 If our signal contains a frequency component greater than the Nyquist
frequency, what is created in our sampled signal?
7.8 To minimize the effects of aliasing, it is common to use what component in our
design?
7.9 What guideline should we use regarding sample rate if we wish to convert an
existing analog controller into an equivalent digital representation and experience
good results?
7.10 A sampled output, C(z), is given in the z-domain. Use difference equations to
calculate the first five values sampled.
C(z) = 1/(z - 0.1)
7.11 A sampled output, C(z), is given in the z-domain. Use difference equations to
calculate the first 10 values sampled.

C(z) = 0.632z/[(z - 1)(z^2 - 0.736z + 0.368)]
7.12 Use the z transform to derive the difference equation approximation for the
function x(t) = t e^-at. Treat it as a free response (no forcing function) and leave the
coefficients of the difference equation in terms of a and T.
7.13 Use the continuous system transfer function and apply a ZOH, convert into the
z-domain, derive the difference equation, and calculate the first five values
(T = 0.5 sec) in response to a unit step input. Use partial fraction expansion if
necessary.

G(s) = (s + 3)/[s(s + 1)(s + 2)]
7.14 Use the differential equation describing the motion of a mass-spring-damper
system and
a. Derive the continuous system transfer function.
b. Apply a ZOH and derive the discrete system transfer function.
c. Using T = 1 sec, write the difference equations from the discrete transfer
function, and solve for the first eight values when the input is a unit step.

d^2y/dt^2 + 5 dy/dt + 6y = r(t)
7.15 Develop the first five sampled values for a first-order system described as
having a system time constant equal to 2 sec. Assuming a sample time of 0.8 sec,
use the differentiation approximation, numerical integration, and z transforms to
develop a difference equation for each method. Use a table to calculate the first five
sampled values for each difference equation and compare the results. The outputs are
in response to a unit step input.
7.16 Set up and use a spreadsheet to solve problem 7.15.
7.17 Convert the discrete transfer function into the equivalent discrete state space
matrices.

G(z) = z/(z^2 - 2z + 1)
7.18 Use the discrete state space matrices and solve for the equivalent discrete
transfer function.

x(k+1) = [ 1  T ] x(k) + [ T^2/2 ] r(k)
         [ 0  1 ]        [ T     ]

y(k) = [ 1  0 ] x(k)
7.19 Using the difference equation describing the response of a physical system,
develop the equivalent discrete transfer function in the z-domain.

y(k) = 0.5y(k-1) + 0.3r(k-1)
7.20 Using the difference equation describing the response of a physical system,
develop the equivalent discrete transfer function in the z-domain.

y(k) = 0.5y(k-1) + 0.3y(k-2) + 0.2r(k)
8
Digital Control System Performance
8.1 OBJECTIVES
To relate analog control system performance to digital control system
performance.
To demonstrate the effects and locations of digital components.
To examine the effects of disturbances and command inputs on steady-state
errors.
To develop and define system stability in the digital domain.
8.2 INTRODUCTION
This chapter parallels Chapter 4 in defining the performance parameters for control
systems. The difference is that the parameters are examined in this chapter with
respect to digital control systems, not analog, as done earlier. By using the z
transform developed in the previous chapter, many of the same techniques can still be
applied. Block diagram operations are identical once the effects of sampling the
system are included, and the concept of stability on the z-plane has many parallels
to the concept of stability on the s-plane. The measurements of system performance,
since they still deal with the output of the physical system in response to either a
command or disturbance input, remain the same, and we have new definitions for the
final value theorem and initial value theorem for use with transfer functions in the
z-domain. An underlying theme is evident throughout the chapter: in addition to the
parameters that affected steady-state and transient performance in analog systems,
we now have the additional effects of quantization (finite resolution) and sampled
inputs and outputs that also affect the performance.
develop the discrete transfer function that includes the effects of the sample time. To
simplify the procedure, we can take each sampler (on the command and feedback
paths) and, since the samples occur at the same time, move them past the summing
junction, represented by a single sampler. Physically, this results in the same
sampled error because we get the same error whether we sample each signal
separately and then calculate the error or whether we sample the error signal itself.
Using the single sampler and ZOH now allows us to substitute the ZOH model
into the block diagram (s-domain) and, along with the physical system model,
convert from the s-domain to the z-domain as shown in Figure 3. As we see in the block
diagram, and remembering our model of the ZOH, the effects of the sampler are
included in the ZOH since it is dependent on the sample time, T. The result is a single
closed loop transfer function, but in the z-domain and including the effects of our
digital components. Now we can apply similar analyses to determine steady-state
error and transient response.
Remember from the previous chapter that the equations modeled by the
discrete block diagrams are now difference equations as opposed to continuous
functions (differential equations). Figure 4 gives several common discrete block diagram
components and the equations that they represent. They perform the same role as in
our analog systems, only they rely on discrete sets of data, represented as difference
equations.
The reduction of block diagrams in the z-domain is very similar to the
reduction of block diagrams in the s-domain. The primary difference is locating and
modeling the ZOHs in the system. One item must be mentioned since it is not
obvious based on our knowledge of continuous systems. Figure 5 illustrates the
problem when we attempt to take two continuous systems, separately convert
each into sampled systems, and then multiply to obtain the total sampled input-output
relationship. As is clear in the figure, Ga ≠ Gb because an additional sampler
is assumed in Gb even though the output of G1 and the input of G2 are continuous.
In other words, the input of G2 is not sampled since it is based directly on the
continuous output signal of G1.
In general terms then,
When we model the discrete and continuous systems, we must be aware of where the
samplers are in the system and treat them accordingly. If a sampler exists on the input
and output, the z transform applies to all blocks between the two samplers.
We can show how this works by reducing the block diagram given in Figure 6.
Relative to the forward path, only G(s) is between two samplers and needs to have the
z transform applied accordingly. When we consider the complete loop, then G(s) and
H(s) are between two samplers and the transform should take this into account.
When we apply the ZOH to the forward path and total loop as described, we get
the following transfer function in the z-domain:
C(z)/R(z) = D(z)(1 - z^-1) Z{G(s)/s} / (1 + D(z)(1 - z^-1) Z{G(s)H(s)/s})
Once we have the closed loop transfer function we have many options. If we
want the time response of the system, we can write the difference equations from the
discrete transfer function and calculate the output values at each sample time as done
in the previous chapter. Also, as demonstrated in the remainder of this chapter, we
can use the final value theorem (FVT) (in terms of z) to find the steady-state error or
develop a root locus plot to aid in the design of the controller or to predict the
transient response characteristics.
At this point we still need to differentiate between the sampled output, C*(s), and the
continuous output, C(s). If we move the sampler before the feedback path and look
at the sampled output, C(z), then we can collect terms and solve for the output, C(z),
as
C(z) = Z{G2(s)D(s)} / (1 + Gc(z)(1 - z^-1) Z{G1(s)G2(s)/s})
The interesting difference, as compared to analog systems, is that once we add the
disturbance we can no longer reduce the block diagram to a single discrete transfer
function. We cannot transform G2(s) and D(s) independently of each other since
there is not a sampler between them. This limits us in trying to solve for C(z) since
there is a portion of the equation which remains dependent on D(s). If we define the
disturbance input in the s-domain and multiply it with G2(s) before we take the z
transform, we can solve for the sampled system response to a disturbance. This is a
general problem whenever we have an analog input acting directly on some portion
of our system without an intervening sampler.
FVT (z):  y(k→∞) = y_ss = lim(z→1) [(z - 1)/z] Y(z)

IVT (z):  y(0) = y0 = lim(|z|→∞) Y(z)
Now the same procedures learned earlier can be used to determine the steady-state
error from different controllers.
EXAMPLE 8.1
Using the continuous system transfer function, find the initial and final values using
the discrete forms of the IVT and FVT. Assume a unit step input and a sample time
equal to 0.1 seconds.

G(s) = 6/(s^2 + 4s + 8)
The first thing we must do is convert from the continuous domain to the discrete,
sampled, domain. Write G(s), together with the 1/s from the ZOH, in the form of

G(s)(1/s) = [6/(s^2 + 4s + 8)](1/s) = (6/8)(a^2 + b^2)/(s[(s + a)^2 + b^2])

Now we can use the tables in Appendix B where a = 2 and b = 2, and the transform
for the portion inside the brackets is
Z{(a^2 + b^2)/(s[(s + a)^2 + b^2])} = z(Az + B)/[(z - 1)(z^2 - 2z e^-aT cos(bT) + e^-2aT)]

A = 1 - e^-aT cos(bT) - (a/b) e^-aT sin(bT)

B = e^-2aT + (a/b) e^-aT sin(bT) - e^-aT cos(bT)
Recognizing that z/(z - 1) cancels with the portion of the ZOH outside of the
transform, carrying along the 6/8 factor, and substituting in for a, b, and T, results in

G(z) = (0.026z + 0.023)/(z^2 - 1.605z + 0.670)
This is now the discrete transfer function approximation of the continuous system
transfer function. To find the initial and final values, we can apply the discrete forms
of the IVT and FVT. For both cases we need to add the step input since we have only
derived the discrete transfer function, G(z), not the system output Y(z). In discrete
form the step input is simply

Discrete unit step:  Z{1/s} = z/(z - 1)
To get the initial value, multiply G(z) by the step input and let z approach infinity:

y(0) = y0 = lim(|z|→∞) Y(z) = lim(|z|→∞) (0.026z^2 + 0.023z)/[(z - 1)(z^2 - 1.605z + 0.670)] = 0
With the discrete FVT, the step input and the term included with the theorem
cancel, as they did with the continuous form of the theorem. For a unit step input
only, we can thus simply let z approach unity in the discrete transfer function to
solve for the final value of the system.

y(k→∞) = y_ss = lim(z→1) (0.026z + 0.023)/(z^2 - 1.605z + 0.670) = 0.754
We know from the original transfer function in the s-domain that the final value does
approach 6/8, or 0.75. In the discrete form we introduce some round-off errors,
although minor in terms of what we are trying to accomplish in control system
design.
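The two limits are easy to verify numerically. A Python sketch using the G(z) coefficients from the example (evaluating at z = 1 for the FVT, and at a large |z| to approximate the IVT limit):

```python
def G(z):
    # Discrete transfer function derived in the example
    return (0.026 * z + 0.023) / (z**2 - 1.605 * z + 0.670)

# FVT with a unit step: the (z - 1)/z factor cancels the step's z/(z - 1),
# leaving the limit of G(z) as z -> 1
y_ss = G(1.0)
print(round(y_ss, 3))   # 0.754

# IVT: Y(z) = G(z) * z/(z - 1); as |z| -> infinity this goes to zero
y0 = G(1e9) * 1e9 / (1e9 - 1.0)
print(y0 < 1e-6)        # True
```

The computed steady-state value 0.754 matches the hand calculation and is within round-off of the continuous result 0.75.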
EXAMPLE 8.2
Using the continuous system transfer function, find the discrete initial and final
values. Assume a unit step input and a sample time equal to 0.1 seconds. Use
Matlab to perform the conversion and plot the resulting step response to find the
discrete initial and final values.

G(s) = 6/(s^2 + 4s + 8)
Matlab allows us to define the continuous system transfer function and designate the
sample time and desired model of the sampling device, and it then develops the
equivalent discrete transfer function. The commands used in this example are given as

% Program to verify IVT and FVT
% using z-domain transfer function
When these commands are executed, Matlab returns the following discrete transfer
function:

G(z) = (0.0262z + 0.02292)/(z^2 - 1.605z + 0.6703)
which is identical to the one developed manually in the previous example. The resulting
step responses of the continuous and discrete systems are given in Figure 9.
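The same conversion can be reproduced outside Matlab. A Python sketch with SciPy's `cont2discrete` (an assumption here, standing in for Matlab's c2d with the ZOH model):

```python
import numpy as np
from scipy.signal import cont2discrete

num, den = [6.0], [1.0, 4.0, 8.0]          # G(s) = 6/(s^2 + 4s + 8)
numd, dend, dt = cont2discrete((num, den), 0.1, method='zoh')
nd = np.squeeze(numd)

# Numerator ~ 0.0262 z + 0.0229 (a leading zero coefficient may appear),
# denominator ~ z^2 - 1.605 z + 0.6703, matching the hand conversion
print(np.round(nd, 4))
print(np.round(dend, 4))
```

The returned coefficients agree with the manually derived transfer function to the displayed precision.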
In conclusion, with discrete systems the FVT and IVT still apply and can be
used to determine the final and initial values of a system. The procedure learned with
analog systems is used also for digital systems, with two exceptions. First, when we
close the loop we must be careful where we apply the samplers and ZOH effects. The
z transform is only applied between two samplers. Second, when we have inputs
acting directly on the continuous portion of our physical system (i.e., disturbances),
Figure 9 Example: Matlab step responses of continuous and discrete equivalent systems.
we cannot close the loop and solve for the closed form transfer function without
knowing what the disturbance input is, since it is included in the z transform and is
not a sampled input.
The difference equation method is still used often when verifying the final design,
since it is easy to obtain and is easily programmed into computers or calculators to
obtain responses. Spreadsheets can easily be configured to solve for and plot sampled
responses.
For the times when we want to know how the response is affected by different
parameters changing, and we would rather not be required to calculate the difference
equation for each case, we use the root locus plots. Fortunately, as the root locus
plots allowed us to predict system performance and design controllers in the
continuous realm, the same techniques apply for discrete systems. Since the z-domain is
simply a mapping from the s-domain where z = e^sT, we can apply the same rules, but
to the different boundaries determined by the mapping between s and z. In other
words, when we close the loop, the same magnitude and angle conditions are still
required to be met when the root locus plot is developed. This leads us to use the
identical rules as presented for analog systems represented in the s-domain.
To begin our discussion of stability using root locus techniques, let us see where
our original stable region in the s-plane occurs when we apply the transform to get
into the corresponding z-plane. The method is quite simple; since we know what
conditions are required in the s-plane for stability, substitute those values of s into the
transform into the z-domain and see what shape and area the new stability region is
defined by. Our original pole locations in continuous systems were expressed as
having (possibly) a real and an imaginary component, where
s = σ + jω

Then substituting s into z = e^sT results in

z = e^((σ + jω)T) = e^(σT) e^(jωT)
Knowing that the system is stable whenever s has a negative real part, σ < 0,
and is marginally stable when σ = 0, we can determine the corresponding stability
region in the z-plane. If σ = 0, regardless of jωT (oscillating component), then
|z| = e^0 = 1, defining the boundary of the equivalent stability region in the
z-domain. All points that have a constant magnitude of one relative to the origin are
simply those defining a unit circle centered on the origin, thus bounding the stable
region in z. When σ < 0, e^(σT) is always less than 1 and approaches 1 as σ
approaches zero from the left (negative). Therefore it is the area inside the unit circle
that defines a stable system, and the circle itself defines the marginally stable border.
We can determine additional properties by holding all parameters constant except for
one, varying the one in question, and mapping the resulting contour lines. When this
is done, the z-plane stability regions and response characteristics can be found with
respect to lines of constant damping ratio and natural frequency. The contours of
constant natural frequencies and damping ratios are shown in Figure 10.
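The mapping argument can be spot-checked numerically: any s with a negative real part should land inside the unit circle, and any s with a positive real part outside it. A small Python sketch (the sample period and test points are illustrative):

```python
import cmath

T = 0.5   # assumed sample period
# |z| = e^(sigma*T) depends only on the real part of s and on T
for s in (complex(-1.0, 3.0), complex(-0.2, 10.0), complex(0.5, 1.0)):
    z = cmath.exp(s * T)
    print(s, '->', round(abs(z), 4))
```

The first two (stable) points map to magnitudes below 1 regardless of their imaginary parts; the last (unstable) point maps outside the unit circle.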
Any system inside the unit circle will be stable, and the unit circle itself
represents where the damping ratio approaches zero (marginal stability). What is
interesting in the z-plane is the added effect of sample time. By changing the sample time
we actually make the poles move on the z-plane. In fact, as the sample period
becomes too long, the system generally migrates outside of the unit circle, thus
becoming unstable. The natural frequency and damping ratio contour lines are not
as helpful in the z-plane since their shape excludes the option of an easy graphical
analysis unless special grid paper is used. However, most programs like Matlab can
overlay the locus plot with the grid and thus enable the same
controller design techniques learned with continuous system design methods.
Several observations can be made about z-plane locations:
The stability boundary is the unit circle, |z| = 1.
In general, damping decreases from 1 on the positive real axis to 0 on the
unit circle the farther out radially we go.
The location z = 1 corresponds to s = 0 in the s-plane.
Horizontal lines in the s-plane (constant ω_d) map into radial lines in the
z-plane.
Vertical lines in the s-plane (constant decay exponent σ, or 1/τ) map into
circles within the unit circle in the z-plane.
As done with analog systems in Figure 9 of Chapter 3, we can show how responses
differ depending on pole locations in the z-plane, as demonstrated in Figure 11.
1 From the open loop transfer function, G(z)H(z), factor the numerator and
denominator to locate the zeros and poles in the system.
2 Locate the n poles on the z-plane using ×'s. Each loci path begins at a pole; hence
the number of paths is equal to the number of poles, n.
3 Locate the m zeros on the z-plane using o's. Each loci path will end at a zero, if
available; the extra paths are asymptotes and head toward infinity. The number
of asymptotes therefore equals n - m.
4 To meet the angle condition, the asymptotes will have these angles from the positive
real axis:
If one asymptote, the angle = 180 degrees
Two asymptotes, angles = 90 degrees and 270 degrees
Three asymptotes, angles = ±60 degrees and 180 degrees
Four asymptotes, angles = ±45 degrees and ±135 degrees
5 The asymptotes intersect the real axis at the same point. The point, σ, is found by
σ = [(sum of the poles) - (sum of the zeros)]/(number of asymptotes)
6 The loci paths include all portions of the real axis that are to the left of an odd
number of poles and zeros (complex conjugates cancel each other).
7 When two loci approach a common point on the real axis, they split away from or
join the axis at an angle of ±90 degrees. The break-away/break-in points are
found by solving the characteristic equation for K, taking the derivative with respect
to z, and setting dK/dz = 0. The roots of dK/dz = 0 occurring on valid sections of the
real axis are the break points.
8 Departure angles from complex poles or arrival angles to complex zeros can be
found by applying the angle condition to a test point in the vicinity of the root.
9 Locating the point(s) where the root loci path(s) cross the unit circle and applying
the magnitude condition finds the point(s) at which the system becomes unstable.
10 The system gain K can be found by picking the pole locations on the loci path that
correspond to the desired transient response and applying the magnitude
condition to solve for K. When K = 0 the poles start at the open loop poles; as
K → ∞ the poles approach available zeros or asymptotes.
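A numerical companion to these rules is to sweep the gain and watch the closed-loop poles move. A Python sketch (the open-loop transfer function, with a double pole at z = 0.368, and the gain values are illustrative assumptions):

```python
import numpy as np

# Assumed open-loop discrete transfer function num(z)/den(z)
num = np.array([0.264, 0.135])
den = np.array([1.0, -0.736, 0.135])

def closed_loop_poles(K):
    # Characteristic equation: den(z) + K*num(z) = 0
    char = den + K * np.concatenate(([0.0], num))
    return np.roots(char)

for K in (0.1, 1.0, 5.0, 20.0):
    poles = closed_loop_poles(K)
    stable = bool(np.all(np.abs(poles) < 1.0))
    print(f"K = {K:5.1f}  poles = {np.round(poles, 3)}  stable = {stable}")
```

At K = 0 the closed-loop poles sit on the open-loop poles; as K grows they trace the locus, and beyond some gain a pole magnitude exceeds 1 and the sampled system goes unstable, exactly as rules 9 and 10 describe.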
EXAMPLE 8.3
Convert the continuous system transfer function into the discrete equivalent using
the ZOH approximation. Determine the poles and zero when:
a. Sample time, T = 0.1 s
b. Sample time, T = 10 s
Comment on the system stability between the two cases.

G(s) = 4/[s(s + 4)]
To convert from the continuous into the discrete domain we need to apply the ZOH
and take the z transform:
G(z) = [(z - 1)/z] Z{4/[s^2(s + 4)]}

Letting a = 4 and using the z transform from the table:

G(z) = [(z - 1)/z] Z{a/[s^2(s + a)]}
     = [(z - 1)/z] * z[(aT - 1 + e^-aT)z + (1 - e^-aT - aT e^-aT)] / [a(z - 1)^2 (z - e^-aT)]
     = [(aT - 1 + e^-aT)z + (1 - e^-aT - aT e^-aT)] / [a(z - 1)(z - e^-aT)]
EXAMPLE 8.4
Develop the root locus plot for the system given in Figure 12, when the sample time
is T = 0.5 sec. Describe the range of responses that will occur and compare them
with the results obtained when the system is implemented using an analog controller
instead of a digital controller.
To develop the root locus plot, we first need to derive the discrete transfer
function for the system. To do so we apply the ZOH model to the continuous system
and take the z transform of the system. With the ZOH the system becomes

G(z) = [(z - 1)/z] Z{4/[s(s + 2)^2]}

Letting a = 2 and using the z transform from the table:

G(z) = [(z - 1)/z] Z{a^2/[s(s + a)^2]}
G(z) = [(1 - e^-aT - aT e^-aT)z + (e^-2aT - e^-aT + aT e^-aT)] / (z - e^-aT)^2,
which for a = 2 and T = 0.5 gives G(z) = (0.264z + 0.135)/(z^2 - 0.736z + 0.135).
There is also one break-in point that can be found by solving the characteristic
equation for K and taking the derivative with respect to z. The characteristic
equation is found by closing the loop and results in the following
polynomial:
1 + K (0.264z + 0.135) / (z^2 - 0.736z + 0.135) = 0

K = -(z^2 - 0.736z + 0.135) / (0.264z + 0.135)
Taking the derivative with respect to z:
dK/dz = -(0.264z^2 + 0.270z - 0.135) / (0.264z + 0.135)^2
To solve for the break-in point we set the numerator to zero and find the
roots:
z = -1.39 and z = 0.368
One root lies on the valid break-in section of the real axis while, as expected, the
second root coincides with the location of the two real poles and the corre-
sponding break-away point. Thus the break-in point is at z ≈ -1.4.
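The two candidate locations are easy to check numerically. This short Python sketch (our own verification, not part of the text) simply applies the quadratic formula to the numerator of dK/dz:

```python
import math

# numerator of dK/dz set to zero: 0.264 z^2 + 0.270 z - 0.135 = 0
a, b, c = 0.264, 0.270, -0.135
disc = math.sqrt(b * b - 4 * a * c)
roots = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
print(roots)   # one root near 0.368 (the double pole), one near -1.39
```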
Step 8: The angles of departure are 90 degrees when the loci paths leave the
real axis (at the two repeated poles). We can now plot our final root locus
plot as shown in Figure 14.
Step 9: The system becomes unstable when the root loci paths leave the unit
circle. Although one pole does return at higher gains, there is always one
pole (on asymptote) that remains unstable. If we wished to determine at
what gain the system crosses the unit circle, we can either use the magnitude
condition and apply it to the intersection point of the root loci paths and the
unit circle or we could close the loop and solve for the gain K that causes the
magnitude to become greater than 1 (square root of the sum of the real and
imaginary components squared).
Step 10: A similar procedure as that described in step 9 can be used to solve for
the gain that results in the desired response characteristics. As with analog
systems in the s-domain we can use the performance specifications to deter-
mine desired pole locations in the z-plane. The difficulty, however, is that
now the lines of constant damping ratio and natural frequency are nonlinear
and to use a graphical method we must overlay our root locus plot with a
grid showing these lines (see Figure 10). As the next example demonstrates,
this task is much easier to accomplish using a software design tool such as
Matlab.
To conclude this example, let us plot the root locus plot in the s-domain
assuming that we have an analog controller as in earlier chapters. Then we can
compare the differences regarding stability between continuous (analog) and discrete
(sampled) systems. Referring back to our original system described by the block
diagram in Figure 12 and only including our original continuous system (ignoring
the ZOH and sampler), we see that we have a second-order system with two repeated
poles at s = -2 and no zeros. The root locus plot is straightforward with two
asymptotes that intersect the real axis at s = -2, no valid sections of real axis, and the
poles immediately leaving the real axis and traveling along the asymptotes. The
continuous system root locus plot is shown in Figure 15.
Comparing Figure 14 and Figure 15 allows us to see an important distinction
between analog and digital controllers. Whereas the analog only system never goes
completely unstable (crosses into the RHP), the sampled (digital) system now leaves
the unit circle in the z-plane and will become unstable as gain is increased. Adding
the digital ZOH and sampler tends to decrease the stability in our system (it always
adds a lag, as noted earlier) and stable systems in the continuous domain may
become unstable when their inputs and outputs are sampled digitally.
EXAMPLE 8.5
Using Matlab, develop the root locus plot for the system given in Figure 16 when the
sample time is T = 0.5 sec. Tune the system to have a damping ratio equal to 0.7 and
plot the response of the closed loop feedback system when the input is a unit step.
Figure 15 Example: equivalent continuous system root locus plot for comparison (second
order).
G(z) = (0.264z + 0.135) / (z^2 - 0.736z + 0.135)
Figure 17 Example: Matlab discrete root locus plot and grid overlay.
This agrees with our result from the previous example. The corresponding root locus
plot as generated by Matlab is given in Figure 17.
Placing the crosshairs where the root locus plot crosses the line of damping
ratio equal to 0.7 returns our gain K = 0.45. Now we can close the loop with K and
use Matlab to generate the step responses for the continuous and sampled models.
This comparison is given in Figure 18.
From the discrete step response plot we see that we reach a peak value of 0.32
and a steady-state value of 0.31, corresponding to a 4% overshoot. This agrees very
well with the expected percent overshoot from a system with a damping ratio equal
to 0.7.
In conclusion, we see that the stability of systems that are sampled and include
a ZOH is reduced when compared with the continuous equivalent. Not only does the
gain affect our stability (as with analog systems), now the sample time changes the
shape of our root locus plot because it changes the locations of the system poles and
zeros (as easily seen in the different z transforms). Although the guidelines used to
develop root locus plots remain the same, the process of determining the desired pole
locations on the loci paths becomes more difficult on the z-plane due to the nonlinear
lines of constant damping ratio and natural frequency. As we progress, then, we tend
to use computers in increasing roles during the design process. It is important to
remember the fundamental concepts even when using a computer, since most design
decisions can be made and most errors can be caught (even though the computer will
likely think everything is solved) based on what we know the overall shapes should
be. This frees us to use the computer to help us with the laborious details and
calculations.
8.5 PROBLEMS
8.1 When we close the loop on a sampled system, the z transform applies to all
blocks located where?
8.2 To obtain the closed loop transfer function of digitally controlled systems, the
input and output must be _____________.
8.3 Discrete transfer functions may be linear or nonlinear. (T or F)
8.4 Difference equations may be linear or nonlinear. (T or F)
8.5 Given the discrete transfer function, use the difference equation method to
determine the output of the system. Let r(t) be a step input occurring at the first
sample period and calculate the first five sampled system response values. Using the
FVT in z, what is the final steady-state value?
C(z)/R(z) = z / (z^2 - 0.1z - 0.2)
8.6 Given the discrete output in the z-domain, use the difference equation method
to determine the output of the system. Calculate the first five sampled system
response values. Using the FVT in z, what is the final steady-state value?
Y(z) = (z^3 + z^2 + 1) / (z^3 - 1.3z^2 + z)
8.7 Given the discrete output in the z-domain, find the initial and final values using
the IVT and FVT, respectively.
Y(z) = z(z + 1) / ((z - 1)(z^2 + z + 1))
8.8 Given the continuous system transfer function:
a. Using the FVT in s, what is the steady-state output value in response to a
unit step input for the continuous system?
b. Applying a sampler and ZOH with a sample time of 1 s, derive the equivalent
discrete transfer function.
8.10 Given the block diagram in Figure 20, develop the discrete transfer function
and solve for the range of sample times for which the system is stable (see Problem
8.9).
8.11 Using the system transfer function given, develop the sampled system transfer
function and solve for the range of sample times for which the system is stable.
G(s) = 1 / (s(s + 1))
8.12 For the continuous system transfer function given, derive the sampled system
transfer function, the poles and zeros of the sampled system, and briefly describe the
type of response. If required, use partial fraction expansion.
G(s) = 10(s + 1) / (s(s + 4))
G(z) = (z^2 + z) / (z^2 - 0.1z - 0.2)
8.14 Sketch the root locus plot for the system in Figure 21 when the sample time is
0.35 s.
8.15 Sketch the root locus plot for the system in Figure 22 when the sample time is
0.5 s.
8.19 For the system transfer function given, use the computer to
a. Convert the system to a discrete transfer function using the ZOH model and
a sample time of 0.5 sec.
b. Draw the root locus plot.
c. Solve for the gain K required for a closed loop damping ratio equal to 0.7.
d. Close the loop and generate the sampled output in response to a unit step
input.
G(s) = (s + 4) / (s(s + 1)(s + 6))
8.20 For the system transfer function given, use the computer to
a. Convert the system to a discrete transfer function using the ZOH model and
a sample time of 0.25 sec.
b. Draw the root locus plot.
c. Solve for the gain K required for a closed loop damping ratio equal to 0.5.
d. Close the loop and generate the sampled output in response to a unit step
input.
G(s) = (s + 2)(s + 4) / ((s + 3)(s + 6)(s^2 + 3s + 6))
9
Digital Control System Design
9.1 OBJECTIVES
Develop digital algorithms for analog controllers already examined.
Develop tools to convert from continuous to discrete algorithms.
Discuss tuning methods for digital controllers.
Develop methods to design digital controllers directly in the discrete
domain.
9.2 INTRODUCTION
Once we enter the z-domain, the design of digital control algorithms is almost identical to
the design of continuous systems, the obvious difference being the sampling and its
effect on stability. Since digital control algorithms are implemented using micropro-
cessors, the common representation is difference equations. Although these require
some previous values to be stored, they form a simple algorithm to program and use. In
fact, any controller that can be represented as a transfer function in the z-domain is
quite easy to implement as difference equations, the only requirement being that it
does not require future knowledge of our system. When we move to nonlinear and/or
other advanced controllers that are unable to be represented as transfer functions in
z, the design process becomes more difficult. Nonlinear difference equations, how-
ever, are still quite simple to implement in most microprocessors once the design is
completed.
366 Chapter 9
algorithms are not capable of achieving the proper response, we can directly design
digital controller algorithms using the skills from the previous two chapters.
Backward difference: de/dt ≈ (e(k) - e(k - 1)) / T
Forward difference: de/dt ≈ (e(k + 1) - e(k)) / T
Central difference: de/dt ≈ (e(k + 1) - e(k - 1)) / (2T)
Although the central difference method provides the best results, since it averages
over two sample periods, a future error value is needed and so from a programming
perspective it is not that useful. The same is true for the forward difference approx-
imation. In general, then, the backward difference is commonly used to approximate
the derivative term in our PID algorithm.
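The three approximations are easy to compare numerically. In this Python sketch (our own illustration; the signal and sample time are made up), the error is e(t) = t^2, whose true slope at t = 1 s is exactly 2:

```python
T = 0.1                                   # sample time, s (illustrative)
e = lambda k: (k * T) ** 2                # sampled error e(kT) = (kT)^2

backward = lambda k: (e(k) - e(k - 1)) / T
forward  = lambda k: (e(k + 1) - e(k)) / T
central  = lambda k: (e(k + 1) - e(k - 1)) / (2 * T)

k = 10                                    # t = 1.0 s, true slope 2.0
# backward under-estimates, forward over-estimates; central is exact
# for this quadratic, but needs the future sample e(k + 1)
print(backward(k), forward(k), central(k))
```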
The same concepts can be used for approximating the integral, where the
area under the error curve between samples can be given as three alternative difference
equations:
Approximating an integral:
Forward rectangular: T e(k - 1)
Backward rectangular: T e(k)
Trapezoidal: (T/2)(e(k) + e(k - 1))
The trapezoidal rule gives the best approximation since it integrates the area deter-
mined by the width, T, and the average error, not just the current or previous value. To
operate as an integral gain, it must continually sum the error and thus must include a
memory term, as shown in the following trapezoidal approximation. Otherwise, the
error is only that which accumulated during the current sample period.
Finally, realizing that the proportional term is just u(k) = Kp e(k), we can pro-
ceed to write the difference equations for PI and PID algorithms, recognizing that
different approximations will result in slightly different forms. Two forms are given,
termed the position and velocity (or incremental) algorithms. The position algorithm
results in the actual controller output (i.e., command to valve spool position, etc.),
while the velocity algorithm represents the amount to be added to the previous
controller output term. This is seen in that the error is only used to calculate the
amount of change to the output, which is then simply added to the previous output,
u(k - 1). Velocity algorithms have several advantages in that the output will maintain
its position in the case of computer failure and is not as likely to saturate
actuators upon startup. This is an easy way to implement bumpless transfer for
cases where the controller is switched between manual and automatic control. In a
normal position algorithm the controller will continually integrate the error such
that when the system is returned back to automatic control, a large bump occurs.
Bumpless transfer can be implemented in position algorithms by initializing the
controller values with the current system values before switching back to automatic
control. The velocity (incremental) command can also be used to interface with
stepper motors by rounding the desired change in controller output to represent a
number of steps required by the stepper motor. In this role the stepper motor acts as
the u(k - 1) term since it holds its current position until the next signal is given.
Position PI algorithm using trapezoidal rule:
u(k) = Kp e(k) + s(k)
s(k) = s(k - 1) + Ki (T/2)(e(k) + e(k - 1))
When implementing the position algorithm, the integral term, s(k), must be calcu-
lated separately so that it is available for the next sample time as s(k - 1).
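The position PI pair translates almost line for line into code. This is a Python sketch (the gains, sample time, and constant-error input below are made up for illustration; `position_pi` is our own name):

```python
def position_pi(errors, Kp, Ki, T):
    """Position-form PI with trapezoidal integral:
    u(k) = Kp e(k) + s(k),  s(k) = s(k-1) + Ki (T/2)(e(k) + e(k-1))."""
    out, s, e_prev = [], 0.0, 0.0
    for e in errors:
        s += Ki * (T / 2.0) * (e + e_prev)   # integral term kept separately
        out.append(Kp * e + s)
        e_prev = e
    return out

# constant error of 1.0: proportional part stays at Kp, integral ramps
print(position_pi([1.0, 1.0, 1.0], Kp=2.0, Ki=1.0, T=0.1))
```

Keeping s(k) as its own variable is exactly the bookkeeping the text describes, and it is also the natural place to clamp the integral against windup later.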
To derive the velocity form of the PID algorithm we take u(k), decrement k by
1 in each term, resulting in an expression for u(k - 1), and subtract the two
resulting algorithms. After simplifying the result of the subtraction we can write
the velocity algorithm.
Velocity PI algorithm using trapezoidal rule:
u(k) = u(k - 1) + Kp (e(k) - e(k - 1)) + Ki (T/2)(e(k) + e(k - 1))
Or we may collect and group terms according to the error term and express the
velocity PI algorithm in a way that is more amenable to programming:
u(k) = u(k - 1) + (Kp + Ki T/2) e(k) + (Ki T/2 - Kp) e(k - 1)
Finally, the complete PID algorithm can be derived using a trapezoidal
approximation for the integral and a backward difference approximation for the
derivative.
u(k) = Kp e(k) + s(k) + (Kd/T)(e(k) - e(k - 1))
s(k) = s(k - 1) + Ki (T/2)(e(k) + e(k - 1))
We can follow the same procedure as with the PI and derive the velocity (incremen-
tal) representation as
u(k) = u(k - 1) + Kp (e(k) - e(k - 1)) + Ki (T/2)(e(k) + e(k - 1)) + (Kd/T)(e(k) - 2e(k - 1) + e(k - 2))
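A velocity-form PID can be written directly from this equation. Python sketch (our own function name and test values; the book's implementations are in Matlab):

```python
def velocity_pid(errors, Kp, Ki, Kd, T, u0=0.0):
    """Velocity (incremental) PID:
    u(k) = u(k-1) + Kp(e(k) - e(k-1)) + Ki(T/2)(e(k) + e(k-1))
                  + (Kd/T)(e(k) - 2e(k-1) + e(k-2))."""
    u, e1, e2 = u0, 0.0, 0.0       # u(k-1), e(k-1), e(k-2)
    out = []
    for e in errors:
        u += (Kp * (e - e1)
              + Ki * (T / 2.0) * (e + e1)
              + (Kd / T) * (e - 2 * e1 + e2))
        out.append(u)
        e2, e1 = e1, e
    return out
```

With Kd = 0 this reproduces the position-form PI output for the same inputs (only the bookkeeping differs), which makes a handy sanity check when implementing both forms.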
Sometimes the integrated error is only included in the term u(k - 1) and Ki only acts
on the current error level, in which case the algorithm simply becomes (common in
PLC modules):
u(k) = u(k - 1) + Kp (e(k) - e(k - 1)) + Ki (T/2) e(k) + (Kd/T)(e(k) - 2e(k - 1) + e(k - 2))
Finally, if so desired, we can express the PID algorithm using integral and derivative
times, Ti and Td, as used in earlier analog PID representations and Ziegler-Nichols
tuning methods:
u(k) = u(k - 1) + Kp [ (e(k) - e(k - 1)) + (T/(2Ti))(e(k) + e(k - 1)) + (Td/T)(e(k) - 2e(k - 1) + e(k - 2)) ]
Several modifications to the algorithms are possible which help them to be more
suitable under difficult conditions. First, it is quite easy to prevent integral windup by
limiting the value of s(k) to maximum positive and negative values using simple if
statements. Also, modifying the derivative approximation can help with noisy sig-
nals. The derivative approximation can be further improved by averaging the rate of
change of error over the previous four (or whatever number is desired) samples to
further smooth out noise problems. The disadvantage is that it does require addi-
tional storage values and introduces additional lag into the system. A similar effect is
accomplished by adding digital filters to the input signals.
Finally, it is now easy to implement I-PD (see Sec. 5.4.1) since the physical
system output is already sampled and can be used in place of the error difference
each sample period. It requires that we use the integral gain since it is the only term
that directly acts on the error. The new algorithm becomes
u(k) = u(k - 1) + Kp (c(k - 1) - c(k)) + Ki T (r(k) - c(k)) - (Kd/T)(c(k) - 2c(k - 1) + c(k - 2))
Digital Control System Design 369
Remember that many of the signs are reversed because e(k) = r(k) - c(k) and now
only c(k), the actual physical feedback of the system, is used for the proportional and
derivative terms. This helps with set-point-kick problems.
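The I-PD form is a small variation on the velocity algorithm. In this Python sketch (our own illustration; the setpoint and measurement sequences are hypothetical), note that a setpoint step no longer produces a proportional jump in u:

```python
def i_pd(setpoints, measurements, Kp, Ki, Kd, T, u0=0.0):
    """I-PD velocity algorithm: only the integral term sees r(k);
    P and D act on the measurement c(k) with reversed signs."""
    u, c1, c2 = u0, 0.0, 0.0       # u(k-1), c(k-1), c(k-2)
    out = []
    for r, c in zip(setpoints, measurements):
        u += (Kp * (c1 - c)
              + Ki * T * (r - c)
              - (Kd / T) * (c - 2 * c1 + c2))
        out.append(u)
        c2, c1 = c1, c
    return out

# unit setpoint step while the measurement is still zero: the output
# ramps by Ki*T each sample instead of jumping by Kp (no set-point kick)
print(i_pd([1.0, 1.0, 1.0], [0.0, 0.0, 0.0], Kp=2.0, Ki=1.0, Kd=0.0, T=0.1))
```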
Concluding this section on PID difference equations, we see that there are
many options since the difference equation is only an approximation to begin
with. Having the controller implemented as a difference equation does make it
easy for us to use many of our analog concepts (e.g., I-PD) and approximate deri-
vatives. Many manufacturers of digital controllers (PLCs, etc.) have slight proprie-
tary modifications designed to enhance the performance in those applications that
the components are designed for.
9.3.1.2 Conversion from s-Domain PID Controllers
Direct conversion from the s-domain into the z-domain is very easy (using a program
like Matlab) and quickly yields a set of difference equations to approximate any
controller represented in the s-domain. This is advantageous when significant design
effort and experience has already been gained with the analog equivalent. The
disadvantage is the lack of understanding and thus the corresponding loss of knowing
what modifications (in the digital domain) can be used to build in certain attributes.
It does allow us to account for the sample time (at least in a limited sense) since the
conversion methods use both the continuous system information and the sample time.
Using our design experience and knowledge of PID controllers in the contin-
uous domain to develop the digital approximations works very well if the sampling
rate is high enough. It is feasible under these conditions to design the control system
using conventional continuous system techniques and simply convert the resulting
controller to the z-domain to obtain the difference equations. This allows us to use
all our s-domain root locus and frequency techniques to design the actual controller.
As discussed earlier, if the sampling rate is greater than 20 times the system band-
width or natural frequency, the resulting digital controller will closely approximate
the continuous controller and the method works well; otherwise, it becomes bene-
ficial to use the direct design methods discussed below, allowing us to account for the
sample and hold effects while not being constrained to P, I, and D corrective
actions.
The simplest conversion is to simply use the transformation z = e^(sT) and map
the pole and zero locations from the continuous (s) domain into the discrete (z)
domain. This is commonly called pole-zero matching and will be demonstrated
when developing digital approximations of phase-lag and phase-lead controllers.
The transformation is also the starting point for the bilinear, or Tustin's, approx-
imation. If we solve the transformation for s, we get
s = (1/T) ln z
The bilinear transformation is the result when we perform a series expansion on s
and discard all of the higher-order terms. The term that is retained then becomes our
first-order approximation of the transform. It is applied as follows:
s = (2/T)(z - 1)/(z + 1)
It can be shown that it is very similar to the trapezoidal approximation used in the
preceding sections. The bilinear transformation process can get tedious when devel-
oping the difference equations by hand but is easily done using computer programs
like Matlab. The concept, however, is simple. To convert from the s-domain, we
simply substitute the transform in for each s that appears in our controller and
simplify the result until we obtain our controller in the z-domain.
EXAMPLE 9.1
Convert the PI controller, represented as designed in the s-domain, into the z-
domain using the bilinear transform. Write the corresponding difference equation
for the discrete PI approximation.
Gc(s) = 100 + 10/s
To find the equivalent discrete representation, substitute the bilinear transform in for
each s term:
Gc(z) = D(z) = 100 + 10 / [(2/T)(z - 1)/(z + 1)] = [(200 + 10T)z - (200 - 10T)] / [2(z - 1)]
Consistent with our earlier results, where sampled system transfer functions are
dependent on sample time, we see the same effect with our equivalent discrete con-
troller. To derive the difference equation let's assume T = 0.1 s, which results in a
discrete controller transfer function of
U(z)/E(z) = (201z - 199) / (2z - 2)
Multiplying the top and bottom by z^(-1) allows us to write our difference equation as
u(k) = u(k - 1) + (201/2) e(k) - (199/2) e(k - 1)
So we see that the bilinear transform can be used to develop approximate controller
algorithms represented as difference equations.
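The hand computation above is easy to script. This Python sketch (our own helper, not a library routine) evaluates the Tustin coefficients of Gc(s) = Kp + Ki/s for any sample time T:

```python
def pi_tustin(Kp, Ki, T):
    """Substitute s = (2/T)(z - 1)/(z + 1) into Kp + Ki/s and collect:
    U(z)/E(z) = (b0 z + b1)/(z - 1)."""
    b0 = Kp + Ki * T / 2.0
    b1 = -(Kp - Ki * T / 2.0)
    return b0, b1

b0, b1 = pi_tustin(100.0, 10.0, 0.1)
print(b0, b1)   # 100.5 -99.5, i.e. u(k) = u(k-1) + 100.5 e(k) - 99.5 e(k-1)
```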
EXAMPLE 9.2
Convert the PI controller, represented as designed in the s-domain, into the z-
domain using Matlab and the bilinear transform. Write the corresponding difference
equation for the discrete PI approximation if the sample time is 0.1 s.
Gc(s) = 100 + 10/s = (100s + 10)/s
In Matlab we can define the continuous system transfer function and convert to the
z-domain using the bilinear transform by executing the following commands:
>>num=[100 10];
>>den=[1 0];
>>sysc=tf(num,den)
>>sysz=c2d(sysc,0.1,'tustin')
This results in the following transfer function, identical to the one developed man-
ually in the preceding example:
U(z)/E(z) = (100.5z - 99.5) / (z - 1)
EXAMPLE 9.3
Given an analog phase-lead controller, use the pole-zero matching technique to
design the equivalent discrete controller. Express the final controller as a difference
equation that could be easily implemented in a digital microprocessor. Use a sample
time T = 0.5 s.
Gc(s) = 10 (s + 1)/(s + 5)
To find the equivalent difference equations we will use the identity z = e^(sT) and solve
for the equivalent z-plane locations. This will allow us to express the controller as a
transfer function in the z-domain and then to write the difference equations from the
transfer function.
Beginning with our original analog controller, we note that the zero location is
at s = -1 and the pole location is at s = -5. If our sample time for this example is
1/2 second, we can use the identity to find each discrete pole and zero location.
Zero location in z: z = e^(sT) = e^((-1)(0.5)) = 0.61
Pole location in z: z = e^(sT) = e^((-5)(0.5)) = 0.08
Now we can write the new controller transfer function as
Gc(z) = K (z - 0.61)/(z - 0.08)
We still must equate the steady-state gains using the two representations of the FVT.
This allows us to determine K for our discrete controller.
Gc(z)|z=1 = Gc(s)|s=0
K (z - 0.61)/(z - 0.08) |z=1 = 10 (s + 1)/(s + 5) |s=0
Solve for K:
K (1 - 0.61)/(1 - 0.08) = 0.424K = 10 (1/5) = 2, so K = 4.72
Finally, the equivalent phase-lead controller in z is
Gc(z) = U(z)/E(z) = 4.72 (z - 0.61)/(z - 0.08)
Cross-multiplying the controller transfer function and using z^(-1) as our delay shift
operator enables us to derive our difference equation.
U(z)(z - 0.08) = E(z) 4.72(z - 0.61)
U(z)(1 - 0.08z^(-1)) = E(z) 4.72(1 - 0.61z^(-1))
Our final difference equation that represents the discrete approximation of the
original phase-lead controller is given as
u(k) = 0.08 u(k - 1) + 4.72 e(k) - 2.88 e(k - 1)
Remember that this particular difference equation is developed with the assumption
that we will have a sample time equal to 1/2 s. As is true with all digital controllers
derived in this fashion, when we change the sample time we must also update our
discrete algorithm. Of course, as the sample time becomes too long the design fails
altogether.
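Pole-zero matching reduces to two mechanical steps: map each root through z = e^(sT), then scale to match the DC gain. A Python sketch (our own helper name, not from the text); note that carrying full precision gives K ≈ 4.67 here, while the 4.72 above comes from first rounding the mapped locations to 0.61 and 0.08:

```python
import math

def pole_zero_match(zeros_s, poles_s, dc_gain, T):
    """Map s-plane roots through z = e^(sT) and pick K so that
    Gc(z) at z = 1 equals the continuous DC gain Gc(0)."""
    zz = [math.exp(s * T) for s in zeros_s]
    pz = [math.exp(s * T) for s in poles_s]
    num_at_1 = math.prod(1.0 - z for z in zz)   # product of (1 - z_i)
    den_at_1 = math.prod(1.0 - p for p in pz)
    K = dc_gain * den_at_1 / num_at_1
    return zz, pz, K

# Gc(s) = 10(s + 1)/(s + 5): DC gain = 10*(1/5) = 2, sample time 0.5 s
zz, pz, K = pole_zero_match([-1.0], [-5.0], dc_gain=2.0, T=0.5)
print(zz[0], pz[0], K)
```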
EXAMPLE 9.4
For the system given in Figure 1, use Matlab to design a phase-lead controller using
continuous system techniques. Use pole-zero matching to convert the controller into
the z-domain and verify that the system is stable and usable. Generate a step
response using Matlab.
A damping ratio of 0.5
Sample time, T = 0.01 s
Since this problem is open loop marginally stable, we need to modify the root loci
and move them further to the left. Thus a phase-lead controller is the appropriate
choice. We start by adding 36 degrees (a conservative choice) by placing the zero at
s = -3 and the pole at s = -22. Using Matlab allows us to choose the point where
Figure 1 Example: system block diagram for phase lead digital controller design using pole-
zero matching.
the loci paths cross the radial line representing a damping ratio equal to 1/2 and find
the gain K at that intersection point. The Matlab commands used to define the
phase-lead controller and plant transfer function are
clear;
T=0.01;
numc=1*[1 3]; %Place compensator zero at -3
denc=[1 22]; %Place compensator pole at -22
nump=1; %Forward loop system numerator
denp=[1 0 0]; %Forward loop system denominator
sysc=tf(numc,denc); %Controller transfer function
sysp=tf(nump,denp); %System transfer function in forward loop
sysall=sysc*sysp %Overall compensated system in series
Now, execute the following commands and use Matlab to generate the root locus
plot and solve for K, giving us the root locus plot in Figure 2.
rlocus(sysp); %Generate original Root Locus Plot
hold;
rlocus(sysall); %Add new root loci to plot
sgrid(0.5,2); %place lines of constant damping
Kc=rlocfind(sysall)
As we see, this attracts our loci paths into the stable region and crosses the radial line
(ζ = 1/2) when K = 75. Therefore our continuous system phase-lead controller can
be defined as
Gc,PhaseLead = 75 (s + 3)/(s + 22)
To implement the controller digitally, we can use pole-zero matching to find
the equivalent controller in the z-domain. Beginning with the analog controller, we
note that the zero location is at s = -3 and the pole location is at s = -22. Using our
Figure 2 Example: Matlab root locus plot with continuous system phase-lead compensa-
tion.
sample time of 0.01 s and the transformation identity allows us to find each discrete
pole and zero location.
Zero location in z: z = e^(sT) = e^((-3)(0.01)) = 0.97
Pole location in z: z = e^(sT) = e^((-22)(0.01)) = 0.80
Now we can write the new controller transfer function as
Gc(z) = K (z - 0.97)/(z - 0.80)
We still must equate the steady-state gains using the two representations of the final
value theorem. This allows us to determine K for our discrete controller.
Gc(z)|z=1 = Gc(s)|s=0
K (z - 0.97)/(z - 0.80) |z=1 = 75 (s + 3)/(s + 22) |s=0 = 10.2
Solve for K:
K (1 - 0.97)/(1 - 0.80) = 0.15K = 10.2, K ≈ 70
Finally, the equivalent phase-lead controller in z is
Gc(z) = U(z)/E(z) = 70 (z - 0.97)/(z - 0.80)
To simulate the system using Matlab we need to define the continuous system
transfer function, convert it to the z-domain, add our phase-lead controller, and
generate the step response. The commands that can be used to achieve this are listed here.
numzc=70*[1 -0.970]; %Place zero at 0.97
denzc=[1 -0.8]; %Place pole at 0.8
To verify our design, let us use Matlab again and now generate the discrete
root locus plot to see how we have pulled our system loci paths into the unit circle
(stability region). We can use the commands given to convert our continuous system
into a discrete system with a sample time equal to 0.01 s and with a ZOH applied to
the input of the physical system. The result is our root locus plot given in Figure 3.
%Verify the discrete root locus plot
figure;
rlocus(syszc*c2d(sysp,T,'zoh')); zgrid;
Kz=rlocfind(syszc*c2d(sysp,T,'zoh'))
Since our discrete root locus plot does illustrate that our phase-lead controller
stabilizes the system, we expect that our step response will also exhibit the desired
Figure 3 Matlab root locus plot of equivalent phase-lead compensation (pole-zero match-
ing).
response. Executing the following Matlab commands closes the loop and generates
the corresponding step response plot, given in Figure 4.
%Close the loop and convert to z between samplers
cltfz=(syszc*c2d(sysp,T,'zoh'))/(1+syszc*c2d(sysp,T,'zoh'))
figure;
step(cltfz,5);
Our percent overshoot with the digital implementation of our continuous system
design is approximately 35%, larger than the expected value of 15%. Further tuning
Figure 4 Example: Matlab step response of phase-lead digital controller (pole-zero match-
ing).
could reduce this; such tuning is also easy to do in the z-domain as the next section
demonstrates.
It should be noted that the small sample time leaves both the pole and the zero
close to unity, which leads to a controller that is sensitive to changes in para-
meters. If the pole or zero location (or sample time) were to change, the controller
could become unstable. Fortunately, the digital controller allows us to consistently
execute the desired commands, and the coefficients do not change during operation.
So as we see in the example, it is quite simple to design a controller in the s-
domain and convert it to the z-domain using pole-zero matching techniques. The
same comments regarding sample time that were applied to converting other con-
trollers from continuous to digital still apply here.
with several special items relating to the z-plane. First, review the type of responses
relative to the location of system poles in the z-plane as given in Figures 10 and 11 of
Chapter 8. In general we will place poles in the right-hand plane (analogous to the s-
domain) and inside the unit circle. If sample times get long relative to the system
bandwidth, we will be forced to use the left-hand plane, an area without an s-domain
counterpart. The closer we get to the origin, the faster our response will settle. When
the poles are exactly on the origin, we have a special case called a deadbeat response,
which is a method presented in the next section.
The discrete root locus design process, when compared with analog root locus
methods, is nearly identical except for two items. First, let us examine the similarities
with the s-domain. When we designed our controller we used our knowledge of poles
and zeros and how they affected the shape of the loci paths to choose and design the
optimal controller. We follow the same procedure again and use our discrete con-
troller poles and zeros to attract the root loci inside the unit circle. The z-domain,
however, also accounts for the sample time, and if the sample time is changed we need
to redraw the plot.
The two points where we diverge from the continuous system methods are
related. Since we no longer have to physically build the controller algorithm (i.e.,
with OpAmps, resistors, capacitors, etc.), we can place the poles and zeros wherever
we wish. This allows us additional flexibility during the design process. The second
point, related to the first, is that even though we do not build the algorithm, we still
must be able to program it into a set of instructions that the microprocessor under-
stands; this is our constraint on direct design methods. A controller that can be
programmed and implemented is often said to be realizable. The net effect is that
we do have more flexibility when designing digital controllers, even when subject to
being realizable. The process of designing a control system in the z-domain and check-
ing whether or not it is realizable can best be demonstrated through several examples.
EXAMPLE 9.5
Consider designing a controller for the second-order marginally stable system:
G(s) = 1/s^2
The desired specifications are to have less than 17% overshoot and a settling time
< 15 sec. Using our second-order specifications, this relates to a damping ratio of
0.5 and natural frequency of 0.5 rad/sec. The first task is to convert the system to a
discrete transfer function. This is accomplished by taking the z transform of the
physical system with a ZOH.
G(z) = (z - 1)/z * Z{ 1/s^3 } = (z - 1)/z * T^2 z(z + 1) / (2(z - 1)^3)
Simplifying:
G(z) = (T^2/2)(z + 1)/(z - 1)^2
To illustrate direct design methods, let us choose a long sample time of 0.5 sec, or 2 Hz.
After substituting in the sample time we get the following discrete transfer function:
G(z) = 0.125 (z + 1)/(z - 1)^2
Using the rules presented to develop root locus plots in Table 1 from Chapter 8, the
open loop uncompensated locus paths can be plotted as shown in Figure 6.
Remember that the rules remain the same and thus we have two poles at z = 1
(marginally stable) and a zero at z = -1. This means we have one asymptote (the
negative real axis) and the only valid section of real axis lies to the left of the zero at
z = -1. The root locus plot is, as we would expect, unstable, since even the contin-
uous system is marginally stable. In fact, and as shown earlier, when we sample our
system we are no longer marginally stable but actually become unstable.
Using our knowledge of root locus we know that more than a proportional
controller is needed since the shape has to be changed to pull the loci into the stable
regions. In the same way that adding a zero adds stability in the s-plane, we can use
the same idea to attract the loci in the z-plane. Let us first try placing a zero at
z = 1/2 to simulate a PD controller.
After placing the controller zero at z = 1/2 and the new plot is constructed, we
get the compensated root locus plot shown in Figure 7. By adding the zero, we have
two zeros and two poles and thus no asymptotes. Additionally, we constrained the
only valid region on the real axis to fall within the stable unit circle region. At
ωn = 3π/(10T) = 1.8 rad/sec, the damping is approximately 0.7 and all the conditions
have been met. To determine the gain, we can use the magnitude condition as
shown in earlier sections or use Matlab. At this point let us represent the gain as
K and verify that the controller is realizable (i.e., can be programmed as a difference
algorithm). This is easily determined by developing the difference equations from the
controller transfer function:
Gc(z) = K(z - 1/2)

u(k - 1) = K[e(k) - (1/2)e(k - 1)]
Now we see that we have a problem implementing this particular controller since if
we shift one delay ahead to get u(k), the desired controller output, we would also
need to know e(k + 1), or the future error. This is a common problem occurring
when the denominator is of lesser order than the numerator (in terms of z). To
remedy this, let us go back and add another pole to the controller to increase the
order of the denominator by 1. This allows us to keep one of our valid real axis
root loci sections in the stable unit circle. After we add an additional pole at
z = -0.25 and move the zero to z = 0.9, we can redraw the root locus plot as shown in
Figure 8.
Figure 8 Example: discrete root locus of system compensated with a pole and zero.
Although adding the additional pole creates the situation where the system
does become unstable at high gains (the loci paths leave the unit circle and an
asymptote is now at 180 degrees), the compensator does attract all three paths
to the desired region if the proper gain is chosen. The original zero from the first
controller attempt was moved to 0.9 to pull the paths closer to the real axis. Now we
can use Matlab to find the gain that results in the desired response and controller
transfer function. The commands listed here define the original system, convert it
into the z-domain using a ZOH, and generate the compensated and uncompensated
root locus plots. Finally, Matlab is used to close the loop and generate the closed
loop step response, verifying our design.
%Program commands to design digital controller in z-domain
clear;
T=0.5;
sysp=tf(1,[1 0 0]);             %Physical system, G(s)=1/s^2
sysz=c2d(sysp,T,'zoh')          %Discrete equivalent with ZOH
syszc1=tf([1 -0.9],[1 0.25],T); %Compensator: zero at 0.9, pole at -0.25
rlocus(syszc1*sysz);            %Compensated root locus
K=rlocfind(syszc1*sysz)
The Matlab plot showing the uncompensated and compensated root locus plots is
given in Figure 9.
Using the rlocfind command returns a gain of 3.4 when we place our three
poles all near the real axis (ζ ≈ 1). This results in the final compensator transfer
function expressed as

Gc(z) = U(z)/E(z) = 3.4(z - 0.9)/(z + 0.25)
To implement our controller in a microprocessor we can cross-multiply and express
our transfer function as a difference equation:

u(k) = -0.25u(k - 1) + 3.4e(k) - 3.06e(k - 1)
This is realizable and easily implemented in a digital computer, as opposed to the
previous design that only placed one zero in the z-domain. For this difference equa-
tion the controller output, u(k), is only dependent on the current error, e(k), and the
previous controller output and error input. It is also possible to reduce our memory
requirements by designing a similar controller that added the system pole directly on
Figure 9 Example: Matlab discrete root locus plots of compensated and uncompensated
systems.
the origin, instead of at z = -0.25. This has the desired effect of still making our
controller realizable, and when we cross-multiply, the z added in the denominator only
acts as a shift operator on u(k) and does not create a need for storing u(k - 1). For
example, our transfer function would become

Gc(z) = U(z)/E(z) = K(z - 0.9)/z

u(k) = K e(k) - 0.9K e(k - 1)
With this formulation only one storage variable, e(k - 1), is needed. This does have
the effect of modifying our root locus to that shown in Figure 10. When we constrain
the pole to be located at the origin, it does limit our design options, but as shown we
are still able to stabilize the system while requiring one less storage variable.
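The two difference equations can be sketched as they would run on a microprocessor, one update per sample. This is a hedged Python illustration, not the book's code; the compensator values 3.4, 0.9, and the pole locations (-0.25 and the origin) follow this example, with the -0.25 sign being an assumption from the reconstructed equations:

```python
# Controller difference equations implemented as per-sample update functions.
# Assumed forms:
#   Gc(z) = 3.4(z - 0.9)/(z + 0.25) -> u(k) = -0.25u(k-1) + 3.4e(k) - 3.06e(k-1)
#   Gc(z) = 3.4(z - 0.9)/z          -> u(k) =  3.4e(k) - 3.06e(k-1)

def make_controller():
    """Pole at z = -0.25: stores both u(k-1) and e(k-1)."""
    state = {"u1": 0.0, "e1": 0.0}
    def update(e):
        u = -0.25 * state["u1"] + 3.4 * e - 3.06 * state["e1"]
        state["u1"], state["e1"] = u, e
        return u
    return update

def make_controller_origin_pole():
    """Pole at the origin: only e(k-1) must be stored."""
    state = {"e1": 0.0}
    def update(e):
        u = 3.4 * e - 3.06 * state["e1"]
        state["e1"] = e
        return u
    return update
```

Running either update once per sample is equivalent to filtering the error sequence through the corresponding transfer function; the second form trades one stored variable for the constrained pole location.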
To verify that we do achieve our desired response with the sample rate of 2 Hz,
we can use Matlab to generate the step response of the closed loop system. The step
response for the system compensated with a zero at z = 0.9 and a pole at z = -0.25
is given in Figure 11. As this example demonstrates, we can directly design a digital
controller in the z-domain using root locus techniques similar to those developed for
the s-domain; only the mapping is different, the rules for plotting are the same. The
digital controller stabilized the system, as shown in Figure 11, even though we added
a pole to the controller to make the algorithm realizable. Since the controller is
developed directly in the discrete domain and accounts for the additional lag created
by the sampling period, we can expect better agreement when implemented and
tested than when a continuous system controller is converted into the discrete
domain.
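The same closed-loop verification can be sketched outside Matlab. The following Python check is an illustration, not the book's code, and assumes the compensator Gc(z) = 3.4(z - 0.9)/(z + 0.25) and plant G(z) = 0.125(z + 1)/(z - 1)^2 from this example:

```python
# Close the loop around Gc(z)G(z) by polynomial algebra and simulate the
# unit step response. Because the plant has a double pole at z = 1, the
# closed loop should show zero steady-state error to a step.
import numpy as np
from scipy.signal import lfilter

def closed_loop_step(n=200):
    num_ol = np.convolve([3.4, -3.06], [0.125, 0.125])   # Gc*G numerator
    den_ol = np.convolve([1.0, 0.25], [1.0, -2.0, 1.0])  # Gc*G denominator
    num_cl = np.concatenate([np.zeros(len(den_ol) - len(num_ol)), num_ol])
    den_cl = den_ol + num_cl                             # 1 + Gc*G
    return lfilter(num_cl, den_cl, np.ones(n))           # unit step response

y = closed_loop_step()
```

With these assumed values the response settles to 1, mirroring the stabilized step response the text attributes to Figure 11.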
Figure 10 Example: discrete root locus of system compensated with a pole at the origin and
a zero.
EXAMPLE 9.6
For the system given in Figure 12, use Matlab to design a discrete controller using
discrete root locus techniques in the z-domain. Verify that the system meets the
requirements and generate a step response using Matlab.

Damping ratio: ζ = 0.7
Steady-state error: ess = 0
Settling time: ts = 2 sec
Sample time: T = 0.05 s
Figure 12 Example: system block diagram for discrete root locus controller design.
In the continuous domain the physical system is open loop stable with two poles
located at -2 ± 3.5j, giving a natural frequency equal to 4 rad/sec and a damping
ratio equal to 1/2. To meet the system design goals, we will need to make the system
type 1 (add an integrator), meet the system damping ratio, and have a system natural
frequency greater than approximately 3 rad/sec.
To begin the design process, we will use Matlab to add a ZOH and convert the
physical system into an equivalent sampled system. The uncompensated root locus
plot is drawn, and grid lines of constant damping ratio and natural frequency are
shown on the plot.
%Program commands to design digital controller in z-domain
clear;
T=0.05;
sysp=tf(8,[1 4 16]);   %Physical system in forward loop
sysz=c2d(sysp,T,'zoh') %Discrete equivalent with ZOH
zgrid; rlocus(sysz);   %Locus with constant damping/frequency grid lines
Using these commands (in combination with the previous segment of commands)
generates the plot given in Figure 13 and results in a controller gain equal to 0.2.
After adding the compensator we actually add additional instability (lag) into
the system as shown by the outer root locus plot. Choosing root locations near
z = 0.9 allows us to retain our original dynamics (which were close to meeting the
dynamic requirements) while significantly improving our steady-state error performance.
Figure 13 Example: Matlab discrete root locus plots of compensated and uncompensated
systems.
As with a PI controller in the analog realm, this controller, by placing a pole
at z = 1, tends to decrease the stability of the system. Finally, we will use the Matlab
commands listed to generate a sampled step response of the compensated and
uncompensated systems, shown in Figure 14.
%Close the loop and convert to z between samplers
cltf_c=feedback(0.2*syszc1*sysz,1)
cltf_uc=feedback(sysz,1)
From Figure 14 we see that the controller significantly improves our steady-
state error while coming close to meeting our settling time of 2 sec. To complete the
solution, let us develop the difference equation for our discrete controller. Since we
place one pole and one zero and have equal powers of z in the numerator and
denominator, it should be realizable. The controller transfer function is
Gc(z) = U(z)/E(z) = 0.2(z - 0.3)/(z - 1)

And the new difference equation becomes

u(k) = u(k - 1) + 0.2e(k) - 0.06e(k - 1)
Our controller as designed is realizable and can easily be programmed as a sampled
input-output algorithm. Recognize that had we started with an analog PI controller
we would have achieved a similar difference equation, at least in form.
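The PI-like design can be checked numerically. The following Python sketch (an illustration, not from the text) assumes the compensator Gc(z) = 0.2(z - 0.3)/(z - 1) and the plant G(s) = 8/(s^2 + 4s + 16) sampled at T = 0.05 s with a ZOH, and confirms that the integrator drives the steady-state step error to zero:

```python
# Discretize the plant, close the loop around Gc(z)G(z), and simulate a
# unit step; the pole at z = 1 in Gc(z) makes the system type 1, so the
# output should settle to exactly 1.
import numpy as np
from scipy.signal import cont2discrete, lfilter

T = 0.05
numg, deng, _ = cont2discrete(([8.0], [1.0, 4.0, 16.0]), T, method='zoh')
numg = np.squeeze(numg)                   # plant numerator coefficients

num_ol = np.convolve([0.2, -0.06], numg)  # Gc*G numerator
den_ol = np.convolve([1.0, -1.0], deng)   # Gc*G denominator (integrator)
num_cl = np.concatenate([np.zeros(len(den_ol) - len(num_ol)), num_ol])
den_cl = den_ol + num_cl                  # 1 + Gc*G
y = lfilter(num_cl, den_cl, np.ones(400)) # 20 s of step response
```

The simulated output converging to 1 mirrors the improved steady-state behavior the text attributes to Figure 14.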
In conclusion, we see how the root locus techniques developed in earlier analog
design chapters allow us to also design digital controllers represented in the z-
domain. The concept of using the time domain performance requirements (i.e.,
peak overshoot and settling time) can also be applied in the z-domain, where we
pick controller pole and zero locations to achieve desired damping ratios and natural
frequencies. The primary difference we noticed is that in addition to the analog root
locus observations of how different gains affect the root locus paths, changing the
sample period modifies the pole and zero locations and thus changes the shape of the
discrete root locus plot. Picking the desired pole locations in the z-domain also
becomes more difficult since the lines of constant damping ratios and natural
frequencies are nonlinear.
1 The physical system must be able to achieve the desired effect within the span of
one sample for true deadbeat response.
2 The method relies on pole-zero cancellation and is thus very dependent on the
accuracy of the model used.
3 Good response characteristics do not guarantee good disturbance rejection
characteristics.
4 The algorithm that results must be programmable (realizable). For general use it
must not require knowledge of future variables.
5 Approximate deadbeat response can be achieved by allowing the response to reach
the command over multiple samples; the intermediate values can also be defined.
we can design for a deadbeat response to occur in a set number of sample periods,
thus providing more time for the system to respond. So for a deadbeat controller to
be feasible, it must be physically able to follow the desired command profile. This is
best accomplished by designing the physical components to satisfy the desired profile
or by limiting the input sequences to trajectories that are feasible for the existing
physical system.
2. The direct design method depends on pole-zero cancellation and is thus
highly dependent on model accuracy, especially if the original system poles were
unstable. For example, if the deadbeat controller cancelled a pole at z = 2
in the z-plane by placing a zero there and the physical system changed or was
improperly modeled, the unstable pole is no longer cancelled and will cause
problems.
3. The direct design controller is targeted at producing the desired response,
and no guarantee is made with respect to disturbance rejection properties. These
should be simulated to verify satisfactory rejection of disturbances. It is possible to
achieve good characteristics when responding to a command input and yet have very
poor rejection of disturbances.
4. The controller resulting from direct response design methods must be
computationally realizable, that is, able to be implemented using a digital computer.
It is obvious from the example in the previous section that the lowest power of z^-1 in
the denominator must be less than or equal to the lowest power of z^-1 in the
numerator. Thus, the desired response T(z) must be chosen where its lowest
power of z^-1 is equal to or greater than the lowest power of G(z). It is recommended
to add powers of (1 - z^-1) to the numerator if it is of lower order than the denomi-
nator; add the number of powers required to make the orders equal.
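Rule 4 can be formalized as a small helper. This is an illustrative sketch, not from the text: with numerator and denominator written as ascending powers of z^-1, a controller is realizable when the denominator's lowest present power of z^-1 does not exceed the numerator's.

```python
# Realizability check for a controller written in powers of z^-1,
# with coefficient lists ordered [z^0, z^-1, z^-2, ...].

def lowest_power(coeffs):
    """Index of the first nonzero coefficient = lowest power of z^-1 present."""
    for i, c in enumerate(coeffs):
        if c != 0:
            return i
    raise ValueError("all coefficients are zero")

def is_realizable(num, den):
    """True if u(k) never depends on a future error sample."""
    return lowest_power(den) <= lowest_power(num)
```

For instance, the deadbeat D(z) derived in Example 9.7, (8z^-1 - 16z^-2 + 8z^-3)/(1 + z^-1 - z^-2 - z^-3), passes the check, while the earlier single-zero attempt K(z - 1/2) = (K - (K/2)z^-1)/z^-1 fails it.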
5. Finally, it is common to develop large overshoots and/or oscillations
between samples with deadbeat controllers since they require large control actions
(high gains) to achieve the response, and exact cancellation of poles and zeros seldom
occurs. Modifying the control algorithm to accept slower responses will alleviate
these tendencies. One option is to use something like a Kalman controller. A
Kalman controller chooses the output to be a series of steps toward the correct
solution, each step defining the desired output level at the corresponding sample.
If we wanted to reach a final value of 1, then we would define the intermediate values
such that each successive increase in output is the next coefficient and where all the
coefficients, when added, are equal to 1. Thus, if we wanted zero error in five samples
(four sample periods) from a unit step input, instead of during one sample period, we
might use the output series:

C(z) = (0.4z^-1 + 0.3z^-2 + 0.2z^-3 + 0.1z^-4) R(z)
Assuming that our system components are capable of the desired response and that
our model is accurate, we would have the following output value at each sample:

Sample 1: y(1) = 0
Sample 2: y(2) = 0.4
Sample 3: y(3) = 0.7 (add the previous to the new, 0.4 + 0.3)
Sample 4: y(4) = 0.9 (add the previous to the new, 0.7 + 0.2)
Sample 5: y(5) = 1.0 (add the previous to the new, 0.9 + 0.1)
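The staircase of sample values is just the running sum of the chosen step coefficients, which must total 1. A small Python illustration (values taken from the five-sample example above):

```python
# The intermediate output levels are the cumulative sum of the step
# coefficients: 0.4 + 0.3 + 0.2 + 0.1 = 1, the final value.
import numpy as np

increments = [0.0, 0.4, 0.3, 0.2, 0.1]  # output added at each sample
levels = np.cumsum(increments)          # output level held at each sample
```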
As the example demonstrates, we can use this same method to pick the shape of our
response for any system, subject of course to the physics of our system. This concept
is further illustrated in a later example problem.
In broader terms than described for deadbeat response, the advantage of direct
response design is that we can choose any feasible response type for our system. For
example, we can choose a controller that will cause our system to respond as if it were a
first-order system with time constant, τ, as long as our system is physically capable of
responding in such a way and we have accurate models of our system. We can choose
virtually any response that meets the following three criteria. One, the response must
be feasible. Two, the resulting controller algorithm must be realizable. And three, an
accurate model of the system must be available. Several examples further illustrate
the concept of direct design methods for digital controllers.
EXAMPLE 9.7
Design a deadbeat controller whose goal is to achieve the desired command in two
sample periods. The sample frequency is 2 Hz and the physical system is given as

G(s) = 1/s^2
To design the system we first need to convert the continuous system into an equiva-
lent discrete representation. The system transfer function after including the ZOH is

G(z) = ((z - 1)/z) Z{1/s^3} = ((z - 1)/z) (T^2 z(z + 1))/(2(z - 1)^3) = (T^2 (z + 1))/(2(z - 1)^2)
Substituting in the sampling rate of 2 Hz (T = 0.5):

G(z) = 0.125 (z + 1)/(z - 1)^2
Now we can close the loop and derive the closed loop transfer function, which can
then be set equal to our desired response of T(z) = z^-2. This is the same as expressing
our input and output as c(k) = r(k - 2), or that our output should be equal to the
input after two sample periods. D(z), our controller, becomes our only unknown in
the expression

T(z) = C(z)/R(z) = [D(z) (z + 1)/(8(z - 1)^2)] / [1 + D(z) (z + 1)/(8(z - 1)^2)] = z^-2
This expression can now be solved for D(z):

D(z) = U(z)/E(z) = (8z^2 - 16z + 8)/(z^3 + z^2 - z - 1)

D(z) = U(z)/E(z) = (8z^-1 - 16z^-2 + 8z^-3)/(1 + z^-1 - z^-2 - z^-3)
D(z) can easily be converted to difference equations for implementation. It is com-
putationally realizable since the lowest power of z^-1 (the constant term) occurs in the denominator.
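The algebra can be verified numerically. The following Python sketch (an illustration, not the book's code) forms DG/(1 + DG) by polynomial multiplication and checks that it equals the desired z^-2 at several test points:

```python
# Verify that the deadbeat D(z) yields the desired closed loop T(z) = z^-2.
# Coefficients are in descending powers of z.
import numpy as np

num_d, den_d = [8.0, -16.0, 8.0], [1.0, 1.0, -1.0, -1.0]  # D(z)
num_g, den_g = [0.125, 0.125], [1.0, -2.0, 1.0]           # G(z) = 0.125(z+1)/(z-1)^2

num_ol = np.convolve(num_d, num_g)                        # D*G numerator
den_ol = np.convolve(den_d, den_g)                        # D*G denominator
num_cl = np.concatenate([np.zeros(len(den_ol) - len(num_ol)), num_ol])
den_cl = den_ol + num_cl                                  # 1 + D*G

for z in (2.0, 3.0, -1.5):
    t = np.polyval(num_cl, z) / np.polyval(den_cl, z)
    assert abs(t - z**-2) < 1e-12                         # closed loop is z^-2
```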
EXAMPLE 9.8
Use Matlab to verify the deadbeat controller designed in Example 9.7. When the
controller and control loop are added, we can represent the system with the block
diagram in Figure 15. In Matlab we will define the physical system, apply the ZOH,
and convert it into the z-domain where it can be combined with the controller D(z)
and the loop closed. Both a step and ramp response can be calculated and plotted.
%Program commands to direct design digital controller in z-domain
clear;
T=0.5;
nump=1; %Forward loop system numerator
denp=[1 0 0]; %Forward loop system denominator
sysp=tf(nump,denp); %System transfer function in forward loop
sysz=c2d(sysp,T,'zoh')
numzc=[8 -16 8]; %Numerator of controller
denzc=[1 1 -1 -1]; %Denominator of controller
syszc1=tf(numzc,denzc,T);
%Close the loop and convert to z between samplers
cltfz=feedback(syszc1*sysz,1)
%Verify the discrete step and ramp response plots
step(cltfz,4);
figure;
lsim(cltfz,[0:0.5:4],[0:0.5:4]); %Ramp input response
After these commands are executed, we get the step and ramp responses shown in
Figures 16 and 17, respectively. It is very important to remember that these plots
only tell us that the system is at the desired location during the actual sample time,
two samples after the command has been given. The system may be oscillating in
between the sample periods. Additionally, it is necessary to calculate the power
requirements for this system to achieve the responses in Figures 16 and 17. The
power requirements increase as we make our desired response faster. It is primarily
the amplifiers and actuators acting on the physical system that become cost prohi-
bitive under unrealistic performance requirements.
EXAMPLE 9.9
Use Matlab to design a controller for the system given in Figure 18. The response
should approximate a first-order system and take four sample periods to reach the
command. The sample frequency for the controller is 5 Hz, T = 0.2 sec. Since we
Figure 15 Example: system block diagram for discrete direct response controller design.
Figure 18 Example: system block diagram for discrete direct response controller design.
want the system to approximate a first-order system response, we can define our
desired response to be based on the standard first-order unit step response values:

C(z) = (0 + 0.63z^-1 + 0.235z^-2 + 0.085z^-3 + 0.05z^-4) R(z)
Assuming that our system components are capable of the desired response and that
our model is accurate, this should result in the following output value at each
sample:

Sample 1: c(1) = 0
Sample 2: c(2) = 0.63
Sample 3: c(3) = 0.865 (add the previous to the new, 0.63 + 0.235)
Sample 4: c(4) = 0.95 (add the previous to the new, 0.865 + 0.085)
Sample 5: c(5) = 1.0 (add the previous to the new, 0.95 + 0.05)
We should recognize this as the normalized first-order system response to a unit
step input, with a system time constant equal to one sample period. To derive the
transfer function representation we can solve for C/R and multiply the numerator
and denominator by z^4:

T(z) = C(z)/R(z) = (0.63z^3 + 0.235z^2 + 0.085z + 0.05)/z^4
To facilitate the use of the computer we can solve directly for our controller D(z) in
terms of our system transfer function G(z) and our desired response transfer function
T(z). The desired response transfer function is set equal to the closed loop transfer
function, as defined earlier:

T(z) = C(z)/R(z) = D(z)G(z)/(1 + D(z)G(z))

For this example with unity feedback we can now solve directly for D(z):

D(z) = T(z)/(G(z)(1 - T(z)))
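This formula is pure polynomial algebra, so it can be sketched directly. The following Python illustration (not the book's code) discretizes G(s) = 8/(s^2 + 4s + 16) at T = 0.2 s with scipy's ZOH conversion standing in for Matlab's c2d, then forms D(z) = T(z)/(G(z)(1 - T(z))):

```python
# Compute D(z) for Example 9.9 by polynomial algebra:
#   D = num_T * den_G / ((den_T - num_T) * num_G)
import numpy as np
from scipy.signal import cont2discrete

T = 0.2
numg, deng, _ = cont2discrete(([8.0], [1.0, 4.0, 16.0]), T, method='zoh')
numg = np.squeeze(numg)[-2:]        # keep [b1, b2] of the degree-1 numerator

num_t = [0.63, 0.235, 0.085, 0.05]  # T(z) numerator (descending powers of z)
den_t = [1.0, 0.0, 0.0, 0.0, 0.0]   # T(z) denominator, z^4

num_d = np.convolve(num_t, deng)
den_d = np.convolve(np.subtract(den_t, [0.0] + num_t), numg)
```

With these assumptions the resulting fifth-order numerator and denominator agree, to rounding, with the D(z) coefficients the text reports from Matlab.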
This allows us to now use Matlab to convert our analog system into its discrete
equivalent, define our desired response T(z), and subsequently to solve for our
controller D(z). We can verify our design by closing the loop and plotting the unit
step input response of the system. The Matlab commands used are given as
%Program commands to design digital controller in z-domain
clear;
T=0.2;
nump=8; %Forward loop system numerator
denp=[1 4 16]; %Forward loop system denominator
sysp=tf(nump,denp); %System transfer function in forward loop
sysz=c2d(sysp,T,'zoh')
sysTz=tf([0 0.63 0.235 0.085 0.05],[1 0 0 0 0],T)
Dz=sysTz/((1-sysTz)*sysz)
sysclz=feedback(Dz*sysz,1);
step(sysclz,2)
After defining the sample time and continuous system transfer function we use
Matlab to calculate the discrete equivalent, with the ZOH, which is returned as

sysz = G(z) = (0.1185z + 0.09037)/(z^2 - 1.032z + 0.4493)
The controller transfer function, D(z), is then calculated and expressed as

D(z) = U(z)/E(z) =
(0.63z^5 - 0.4149z^4 + 0.1257z^3 + 0.06791z^2 - 0.01338z + 0.02247) /
(0.1185z^5 + 0.0157z^4 - 0.08478z^3 - 0.03131z^2 - 0.01361z - 0.004518)
This is realizable since no future knowledge of our system is required, and we can
express our controller as a difference equation that can be implemented digitally.
Finally, the loop can be closed, now including our controller D(z), and the unit step
response of the system plotted as shown in Figure 19.
It is easy to see that we have reached our desired output values at the corre-
sponding sample times and that the response of the system approximates the
response of a first-order system with a time constant of one sample period.
Remember that since this is a simulation, the model used to develop the controller
and the model used when simulating the response are identical, and therefore the
results behave exactly as designed. In reality it is difficult to accurately model the
complete system, especially with the linear models the z-domain constrains us to,
and our results need to be evaluated in the presence of modeling errors and distur-
bances on the system.
As this chapter demonstrates, many of the same design techniques (conversion
and root locus) that were developed for analog systems can be used for digital
systems. The primary difference is the addition of the sample effects and how they
modify the response and stability characteristics of our system. If the sample
frequency is fast relative to our physical system, we can even use controllers designed
in the s-domain with good results. As our sample time becomes an issue, however,
designing directly in the z-domain is advantageous since we account for the sample
time at the onset of the design process. Finally, and for this there is no analog
counterpart, we can choose the response that we wish to have and solve for the
controller that gives us that response. Although a straightforward process analyti-
cally, this method relies on our understanding the physics of our system to the extent
that we can choose realistic goals and design specifications. Unrealistic specifications
may lead to unsatisfactory performance and/or high implementation costs.
Finally, with all controllers designed here we should also verify our disturbance
rejection properties. A good response due to a command input does not guarantee
good rejection of a disturbance input. Since the disturbance usually acts directly on
the physical system, it is difficult to analytically model unless we predefine the dis-
turbance input sequence to enable us to perform the z transform. Tools like
Simulink, the graphical block diagram interface of Matlab, become useful at this
point since we can include continuous, discrete, and nonlinear blocks in one block
diagram. This is demonstrated in Chapter 12 when the nonlinearities of different
electrohydraulic components are included in our models.
9.6 PROBLEMS
9.1 What should the sample time be when converting existing analog controllers
into discrete equivalents and expecting to achieve similar performance?
9.2 Ziegler-Nichols tuning methods are applicable to discrete PID controllers.
(T or F)
9.3 What is the goal of bumpless transfer?
9.4 What are the advantages of expressing our difference equations as velocity
algorithms?
9.5 Write the velocity algorithm form of a difference equation for the PI-D
controller. Let u be the controller output and e the controller input (error).
9.6 For the following phase lag controller in s, approximate the same controller in
z using the pole-zero matching method. Assume a sample time of 1/2 sec.

Gc(s) = (0.25s + 1)/(s + 1)

Gc(s) = (s + 1)/(0.1s + 1)
9.9 For the phase-lead controller in Figure 21 and using a sample time, T = 0.5 s,
a. Find the equivalent discrete controller using pole-zero matching.
b. Find the equivalent discrete controller using Tustin's approximation.
c. Use Matlab to plot the step response of each discrete controller and com-
ment on the similarities and differences.
9.10 For the system in Figure 22, tune the phase-lag digital controller directly in the
z-domain using root locus techniques. Draw and label the root locus plot and solve
for the gain K that results in repeated real roots (the point where the loci leave the
real axis). The sample time is T = 2 sec.
9.11 For the system in Figure 23, develop the z-domain root locus when the sample
time is T = 0.5 sec.
a. Draw and label the root locus plot.
b. Describe the range of response characteristics that can be achieved by
varying the proportional gain.
9.12 For the plant model transfer function, design a unity feedback control system
using a PI controller (K = 2, Ti = 1). Draw the block diagrams for both the con-
tinuous system (see Problem 5.14) and the equivalent discrete implementation. Use
9.14 With the third-order plant model and unity feedback control loop in Figure 25,
and using a sample time T = 0.8 sec, use Matlab to
a. Design a discrete controller that exhibits no overshoot and the fastest
possible response when subjected to a unit step input.
b. Write the difference equation for the controller. Be sure that it is realizable.
c. Verify the root locus and step response plots (compensated and uncompen-
sated, i.e., D(z) = 1) using Matlab.
9.15 For the system shown in the block diagram in Figure 26, design a discrete
compensator that
a. Places the closed loop poles at approximately z = 0.25 ± 0.25j using a
sample time of T = 0.04 sec. Design the simplest discrete controller possible
and derive both the required gain and compensator pole and/or zero loca-
tions.
b. Leaving the controller values the same, change the sample time to T = 1 sec
and again plot the discrete root locus. Comment on the differences.
c. Verify both designs with a unit step input response plot using Matlab.
Comment on how well this correlates with parts a and b.
9.16 For the system shown in the block diagram in Figure 27, design a discrete
compensator that achieves a deadbeat response in one sample. The sample period is
T = 0.2 s.
9.17 For the system shown in the block diagram in Figure 27, design a discrete
compensator that achieves a deadbeat response over two sample periods. The sample
period is T = 0.1 s.
9.18 For the system shown in the block diagram in Figure 28, design a discrete
compensator that achieves an approximate ramp response to the desired value over
four sample periods. The sample period is T = 0.4 s.
9.19 For the system shown in the block diagram in Figure 28, design a discrete
compensator that achieves deadbeat response over one sample period. The sample
period is T = 0.6 s.
a. Use Matlab to solve for the controller transfer function.
b. Verify the unit step response of the closed loop feedback system.
c. Build the block diagram in Simulink, connect a scope to the output of
the controller block, and comment on whether there are any inter-sample
oscillations.
9.20 For the system shown in the block diagram in Figure 29, design a discrete
compensator that achieves deadbeat response over three sample periods. On the
second sample the system should be 65% of the way to the desired value and
100% of the desired value on the third sample. The sample period is T = 1 s.
a. Use Matlab to solve for the controller transfer function.
b. Verify the unit step response of the closed loop feedback system.
c. Build the block diagram in Simulink, connect a scope to the output of
the controller block, and comment on whether there are any inter-sample
oscillations.
10.1 OBJECTIVES
Examine the different interfaces between analog and digital signals.
Learn the common methods of implementing digital controllers.
Examine strengths and weaknesses of different implementation methods.
Present basic programming concepts for microprocessors and PLCs.
Identify the components available when digital signals are used.
10.2 INTRODUCTION
With the rate at which computers are developing, it is with a measure of caution that
this chapter is included. The goal is not to predict the future of digital controllers,
benchmark past progress, or provide a comprehensive guide to the available hard-
ware and software. Rather, the goal is to provide an overview of the types of
digital controllers in use, how they typically are used, and how the designs from the
previous chapter are commonly implemented and made useful. The
basic components have somewhat stabilized for the present, with the noticeable devel-
opments occurring in terms of speed, processing power, and communication stan-
dards. It is safe to say that the influence of digital controllers will continue to
increase.
In this chapter, the three broad categories of digital controllers are presented as
computer based, microcontrollers, and programmable logic controllers (PLCs). In
general, each type relies ultimately on the common microprocessor. The strengths
and weaknesses of each type are examined from a hardware and software perspec-
tive. The common digital transducers and actuators used in implementing digital
controllers are also presented. Finally, the problem of interfacing low level digital
signals to real actuators is examined. An example of this is the common method
using pulse width modulation (PWM).
10.3 COMPUTERS
Computers have become commonplace when implementing digital controllers. In
many situations the computer is the cheapest and most flexible option, and yet computers
have struggled in industrial settings. While reliability is the primary reason, it is not
the reliability of the hardware half as much as it is of the software. Current operating
systems are just not robust enough to run continuously in a control environment and
provide the level of safety required. To be fair, they are designed to run all tasks
satisfactorily, not designed to run one task reliably. In industrial environments, PLCs
are still dominant and are discussed in more detail in a subsequent section.
With the continuing increase in performance in the midst of decreasing prices,
it is envisioned that more and more PC-based applications will be developed. The PC
has many advantages once the reliability issues are resolved. It is extremely easy to
upgrade and is flexible in adding capabilities as time progresses. As compared to
microcontrollers and PLCs, there are many straightforward programming packages,
some with a purely graphical interface (more computer overhead). Microcontrollers
run best when programmed with assembly language, and PLCs generally have an
application where the program is written using ladder logic and downloaded to the
chip. Computers, on the other hand, allow many different computer languages,
compilers, programs, etc., to interface with the outside data input/output (IO)
ports. Since the PC processor is performing all the control tasks in real time, slider
bars, pop-up windows, etc., can be used to tune the controller on the fly and imme-
diately see the effects. In addition, with large hard drives, cheap memory
upgrades, etc., the PC can collect data on the fly and store it for long periods of
time. Hundreds of different control algorithms could be stored for testing or even
switching between during operation. Microcontrollers and PLCs have limitations on
the number of instructions, byte lengths, and words, and often use integer arithmetic,
requiring good programming skills. It should be clear that at this point in time, the
PC-based controller is a great choice for developing processes, research, and testing
but is not as suited for continuous industrial control or where the volumes are high,
as in OEM (original equipment manufactured) controllers. What we are seeing, and
will continue to see, is the line between the two systems becoming more blurred.
New bus architectures and interface cards are allowing computer processors to act as
programmable controllers in industrial settings. Now we will examine some of the
required components to allow us to use our PC as control headquarters. Figure 1
illustrates the common components used in PC-based control.
It is obvious at this point that for our computer to control something, it must
be capable of inputs and outputs that can interface with physical components. This
is really the heart of the matter. The computer (any processor) is designed to work
with low level logic signals using essentially zero current. Physical components, on
the other hand, operate best when supplied with ample power. The goal then
becomes developing interfaces between low level processor signals and the high
power levels required to effect changes in the physical system. Various computer
interfaces can be purchased and used, each with different strengths and weaknesses.
An additional problem is the quantization of analog signals describing physical
phenomena (i.e., the temperature outside is not fixed at 30 or 50 degrees but
may infinitely vary between those two points and beyond). Converters, examined
below, are used to convert between these two signal types, but with limited resolu-
tion. Finally, both isolation (computer chips are not fond of high voltages and
currents) and power amplification devices are required to actually get the computer
to control a physical system. These last items are the same concerns all microcon-
trollers and PLCs share, and they are overcome using similar methods and components.
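The limited resolution of a converter can be made concrete with a small sketch. This Python illustration is not from the text; the bit count and voltage span are hypothetical example values:

```python
# Illustration of A/D converter resolution: an n-bit converter spanning
# v_min..v_max can only distinguish steps of (v_max - v_min)/2^n volts.
def adc_counts(v, bits=10, v_min=0.0, v_max=5.0):
    """Quantize a voltage to the nearest integer converter count."""
    lsb = (v_max - v_min) / (2 ** bits)         # volts per count
    counts = int(round((v - v_min) / lsb))
    return max(0, min(2 ** bits - 1, counts))   # clamp to converter range

# A 10-bit, 0-5 V converter resolves about 4.9 mV per count, so a
# continuously varying signal is reported only in these discrete steps.
```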
Association (PCMCIA) or Universal Serial Bus (USB) ports must be used. Much
greater data throughputs are possible, as shown in Table 1.
The PCMCIA standards have various levels and not all are as high as
100 megabits/sec. In addition, the newer cards are not compatible with the older
slots, and these cards are sometimes difficult to set up. USB devices are now common
and are capable of plug-and-play operation, sometimes supplying enough power to
run connected devices, supporting daisy chains of up to 127 components, and working
with all computers incorporating such a port (i.e., IBM compatible, Apple, Unix, etc.).
Desktop systems (within the PC category) are usually the best value if port-
ability and cutting edge performance (DSP) are not required. Boards using the ISA
(original Industry Standard Architecture) are common as seen by the many compa-
nies producing such boards. Prices range from several hundred dollars to over
several thousand dollars, depending on the features required. Peripheral component
interconnect (PCI), the latest mainstream bus architecture and faster and friendlier
than industry standard architecture (ISA), is also becoming popular with interface
card manufacturers. PCI is equivalent to having DMA (direct memory access, allows
the board to seize control of the PC's data bus and transfer data directly into
memory) with an ISA card. Both bus systems provide more than enough data
throughput to keep up with the converters on the data acquisition card. It is also
possible to install multiple boards in one computer to add capabilities.
Systems have been constructed with hundreds of inputs and outputs.
If extremely high speeds (MHz sampling rates) along with many channels are
required, then dedicated digital signal processors are needed. They generally consist
of their own dedicated high-speed processor (or multiple ones) and conversion chips
and receive only supervisory commands from the host PC. Costs are generally much
greater than for typical PC systems installed with interface boards and hence they are limited to
special applications.
10.3.1.2 Data Acquisition Boards
In this section we define the characteristics common to the hardware data acquisition
boards used in the various systems listed in the previous section. Data acquisition
boards are very common and used extensively in interfacing the PC with analog and
digital signals. There are many specialty boards with different inputs and outputs
Table 1  Typical Data Throughput Rates (megabits/sec)

Serial               0.01
Parallel             0.115
USB                  1.5
SCSI-1 and 2         5–10
(Wide) Ultra SCSI    20–40
IEEE 1394 FireWire   12.5–50
PCMCIA               up to 100
that, although similar, are not discussed here. Table 2 lists the common features to
compare when deciding on which board to use.
The common bus architectures have already been examined in Table 1. In
general, speed and resolution are usually proportional to cost (when other features
are similar). The speed is generally listed as samples/second and if the board is
multiplexed, the speed must be divided by the number of channels being sampled
to obtain the per channel sample rate. We will currently hold off on resolution and
software and discuss them in more detail in subsequent paragraphs.
When dealing with any inputs or outputs, the voltage (or current) ranges must
be compatible with the system you are trying to control, namely the transducers and
actuator voltage levels. Many PC-based data acquisition boards are either software
or hardware selectable and therefore flexible in choosing a signal range that will
work. However, many boards select one range that will then apply to all channels.
The discussion on resolution will illustrate some potential problems with this.
Common ranges are 0–1 V, 0–5 V, 0–10 V, ±5 V, ±10 V, and 4–20 mA current
signals.
The numbers of inputs and outputs vary, with a larger number of digital IO
ports generally available. Common boards include up to 16 single-ended analog
inputs and thus 8 double-ended inputs. With double-ended inputs a separate ground
is required for each channel since the board only references the differential voltage
between two channels. This has many advantages when operating in noisy environ-
ments but usually requires two converters for one signal. Single-ended inputs have
one common ground and each AD (analog-to-digital) converter channel references
this ground.
Some boards will include counter channels that can be configured to count
pulses or read a pulse train as a frequency. In addition, various output capabilities
are found besides analog voltages. Four-to-20-mA outputs are becoming more
common, and PWM outputs and stepper motor drivers can also be found. It is generally
cheaper with PWM or stepper motor outputs since digital ports are used and a
DA (digital-to-analog) converter circuit is not required. Finally, extras to consider
are current ratings, warranties, linearity, and protection ratings. For most controller
applications, the linearity is not a major issue, being much better than that of the
components connected to it. Current ratings, in general, are small, and we assume that we
will always have to provide some amplification before a signal can be used. The over-voltage
protection ratings are important if you do not include isolation circuits and expect to
operate in a noisy environment. Input and output impedance is also available from
many board manufacturers.
V_quantization = (V_max − V_min) / (2^n − 1)
The significance of the quantization error can be shown through a simple example.
EXAMPLE 10.1
Determine the quantization voltage (possible error) when a voltage signal is
converted into an 8-bit digital signal and the range of the AD converter is 0–10 V
and ±10 V.
Using our system with an 8-bit converter for a range of 0–10 V, our signal is
represented by 256 individual levels (00000000 through 11111111 in binary). We can
calculate the quantization voltage as

V_quantization = (V_max − V_min) / (2^n − 1) = (10 V − 0 V) / (2^8 − 1) = 39 mV

Thus the smallest increment is 39 mV. If we configure the computer to accept ±10 V
signals, we now have twice the voltage range to be represented by the same number
of discrete values:

V_quantization = (V_max − V_min) / (2^n − 1) = (10 V − (−10 V)) / (2^8 − 1) = 78 mV
Now our voltage level must increase or decrease by almost 0.1 V before we see the
binary representation change.
The practical side is this: When we design systems it is best to choose all signals
to use the full range of the AD converter to maximize resolution, unless, of course,
each channel on the board can be configured separately. If some sensors already
have ±10 V outputs, then choosing one with a 0 to 1 V output severely limits the
resolution, since we are only using 1/20th of a range that is already limited to a
finite number of discrete values. Obviously, as resolution is increased this becomes
less of a concern, but nonetheless good design practices should be followed.
10.3.1.3 Software
Finally, one of the major considerations when using data acquisition boards is soft-
ware support. If an investment has already been made into one specic software
package, then the decisions are probably narrowed down. Most vendors supply
proprietary software packages and little has been done to generate a standard pro-
tocol. Many nontraditional software programs, for an extra fee, are beginning to
develop drivers allowing data acquisition boards to interface with them. Matlab is
one such example and allows us, through its toolboxes, to run hardware-in-the-loop
controllers.
There are two main branches to consider when choosing software for imple-
menting digital controllers on PCs. Graphical-based programs are easy to use and
program, but the graphical overhead limits the throughput when operating controllers
in real time. If the hardware and software support DMA (direct memory access),
then batches of data can be acquired at very high sample rates and post processed
before being displayed or saved. This technique, while great for capturing transients,
has little benefit when operating a PC-based controller in real time where the signal is
acquired once, processed, and the controller output returned to the outside system. If
learning to program is something we do not wish to do, then graphical-based programs
are the primary option and work fine for slower processes incorporating less
advanced controllers. For many slower systems and for recording data these soft-
ware packages work well.
To take full advantage of the PC as our controller, we must be able to program
in a language using a compiler. There are hybrids where the initial design is done using
a graphical-based interface and then the software program is able to compile an
executable program from the graphical model. Sample rates are much faster and
access to unlimited controller algorithms should be all the incentive needed to learn
programming. Programming is greatly simplified when using predefined commands
(drivers) supplied by the manufacturer of the boards. Most boards come with a basic
set of drivers and instructions for the common programming languages like C,
Pascal, Basic, and Fortran.
A brief overview of two possible programming methods is given here. Since
processors typically perform operations much faster than attached input/output
devices, synchronization needs to occur between different devices. Since the processor
normally waits to execute the next instruction, something must tell the processor
when the device is ready to accept data or perform the next execution. The two common options are
to either have the program control the delays or use interrupts to control the delays.
Program control is the simplest and can easily be explained with an illustration of
inserting a loop in your program. While the loop is running, every time it gets back
to the command to read or write from/to the data acquisition card, it retrieves the
data and continues to the next line of code. In one aspect, this ensures that the program
is always operating at the maximum sampling frequency, but it also means you do
not have direct control over the sampling frequency. For example, suppose a
condition in one of the loops might cause a branch in the program to occur, like only
updating the screen every 50 loops. Then obviously the sample time for this loop will
be longer. Also, if the program tries to write to the hard drive and it is being used,
longer delays can be expected. Programming in a Windows environment exacerbates
this since multitasking is expected to occur.
A second method to control the delays is through interrupt driven program-
ming. An interrupt is exactly what the name suggests, a method to halt the current
operation of the processor and perform your priority task. The advantage with using
interrupts is that very consistent sample times are achievable. We base the interrupt
off of an external (independent) clock signal and tell the program how often to
interrupt the process to collect or write data. The catch is this: while this technique
sounds great, a problem arises if the computer has not yet finished processing the
command as required before the next interrupt occurs. In this event the memory
location storing the controller output might not be updated and the old value is sent
again to the system; we may wish to monitor this and alert the user if our program
misses a set number of code instructions, especially when multiple devices may
send interrupt signals and further tie up the processor. All in all, interrupt programming,
when done correctly, is more efficient because it controls the amount of time
spent waiting for other processes, most of which (i.e., moving a mouse or sound
effects) can wait until the processor actually does have time without limiting the
performance of the controller routine.
Examples of both methods can be found and much of the discussion above
applies also to microcontrollers and PLCs, as we will now see.
10.4 MICROCONTROLLERS
Microcontrollers are now used in a surprising number of products: microwaves,
automobiles (multiple), TVs, VCRs, remote controls, stereo systems, laser printers,
cameras, etc. Many of the same terms defined for PC-based control systems will
apply when talking about microcontrollers and PLCs.
First, what is a microcontroller, and why not call it a microprocessor? A
microcontroller, by definition, contains a microprocessor but also includes the
memory and various IO arrangements on a single chip. A microprocessor may be
just the CPU (central processing unit) or may include other components.
Microcomputers are microprocessors that include other components but not necessarily
all components required to function as a microcontroller. For the most part,
when we discuss microcontrollers the two terms tend to be used interchangeably.
Certainly, when discussing microcontrollers, we have in mind the complete
electrical package (processor, memory, and IO) capable of controlling our system, an
example of which is shown in Figure 2.
Microcontrollers have much in common with our desktop PCs. They both have
a CPU that executes programs, memory to store variables, and IO devices. While the
desktop PC is a general purpose computer, designed to run thousands of programs, a
microcontroller is designed to run one type of program well. Microcontrollers are
generally embedded inside another device (i.e., automobile) and are sometimes called
embedded controllers. They generally store the program in read-only memory (ROM)
and include some random access memory (RAM) for storing the temporary data
while processing it. The ROM contents are retained when power is off, while the
RAM contents are lost. Microcontrollers generally incorporate a special type of
ROM called erasable-programmable ROM (EPROM) or electrically erasable-
programmable ROM (EEPROM). EPROM can be erased using ultraviolet light
passing through a transparent window on the chip, shown in Figure 3. EEPROM
can be erased without the need for the ultraviolet light using similar techniques used
to program it; there is usually a limited number of read/write cycles with most
EEPROM memory chips.
Microcontroller power consumption can be less than 50 mW, thus making
possible battery-powered operation. LCDs are often used with microcontrollers to
provide a means for output, but at the expense of battery life.
Microcontrollers range from simple 8-bit microprocessors containing 1000
bytes of ROM, 20 bytes of RAM, 8 IO pins, and costing only pennies (in quantity)
to microprocessors with 64-bit buses and large memory capacities. Today even home
hobbyists can purchase microcontrollers (programmable interface controllers, etc.),
which can be programmed using a simplied version of BASIC and a home com-
puter. A BASIC stamp is a microcontroller customized to understand the BASIC
programming language. Popular microcontrollers we might encounter include
Motorola's 68HC11, Intel's 8096, and National Semiconductor's HPC16040. The
Motorola, for example, comes in several versions, with the MC68HC811E2 contain-
ing an 8-bit processor, 30 I/O bits, and 128 kilobytes of RAM. Since there are so
many variations within each manufacturer's families of models and since the technology
is changing so fast, little space is given here in discussing the details of specific
models. If we understand some of the terminology, as presented here, then we should
be able to gather information from the manufacturer and choose the correct micro-
controller for our system. Now let us examine and define some useful terms to help
us when we are designing control systems using embedded microcontrollers.
To interface the microprocessor with the memory and IO ports, address and data buses
are used to send words between the devices. Words are groups of bits whose length
depends on the width of the data path and thus affects the amount of information sent
each time. An 8-bit microcontroller sends eight lines of data that can represent 256
values. Common word lengths are 4, 8, 16, and 32. Four-bit microcontrollers are still
being used in simple applications, with 8-bit microcontrollers being the most common.
The other factor affecting the microcontroller performance is the clock speed.
This is probably the most familiar performance specication through our exposure
to PCs and the emphasis on processor clock speeds. It is important to know both the
bus width (amount of information sent each clock cycle) and the processor clock
speed since they both directly influence the overall performance.
Finally, we discuss the programming considerations. Microcontrollers ulti-
mately need an appropriate instruction set to perform a specic action. Instruction
sets are dependent on the microprocessor being used, and thus specific commands
must be learned for different microprocessors. Microprocessors work in binary code,
and the instruction sets must be given to the processors in binary format.
Fortunately, short-hand codes are used to represent the binary 0s and 1s. A common
shorthand code is assembly language. Since computer programs (assembler programs)
are available to convert the assembly code into binary, the binary code is
not such a large obstacle to designing microcontrollers.
A third level of programming, and the most useful to the control system
designer, is the use of many high-level computer languages to compile algorithms
into assembly and machine code. Common high-level languages include BASIC, C,
FORTRAN, and PASCAL. There is enough similarity between languages that once
you have programmed in one, you will understand much of another. Since only the
syntax changes and not the program flow chart, learning the language is one of the
easier aspects to developing a good control algorithm. Most engineers, once the flow
chart (logic) of the program is developed, can learn the language-specific commands
required to implement the controller.
The one disadvantage of programming in high-level languages is speed. Even
when converted to assembly language, they tend to result in larger programs that
take longer to run than programs originally written in assembly. This gap is narrowing,
and some compilers convert to assembly code quite efficiently.
disc. What may or may not be included are extra inputs and outputs, software
drivers, displays and user interfaces, etc. Therefore, it is a good idea when choosing
a PLC (or a computer-based system or microcontroller) to compare features based
on what is included and then what the prices are for additional features. Factory
support policies should also be considered, although a company's reputation for
providing it is probably more important.
program lines with the new inputs and perform the desired operations, with the cycle
completed by scanning the outputs (writing the outputs). This is similar to program
control over processor delay times defined earlier. Where today's PLCs confuse the
matter is by combining traditional ladder logic with text-based programming based
on interrupts. As we will see, with some PLCs a user routine can simply be inserted
on a ladder rung and used with an interrupt generator (pulse timer). This section
provides a brief overview to get us started with ladder logic programming. Most
programs can be written using a basic set of commands.
The primary components used in constructing ladder diagrams are rails, rungs,
branches, input contacts, output devices, timers, and counters. Most programs can
be constructed using these simple components. Although each manufacturer has
different nomenclature for text-based programming (i.e., mnemonic for an input
contact), the resulting ladder diagram is fairly standard and easy to understand.
Some programs allow the program to be graphically developed directly in the ladder
framework. In addition, many PLCs now allow special functions to be written using
high level programming languages, like BASIC. Let us look at the primary compo-
nents.
The rails are vertical lines representing the power lines with the rungs being the
horizontal lines connecting the two power lines (in and out). Thus, when the proper
inputs are seen, the input contact closes and energizes the output by connecting the
two vertical rails across the load. The rungs contain the inputs, branches (if any), and
outputs (coils or functions). Most programs use the basic commands and symbols,
listed in Table 5.
Each contact switch, normally open or closed, may represent a physical input
or condition from elsewhere in the program. In addition to a true or false condition,
each contact switch may also represent a separate function including timers (interrupts),
counters, flags, and sequencers. Different manufacturers usually include additional
functions that can be assigned to contacts. What follows is a brief overview of
common instructions applied to contacts.
The most common function of a contact switch is to scan a physical input and
show its result. Thus, when a normally open switch assigned to digital input channel
1 scans a high signal, or input, the contact switch closes signifying the event has
occurred. It might be someone pushing a start button or the completion of another
event signied by the closing of a physical limit switch. This is the most common use
of contact switches. The normally closed switch will not open until an event has
occurred. Contact switches may be external, representing a physical input, or inter-
nal, representing an internal event in other parts of the ladder diagram. Herein lies a
primary advantage of PLCs: All the internal elements can be changed without physically
rewiring the circuits. The internal relays can be configured to act as Boolean
logic operators without physical wiring. The same idea holds true for outputs, or
relays. The term relay, or coil, is derived from the fact that early PLCs energized a
coil to activate the relay that then supplied power to the desired physical component/
actuator. In modern, electronic PLCs, the coils may also be internal or external.
Special bit instructions may also be used to open or close switches. Timers
generally delay the on time by a set amount of time. Delay-on timers can then be
constructed in the ladder diagram to act as delay-off timers. Counters may be used to
keep track of occurrences and switch contact positions after a number has been
reached. Counters can generally be configured as up or down counters, and many
include methods for resetting them if other events occur. Some PLCs also include
functions to shift the bit registers, sequencers, and data output commands.
Sequencers can be used to program in fixed sequences, such as a sequence of motions
to load a conveyor belt, or to drive devices like stepper motors.
Let us quickly design the ladder diagram for creating a latching switch that will
turn an output on when a momentary switch is pressed and turn it off when another
momentary switch is pressed. To begin the ladder circuit, we need to define two
inputs, one output, and a switch dependent on the output relay. One input is nor-
mally open and the other normally closed. The relay may or may not be a physical
relay and may only be used internally. In this case we will choose a marker relay
(does not physically exist outside of the PLC) and use it to mark an event. We are
now ready to construct the ladder diagram given in Figure 5.
This is a circuit commonly used to start and stop a program without requiring
a constant input signal. When the START button is pushed (digital input port
receives a signal), the switch closes and since STOP is normally closed, the relay
RUN is energized. When the RUN relay is energized, it also closes the switch
monitoring its state, and thus the circuit remains active even after the START switch
is released. To stop the circuit, press the STOP (another digital input port signal)
button, which momentarily opens the STOP switch. This deactivates the relay that in
turn causes the RUN switch to open. Now when STOP is released (it closes again),
the circuit is still deactivated. In normal operation the remaining rungs would each
contain a function to be executed each pass. The scan time then refers to the time
required to complete all the rungs of the program and return to the beginning
rung again.
In PLCs allowing for subroutines written in a high-level programming language,
it becomes straightforward to implement our difference equations, which
themselves were derived from the controllers designed in the preceding chapter.
More general concepts relating to implementing different algorithms are discussed
in the next section, but we can mention several items relating to implementing
algorithms within the scope of ladder logic programming methods.
Within the ladder logic framework we are generally provided with two basic
options for implementing subroutines containing our controller algorithms. If the
PLC's set of instructions allows us to insert a timer (interrupt) component, then we
can simply insert the timer on a rung followed by the subroutine (function) contain-
ing our algorithms. Every time the timer generates an interrupt, the subroutine is
executed. What takes place in the subroutine is discussed more in the next section.
This method allows us to operate our controller at a fixed sample frequency regardless
of where the PLC currently is in executing other rungs in the ladder logic
program. Most PLCs attempt to service all interrupts before progressing to the
next ladder rung. However, if we ask too much (i.e., several interrupt driven timers),
the controller will not operate at our desired frequency and will miss samples.
A second basic method is to leave the function in the normal progression of
ladder rungs where the controller algorithm is executed once per pass through all of
the rungs. This is similar to our program control method discussed earlier. In this
case we are not guaranteed a fixed sample period, and the sample period may change
significantly depending on what inputs are received, causing the ladder logic
program to run more or fewer commands each pass.
increase the line separating the two becomes blurred. CAN-based buses (CANopen,
Profibus, and DeviceNet) are examples of lower level buses, while Ethernet and
Firewire are more characteristic of higher level buses.
The largest problem is understanding what, if any, standards are enforced for
the different buses. This has led to both open and proprietary architectures.
Common open communication networks include Ethernet, ControlNet, and
DeviceNet. Using one of these usually means that we can buy some components
from company A and other components from company B and expect to have them
work together. Independent organizations are formed to maintain these standards.
The flip side includes proprietary architectures. Many manufacturers, in addition
to supporting some open standards, also have specialized standards not available
to the public and designed to work with their products (and partner companies)
only. This has the advantage of being able to optimize the company's products and
being better able to support the product. The obvious disadvantage is that we can no
longer interface with devices from other companies.
There are many alternative bus architectures that are beyond the scope of this
text, many of which are specific to certain applications. STD, VME, Multibus, and
PC-104 are examples. One becoming more common is the PC-104, embedded-PC
standard, being used in embedded control applications with a variety of modules
available. As with the common PC, these network devices and protocols are constantly
changing and being upgraded.
Finally, the third and often overlooked element is the user interface. In addi-
tion to the network bus and protocol, the user interface should be considered in
terms of the learning curve, flexibility, and capabilities.
Table 6  General Control Algorithm Outline (Steps / Action)
whereas step 5 is the actual difference equation, step 7 is required for many algorithms
and consists of saving the current error and controller values for use during
the next time the algorithm is called (i.e., u(k − 1), e(k − 1), e(k − 2), . . ., depending
on the algorithm).
The control algorithms in particular, developed as difference equations, will
take an input that was read in by previous program steps, perform some combina-
tion of mathematical functions on the input, possibly in relationship to previous
values or with respect to multiple inputs, and then proceed to send the value to the
appropriate output. In this way, we can change from a PID to a PI to an adaptive to
a fuzzy logic controller using the same physical hardware simply by downloading a
new algorithm (steps 5 and 7) to the programmable microprocessor. PCs, embedded
microcontrollers, and PLCs (with programming capabilities) are capable of this type
of operation.
Finally, remember that Table 6 is a general outline and many more lines of
code are required to deal with all the practical issues that are bound to arise (e.g.,
emergency shutdown routines, data logging, maximums and minimums to prevent
problems like integral windup, etc.). These extras are often fairly specific to the
system being controlled and are based on an expert's knowledge of the system limits
and characteristics.
Finally, this whole idea can be applied in one or two linear dimensions where a
grid is set up with light sources (commonly LEDs) and the X-Y position for an
object can be determined.
The encoders used for velocity are identical to those described above, but now
the frequency of the pulse is desired. Some boards directly accept frequency inputs
while others must be programmed to count a specific number of pulses and divide by
the elapsed time. There is a trade-off between response and accuracy since measuring
the frequency using the distance between one set of pulses is very quick but prone to
very large errors. Averaging more pulses decreases the error but increases the
response time and corresponding lag. When digital signals are transmitted instead of
analog signals, we also have the possibility of using fiber optics instead of traditional
wire.
Figure 8 Schmitt trigger used to obtain square wave pulse train from oscillating signal.
and the rotor magnet will always try to align itself with the new magnetic poles.
Permanent magnet stepper motors are usually limited to around 500 oz-in of torque
while variable reluctance motors may go up to 2000 oz-in of torque. Permanent
magnet motors are generally smaller and as a result also capable of higher speeds.
Speeds are measured in steps per second and some permanent magnet motors range
up to 30,000 steps per second. Resolutions are measured in steps per revolution with
common values being 12, 24, 72, 144, 180, and 200, which range from 30 degrees/step
down to 1.8 degrees/step. Special circuitry allows some motors to half-step (hold
a middle position between two poles) or microstep, leading to 10,000 steps per
revolution or more. The trade-off is between speed and resolution since for any given
configuration the steps per second remains fairly constant.
Variable reluctance stepper motors have a steel rotor that seeks to find the
position of minimum reluctance. Figure 10 illustrates a simple variable reluctance
stepper motor. As mentioned, variable reluctance motors are generally larger in size
and slower than permanent magnet types but have the advantage in torque rating
over their counterparts.
To operate a stepper motor using open loop control (no position feedback), we
must compare our required actuation forces with the stepper motor capabilities.
These specifications are presented in Table 7.
The holding torque is essentially zero when power is lost in variable reluctance
stepper motors. Since permanent magnet motors will stay aligned with the path of
least reluctance, there is always a holding torque even without power, called detent
torque, although it is much less than the holding torque with power on. Most stepper
motors will slightly overshoot each step since they are designed for maximum
response times. Variable reluctance stepper motors generally have lower rotor inertia
(no magnets) and thus may have a slightly faster dynamic response than comparably
sized permanent magnet stepper motors. The pull-in and pull-out parameters and
slew range are shown graphically in Figure 11. Typical of most electric motors, as the
required torque increases, the available speed decreases, with the opposite also being
true. Also, the pull-in torque is greater than the pull-out torque.
An example is given in Figure 12 of how a stepper motor can be used to
provide open loop control of engine throttle position. In this figure a stepper
motor is connected to the engine throttle linkage using Kevlar braided line. In the
laboratory setting this provided an easy way for the computer to control the position
Table 7

Holding torque: The maximum torque that can be applied to a motor without
causing rotation. It is measured with the power on.
Pull-in rate: Maximum stepping rate at which a motor can start when loaded
before losing synchronization.
Pull-out rate: Maximum stepping rate from which a motor can stop when loaded
before losing synchronization.
Pull-in torque: Maximum torque a motor can be loaded to before losing
synchronization while starting at a designated stepping rate.
Pull-out torque: Maximum torque a motor can be loaded to before losing
synchronization while stopping from a designated stepping rate.
Slew range: Range of rates between the pull-in and pull-out rates where the motor
runs fine but cannot start or stop in this range without losing steps.
of the IC engine throttle without requiring the use of feedback. As long as the
required torques and desired stepping rates are always within the synchronized
range, the computer can keep track of throttle position by recording how many
pulses (and the direction) are sent to the motor. In addition to the specialized
application in this example, stepper motors are found in a variety of consumer
products, including many computer printers, machines, and stereo components.
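The pulse-counting bookkeeping described above can be sketched as follows (a hypothetical Python illustration; the class, the step size, and the pulse interface are our assumptions, not from the text):

```python
class OpenLoopStepper:
    """Track position by counting commanded pulses (no feedback).
    Valid only while torque and stepping rate stay within the
    synchronized range, as the text requires."""
    def __init__(self, steps_per_rev=200):
        self.steps_per_rev = steps_per_rev
        self.step_count = 0  # net steps from the home position

    def pulse(self, direction):
        # direction: +1 = one step forward, -1 = one step in reverse
        self.step_count += direction

    def position_deg(self):
        return self.step_count * 360.0 / self.steps_per_rev

throttle = OpenLoopStepper()
for _ in range(50):
    throttle.pulse(+1)   # open the throttle 50 steps
for _ in range(10):
    throttle.pulse(-1)   # back off 10 steps
print(throttle.position_deg())  # 72.0 (40 net steps at 1.8 deg/step)
```

If a pulse is ever lost (torque or rate outside the synchronized range), the count and the true shaft position diverge, which is exactly the failure mode open loop control must avoid.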
rapidly with almost infinite life if properly cooled. As we will see, this provides the
basis for PWM. However, since inductive loads tend to generate a large back voltage
when switched off, we must include protection diodes when using solid-state (transistor)
relays with inductive loads. If the controller is especially sensitive to electrical noise
and spikes, we can also add optical isolators for further protection. Optical isolators
couple an LED with a photo detector so that the switch itself is completely isolated
physically from the high current signals. These are economical devices, easy to
implement, and common components in a variety of applications.
Without delving into the inner construction details and electrical phenomena
describing how they work internally, transistors (and diodes) are made from silicon
materials that either want to give up electrons (n-type) or receive electrons (p-type).
Diodes consist of just two slices (p and n) and act as one-way current switches or
precision voltage regulators. When we connect three slices together, we get the
common bipolar transistor, acting as a switch (or amplifier) that can be controlled
with a much smaller current. This allows us to take signals from components like
microprocessors and operational amplifiers and amplify them to usable levels of
power. The two basic bipolar junction transistors, npn and pnp, are shown in Figure 13.
The basic operation is described as follows: a small current injected into the
base is able to control a much larger current flowing between the collector and
emitter. The possible current amplification is the beta factor, defined in Figure 13.
Normal beta factors are around 100 for a single transistor. If we need higher
amplification ratios, we can use Darlington transistors. Darlington transistors are two
transistors packaged together in series such that the beta ratios multiply and gains
greater than 100,000 are possible.
The transistor is capable of operating in two modes, switching (saturation) and
amplification. Linear amplification is much more difficult, and generally it is best to
use components designed as linear amplifiers for your actuator. Heat, being the
primary destroyer of solid-state electronics, is a much larger problem with linear
amplifiers. Switching is much easier, and as long as we keep the transistor saturated
while operating we should have fewer problems. To explain this fundamental
concept further, let us examine the cutoff, active, and saturation regions for a common
emitter transistor circuit. Figure 14 shows the curves that illustrate these
regions. The manufacturers of such devices typically provide these curves.
The basic definitions are as follows:

IC is the current through the load (and thus also the current through the
transistor between the collector and emitter).
IB is the current supplied to the base from the driver and is used to control
the power delivered to the load.
VCE is the voltage drop across the transistor as measured between the
collector and emitter.
VBE is the voltage difference between the base and emitter.
VCEsat is the voltage drop between the collector and emitter when the
transistor is operating in the saturated region.
If the transistor is not in saturation and is actively regulating the current (linear
amplification), then VCE is not necessarily close to zero. Since electrical power
equals V x I, the power (W) dissipated by the transistor is VCE x IC. Therefore
during linear amplification in the active region, neither VCE nor IC is very close
to zero and heat buildup becomes a large problem. However, if we supply enough
base current, IB, and ensure that the transistor is saturated, then for most transistors
the voltage drop across the transistor, VCE, is less than 1 V and most of the power is
dissipated in the load. This reduces the problem of heat buildup, the main source of
failure in transistors, and leads to the arguments in favor of methods like PWM. The
design task then is determining what the base current needs to be without supplying
so much that we build up heat from the input source. Example 10.2 demonstrates the
process of choosing the required base resistance value that will supply enough
current to keep the transistor operating in the saturated region. As manufacturers'
curves show, the values we choose for our calculations are very dependent
on temperature.
During operation npn transistors are turned on by applying a high voltage (in
digital levels) to the base, while pnp transistors are turned on by applying a low
voltage (ground level) to the base. The connections between the load and transistor
are commonly termed common emitter, common collector, or common base.
In the common emitter connection the load is connected between the positive power
supply and the collector while the emitter is connected to the common (or ground)
signal. With the common collector connection the load is connected to the emitter. In
addition, npn transistors are cheaper to manufacture, and thus the most common
solid-state switches are configured with common emitter connections using npn
transistors.
Power gain is the greatest for the common emitter connection and thus it is the
type most commonly seen. Figure 15 illustrates the common emitter connection and
how the switching occurs when a base current is supplied to the npn transistor.
Digital Control System Components 425
Figure 15 Using an npn transistor to switch a load on and off (common emitter connection).
EXAMPLE 10.2
Referencing the common emitter circuit in Figure 15 and the typical characteristic
curves given in Figure 14, determine the proper resistor value between Vin and the
base to ensure that the transistor remains saturated. Assume that the transistor is
sized to handle the load requirements and that the following values apply to the circuit
and transistor (the transistor values are obtained from the manufacturer's data sheets):

VBE = 0.7 V @ 25°C
VCEsat = 0.7 V @ 25°C
V = 24 V
Rload = 2.3 Ω
β = 1000
Vin = 5 V
We want to keep the transistor operating in the saturated region to minimize heat
buildup problems. First, we can calculate the required current drawn through the
load and passing through the transistor when switched on:

IC = (V - VCEsat)/Rload = (24 - 0.7)/2.3 ≈ 10.1 A
Using the beta factor allows us to calculate the base current required to keep the
transistor in the saturated operating region:

IB = IC/β = 10.1/1000 ≈ 10.1 mA
Finally, we can calculate the maximum resistor value that still provides the proper
base current to the transistor:

RB = (Vin - VBE)/IB = (5 - 0.7)/0.0101 ≈ 425 Ω
To ensure a small safety margin (making sure the transistor remains saturated), we
could choose a resistor slightly less than 425 Ω, resulting in a slightly larger base
current. We do want to be careful not to overdrive the base, as this unnecessarily puts
extra strain on the driver circuit and builds up additional heat in the resistor.
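The arithmetic in Example 10.2 can be verified with a short script (values taken from the example; the variable names are ours):

```python
V = 24.0        # supply voltage (V)
V_CE_sat = 0.7  # collector-emitter saturation voltage (V)
V_BE = 0.7      # base-emitter drop (V)
R_load = 2.3    # load resistance (ohms)
beta = 1000.0   # current gain of the (Darlington-level) transistor
V_in = 5.0      # drive voltage from the logic side (V)

I_C = (V - V_CE_sat) / R_load   # collector (load) current when saturated
I_B = I_C / beta                # minimum base current for saturation
R_B = (V_in - V_BE) / I_B       # maximum base resistor value

print(round(I_C, 2))        # 10.13 (A)
print(round(I_B * 1e3, 2))  # 10.13 (mA)
print(round(R_B, 1))        # 424.5 (ohms)
```

Carrying full precision gives 424.5 Ω, which confirms the advice to choose a resistor a bit under 425 Ω.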
Finally, to complete our discussion of transistors, let us examine field effect and
insulated gate bipolar transistors, or FETs and IGBTs. A common one used in many
controller applications is the MOSFET, or metal oxide semiconductor field effect
transistor. They are very similar to the silicon layer bipolar junction types, but instead
we vary the voltage (not the current) at the control terminal to control an electric
field. The electric field causes a conduction region to form in much the same way that
a base current does with bipolar types. The advantage is this: the controlling current,
commonly called the gate current, sees a very high impedance (on the order of
10^14 Ω), and thus we do not have to worry about proper biasing as we did with the
base junction of our bipolar junction transistors. Figure 16 illustrates this difference.
Since the gate impedance is so high in the MOSFET device, the gate current is
essentially zero and only the gate voltage controls the power amplification. This
allows us to directly interface a digital output with a field effect transistor to control
actuators requiring more power. The only thing that prevents us from directly
driving a MOSFET with a microprocessor (TTL) output is the voltage level. MOSFET
devices require 10 V to ensure saturation, and thus a voltage multiplier circuit (or
step-up transformer) may be required. Since the input impedance is very large, they can
be driven with much larger voltages without damage. To operate them as a linear
amplifier, we would need to vary the gate voltage to control the power delivered to
the load.
IGBT devices combine many of the properties of bipolar and field effect
transistors. Their construction is more complex and uses a MOSFET, an npn transistor,
and a junction FET to drive the load, thus exhibiting a combination of characteristics.
The advantage is that we get the high input impedance of a MOSFET and the lower
saturation voltage of a bipolar transistor. Table 8 compares the characteristics of the
different types of switching devices with the same power ratings.
Takesuye J, Deuty S. Introduction to Insulated Gate Bipolar Transistors. Application Note AN1541,
Motorola Inc.; and Clemente S, Dubashni A, Pelly B. IGBT Characteristics. Application Note AN983A,
International Rectifier.
From Table 8 we see the advantages and disadvantages of the different devices
commonly used as power amplifiers and high speed solid-state relays. MOSFET
devices are generally more sensitive to temperature but are easy to interface and
very fast. IGBT devices are less sensitive to temperature and still easy to interface (no
current draw) but have longer switching times. Bipolar transistors, on the other
hand, are less susceptible to static electricity and are capable of handling larger
load currents. All the transistors are still susceptible to over-voltage and should be
protected when switching inductive loads. Inductive loads, when turned off, produce
a large voltage spike. As shown in Figure 16, a diode (commonly called a flyback
diode) is placed across the load to protect the transistor from the large voltages that
may occur when inductive loads are turned off. Since coils of wire, as found in
virtually all motors and solenoids, are inductors, the flyback diode is a common
addition to transistor driver circuits.
By now it should be clear how we can use transistors as switches and why we
would like to keep them in the saturated region while operating. This is why PWM
has become so popular: when transistors operate in their saturated regions the power
dissipation, and thus the heat buildup within the transistor, is greatly reduced.
10.9.2 PWM
PWM is a popular method used with solid-state switches (transistors) to
approximate a linear power amplifier without the high cost and size. Transistors are very
easy to implement in circuits when they act as switches and operate in the saturated
range, as the preceding section demonstrates. An example of this is the cost of
switched power supplies versus linear power supplies with similar ratings. Of course,
there are trade-offs, and if cost, size, power requirements, and design times were not
considered, we would always choose a linear amplifier. However, since cost tends to
carry overwhelming weight in the decision process, for many applications PWM
methods using simple digital signals and transistors make more sense. First, let us
quickly define what PWM is and the basis on which it works.
PWM can be defined with three terms: amplitude, frequency, and duty cycle.
The amplitude is simply the voltage range between the high and low signal levels, for
example, 0-5 V, where 0 V represents the magnitude when the pulse is low and 5 V
represents the magnitude when the pulse is high. Common voltage levels range
between 5 and 24 V. The frequency is the base cycling rate of the pulse train but
does not represent the amount of time a pulse is on. We normally think of square
wave pulse trains having equal high and low voltage signals; this is where PWM is
different. Although the pulses occur at the set frequency, fixed by the PWM
generator, the amount of time each pulse is on is varied, and thus the idea of duty cycles.
Since the pulse train operates at a fixed frequency, we can define a period as

T = 1/f, where f = frequency (Hz)

The duty cycle is then the amount of time, t, that the pulse train is at the high
voltage level as a percentage of the total period, ranging between 0 and 100%:

Duty cycle = (t/T) x 100%
At a 75% duty cycle, the pulse is on 75% of the time and only off 25% of the time.
This concept and the terms used are shown in Figure 17. Notice that the period never
changes, only the percentage of time during each period that the signal is turned on.
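These definitions translate directly into code (a minimal Python sketch; the function names are ours):

```python
def period(frequency_hz):
    """T = 1/f."""
    return 1.0 / frequency_hz

def duty_cycle_pct(t_on, T):
    """Percentage of each period that the signal is high."""
    return t_on / T * 100.0

def average_voltage(v_low, v_high, duty_pct):
    """Mean value of the pulse train over one period."""
    return v_low + (v_high - v_low) * duty_pct / 100.0

T = period(1000.0)                           # 1 kHz gives a 1 ms period
print(round(duty_cycle_pct(0.75e-3, T), 1))  # 75.0 (%)
print(average_voltage(0.0, 5.0, 30.0))       # 1.5 (V)
```

The last line anticipates the next paragraph: a 5-V amplitude at a 30% duty cycle averages to 1.5 V.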
The idea behind using PWM is to choose the proper frequency such that our
actuator acts as an integrator and averages the area under the pulses. Obviously, if
our actuator bandwidth is very high and near the PWM frequency, we will have
many problems since the actuator tries to replicate the pulse train, introducing
large transients into our system. But, for example, if the pulse frequency is high
enough, with a 5-V amplitude and 30% duty cycle, we expect the actuator current
level to be as if it were receiving 1.5 V. Remember our basic inductor relationship,
V = L di/dt, or solving for the current, i = (1/L) ∫ V dt; thus inductors will take the
switched voltage and integrate it, resulting in an average current. The current the device
actually uses is illustrated in Figure 18.
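The averaging effect can be demonstrated by simulating a simple resistive-inductive load driven by a PWM voltage (a forward-Euler Python sketch; the component values and switching frequency are assumptions chosen for illustration, not values from the text):

```python
R, L = 2.0, 0.01           # ohms and henries, assumed load values
V_high, duty = 5.0, 0.30   # 5 V amplitude, 30% duty cycle (as in the text)
f_pwm = 20000.0            # assumed 20 kHz switching frequency
dt = 1.0 / (f_pwm * 200)   # 200 integration steps per PWM period

i, t, samples = 0.0, 0.0, []
while t < 0.05:            # simulate 50 ms, well past the L/R constant (5 ms)
    phase = (t * f_pwm) % 1.0
    v = V_high if phase < duty else 0.0   # the PWM voltage waveform
    i += dt * (v - R * i) / L             # di/dt = (v - R*i)/L, forward Euler
    t += dt
    samples.append(i)

steady = samples[len(samples) // 2:]      # discard the start-up transient
i_avg = sum(steady) / len(steady)
print(round(i_avg, 2))     # 0.75, i.e. (duty * V_high) / R
```

The inductor integrates the 0/5-V pulse train into a nearly constant current equal to the average voltage (1.5 V) divided by the resistance, with only a small ripple at the switching frequency.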
The goal is to cycle the pulses quickly enough that the current does not rise or
fall significantly between pulses. Most devices that include PWM outputs will have
several selectable PWM frequencies. In general, it is best to use the highest frequency
possible. Some exceptions are when using devices with high stiction (overcoming
static friction) where some dithering is desired. In this case it is possible that too high
a frequency will result in a decrease in performance since it allows the device to
stick. A better guideline during the design process is to use the PWM signal to
average the current and to add a separately controlled dither signal at the appropriate
amplitude and frequency. This allows us to decouple the frequencies and
amplitudes of the PWM and dither signals and optimize both effects, instead of
compromising both.
When we progress to building these circuits, bipolar (or Darlington),
MOSFET, and IGBT types may all be used. The bipolar types are current driven
and the field effect types are voltage driven. Thus, with the bipolar types we need to
size the base resistor to ensure saturation (see Example 10.2). As shown in Figure 19,
if we cannot drive the device with enough current, then we can stage them, similar in
concept to using a single Darlington transistor. In addition, there are many IC chips
available that are specifically designed for driving the different types of transistors.
With the MOSFET devices we only need to ensure that the voltage on the gate
is large enough to cause saturation. This is usually 10 V, and therefore even though
the current requirements are essentially zero, the voltage may be greater than what a
microprocessor outputs. Sometimes a simple pull-up resistor will allow us to
interface MOSFETs directly with microprocessor outputs. A simple PWM circuit using a
MOSFET is given in Figure 20.
Since the resistance of a MOSFET device increases with the temperature at the
junction, it is considered stable. If our load current is too large and the MOSFET
warms up, the resistance also increases, which tends to decrease the current through
the load and resistor. If the opposite were true (as is possible in some other types),
then the transistor tends to run away: as the resistance falls, it continues to draw
more current and build up additional heat, self-propagating the problem.
10.10 PROBLEMS
10.1 List three advantages associated with using a PC as a digital controller.
10.2 List three characteristics associated with PLCs.
10.3 What is the advantage of using differential inputs on computer data
acquisition boards (AD converters)?
10.4 Why is it beneficial to have multiple user selectable input ranges on computer
data acquisition boards?
10.5 What is the minimum change in signal level that can be detected using a 12-bit
AD converter set to read 0-10 V, full scale?
10.6 What is the minimum change in signal level that can be detected using a 16-bit
AD converter set to read 0-5 V, full scale?
10.7 Describe an advantage and disadvantage of using program controlled sample
times in digital controllers.
10.8 Describe an advantage and disadvantage of using interrupt controlled sample
times in digital controllers.
10.9 Describe the primary ways in which a microcontroller differs from a
microprocessor.
10.10 List the two primary factors that affect how fast a microcontroller will
operate.
10.11 PLCs are commonly programmed using what type of diagrams?
10.12 What term describes the elements used to carry the power and ground signals
in ladder logic diagrams?
10.13 Controllers are commonly implemented in microprocessors using algorithms
represented in the form of what type of equation?
10.14 What devices may be used to clean up noisy signals before being read by a
digital input port?
10.15 What term describes the type of optical encoder capable of knowing the
current position without needing prior knowledge?
10.16 A 16-bit rotary optical encoder will have a resolution of how many degrees?
10.17 Describe two advantages of using stepper motors over DC motors.
10.18 What type of stepper motors, permanent magnet or variable reluctance, are
generally capable of the largest torque outputs?
10.19 What type of stepper motors, permanent magnet or variable reluctance, are
generally capable of the largest shaft speed for equivalent resolutions?
10.20 With stepper motors the maximum shaft speed is a function of what two
characteristics?
10.21 List two advantages to using transistors as switching devices as opposed to
linear amplifiers.
10.22 When using a transistor as a solid-state switch, it is important to operate it in
what range when it is turned on?
10.23 What are two advantages to using field effect transistors as switching devices
as compared with bipolar transistors?
10.24 What is the importance of placing a flyback diode across an inductive load
that is driven by a solid-state transistor?
10.25 Describe the three primary values used to describe a PWM signal.
10.26 The PWM frequency must be high enough that the actuator responds in what
way to the signal?
10.27 What type of transistors are easily used in parallel to increase the total current
rating of the system?
10.28 Describe a similar but alternative method to PWM.
10.29 Locate the device characteristics and part numbers and sketch a circuit using
an npn bipolar transistor to PWM control a 10-A inductive load.
10.30 Locate the device characteristics and part numbers and sketch a circuit using a
MOSFET to PWM control a 20-A solenoid coil.
11
Advanced Design Techniques and
Controllers
11.1 OBJECTIVES
Develop the terminology and characteristics of several advanced controllers.
Learn the strengths and weaknesses of each controller.
Develop design procedures to help choose, design, and implement a variety
of advanced controller algorithms.
Learn some applications where advanced controllers are successful.
11.2 INTRODUCTION
In this chapter we examine the framework around advanced controller design. Some
controllers examined here are becoming more common, and it may no longer be
correct to label them advanced, although in reference to classic, continuous, linear,
time-invariant controllers the term is appropriate. The goal here is to introduce
several different controllers, their options, relative strengths and weaknesses, and
current applications. It is hoped that this chapter whets the appetite for additional
learning. The field is very exciting, if not overwhelming, when we follow how fast it
is growing and changing. Here are some introductory comments regarding advanced
controller design.
First, all methods presented thus far in this text relate to the design, modeling,
simulation, and implementation of feedback controllers, that is, controllers that
operate on the error between the desired command and actual feedback. In all
cases, before and including this chapter, the skills for acquiring/developing accurate
physical system models (and a good understanding of the system) are invaluable.
Although in some cases the model is only indirectly related to the actual controller
design, the knowledge of the system (and hence the model) will always help in
developing the best controller possible. In general for feedback controllers,
regardless of algorithms or implementations, the goal is to design and tune them to
capably handle all the system unknowns (always present) that cause the errors in our
system. These errors arise primarily from disturbances and incorrect models.
One problem is that feedback controllers are reactive and must wait for an
error to develop before appropriate action can be taken; thus, they have built-in
limitations. Examples of previous controllers in this category include integral
gains, which can help eliminate steady-state errors, and state space feedback controllers
using full state command and feedback vectors. In addition to being reactive, we
seldom have access to all states and then must use observers to estimate unmeasured
states. As this chapter demonstrates, we can use the measured states to force the
observer to converge to the measured variables. Instead of waiting for errors to
develop, as in the above examples, we can, whenever possible, use feedforward
techniques to provide disturbance input rejection and minimal tracking error. In
some cases these are essentially free techniques and can provide greatly enhanced
performance. Feedforward design techniques are presented in this chapter.
Additional topics include examining multivariable controllers, where we attempt
to decouple the inputs such that each input has a strong correlation with one output.
This helps in controller design and performance.
Finally, in addition to considering the appropriate feedback and feedforward
routines, we can assess the value of using adaptive controller methods to vary the
feedback and feedforward gains for zero tracking error. Model reference adaptive
controllers, system identication algorithms, and neural nets are all examples in this
class and are presented in following sections. Some of these fall into the nonlinear
category, which brings additional concerns of stability and performance.
While in preceding chapters we developed the groundwork on which millions
of controllers have been designed and are now operating successfully, this chapter
illustrates that the previous chapters are analogous to studying the tip of the iceberg
visible above the surface; once we have finished that material, a whole new
world, much larger than the first, awaits us as we dig deeper into the design of
control systems. With the rapid pace at which such systems are developing, we can
easily spend a lifetime uncovering and solving new problems.
different values of mass, damping, etc. In this way we can see the effect that adding
additional mass might have on the stability of our system. An example is this: for a
typical cruise control system we expect the vehicle to have an average mass, including
several passengers. This is our default configuration for which the controller is
designed. A good question to ask, then, is what happens when the vehicle is loaded
with passengers and luggage? How will the control system now behave? Using root
contour lines we can plot the default loci paths, vary the mass, and plot additional
paths to see how the stability changes, allowing us to verify satisfactory performance
at gross vehicle weights.
A second way, made possible by the computer, is to avoid plotting multiple
lines and instead have the computer vary the parameter under investigation, rather
than the gain K, solve for the system poles at each parameter change, and sequentially
plot the pole migrations on the root locus plot to see how the poles move when the
parameter is varied. This does require that the computer solve for the roots of the
characteristic equation multiple times. It is sometimes possible to rearrange the
characteristic equation so that the parameter of interest behaves as a gain, allowing
the standard rules to be used.
Finally, in both methods we still have not answered the whole question of
parameter sensitivity, since we have not examined the rate of change. It is possible
to extract this information from each method. With the contour plots, assuming that
we varied the second parameter by equal amounts (1, 2, 3, ...), we can look at the
spacing between the lines to see the rate of change. If successive loci paths are close
together, the parameter does not cause a large rate of change over the existing loci
paths. However, when they are spaced far apart, it signifies that an equal variation in
the parameter caused a much larger change in the placement of the loci paths, and the
system is sensitive to changes in that parameter. In a similar fashion with the second
method, if we plot the individual points used to generate the loci, the distance
between the points indicates the sensitivity to that parameter. When the points are
far apart it signifies that an equal change in that parameter caused the loci to move
much further along the loci paths. This is the basis for several of the analytical methods
where we calculate the rate of change of root locations as a function of the rate of
change of parameter variations.
EXAMPLE 11.1
Given the closed loop transfer function, use Matlab to vary the parameters m and b
from 1/2 to twice their nominal values, and determine how sensitive the system is to
variations in those parameters. The nominal values are

m = 4, b = 12, and k = 8

C(s)/R(s) = k/(ms^2 + bs + k)
The nominal poles are placed at s = -1 and s = -2 (the roots of 4s^2 + 12s + 8 = 0),
two first-order responses. What we are interested in is how the poles move from these
locations when m and b are varied. To generate these plots in Matlab, we can define
the parameters and the transfer function, vary the parameters, and after each variation
recalculate the pole locations and plot them. The example code is as follows:
%Vary m and b from 1/2 to 2x nominal (m, b, k, Points defined above)
mv = linspace(m/2, m*2, Points);
bv = linspace(b/2, b*2, Points);
for i = 1:Points
    plot(roots([mv(i) b k]), 'x'); hold on;  %poles as m varies
end
First, the mass m is varied, and each time the new roots of the characteristic equation
are solved for and plotted. The resulting root loci paths are given in Figure 1. In
similar fashion, the damping can be varied and the new system poles calculated, as shown
in Figure 2.
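For readers without Matlab, the same pole migration can be computed with the plain quadratic root formula (a Python sketch of the mass variation; the three scale values are our choice for a compact printout):

```python
import cmath

def poles(m, b, k):
    """Roots of the characteristic equation m*s^2 + b*s + k = 0."""
    disc = cmath.sqrt(b * b - 4.0 * m * k)
    return (-b - disc) / (2.0 * m), (-b + disc) / (2.0 * m)

m, b, k = 4.0, 12.0, 8.0
print(poles(m, b, k))            # nominal poles s = -2 and s = -1
for scale in (0.5, 1.0, 2.0):    # vary the mass from half to twice nominal
    p1, p2 = poles(scale * m, b, k)
    print(f"m = {scale * m:4.1f}: {p1:.3f}, {p2:.3f}")
```

Reducing the mass spreads the poles along the negative real axis, while increasing it drives them into a complex conjugate pair, the same behavior described for the figures.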
What is quite evident is that in both cases the parameter variations cause
significant changes in the response of our system. Remembering that the nominal
closed loop pole locations are at s = -1 and -2, due to the mass and damping
variations they either spread out along the negative real axis or become complex
conjugates and proceed toward the imaginary axis and marginal stability. As the
mass and damping parameters move further from their nominal values, the
rate of change, or sensitivity, also decreases, as noted by the spacing between
successive iterations.
Using the concepts described here, most simulation programs will allow us to
change model parameters and reevaluate the new response of the system. Even if we
have large block diagrams or higher level object models, we can still vary different
parameters and observe how the simulated response is affected. Since most
applications are expected to experience variations in the system parameters used in the
model, the procedure is important if we desire to check for global stability even in the
presence of changes in our system. In other words, even if our original root locus
plot predicts global stability for all gains, it is possible with variations in our
parameters to still become unstable. Parameter sensitivity analysis is one tool that is
available for evaluating these tendencies.
There are several physical reasons why it may not be possible to implement
feedforward control. When discussing disturbance input rejection, we need to have
some measurement reflecting (a direct one is best, but not always necessary) the
current disturbance amplitude. Some disturbances are thus a lot easier to remove
than others. With command feedforward, we need (in general) to know the command
in advance; in other words, we need to have access to the future values. For
many processes that follow a known command signal, this is no problem since we
know what each consecutive command value will be. An example is a welding robot
used on an assembly line where the same trajectory is repeated continually. Now let
us examine each method in a little more detail.
The effects of the disturbance are minimal when GC and G1 are large compared
to G2, but the effect of the disturbance is always present, especially if G3 is
large. Now, assuming we can measure the disturbance input, let us examine the
system in Figure 4. The first step again is to find the transfer function between the
disturbance and the system output. This can be done using our block diagram reduction
techniques covered earlier.
C = G2G3 D + G1G2GD D - GCG1G2 C
C(1 + GCG1G2) = (G2G3 + G1G2GD) D

and

C/D = G2(G3 + G1GD) / (1 + GCG1G2)
We can make several observations. The denominator stays the same, and thus
feedforward disturbance input rejection does not affect our system dynamics (i.e., the pole
locations from the characteristic equation). Hence, for example, we cannot use it to
make our system exhibit different damping ratios. The upside of this is that we have
another tool to reduce the effects of disturbances without increasing the proportional
and integral gains and causing more overshoot and oscillation.
Examining the numerator, we see the opportunity to make the numerator
equal to zero. If this is possible, then in theory at least the disturbance has absolutely
no effect on our system output since C/D = 0. To solve for GD, set the numerator
equal to zero, resulting in a desired transfer function for GD:

G3 + G1GD = 0

GD = -G3/G1
This defines the GD required to eliminate the effects of disturbances. In terms of
our cruise control example, GD is found by measuring the wind or grade, using G3 to
convert it to an estimated torque disturbance, and dividing by G1 to go from
estimated torque to estimated volts of command to the throttle actuator. The negative
sign implies that if the disturbance is a negative torque (i.e., downhill), the throttle
command must be decreased. It should be noted that the numerator would also be
zero if G2 equaled zero. However, this implies that no input to the physical system
will cause a change in the system output, and the system itself could never be controlled.
There are several reasons why we cannot completely eliminate the disturbance
eects using controllers on real systems. First, G1 and G3 are models representing
physical components (engine and disturbance to torque relationships) and thus our
rejection of disturbances is dependent (once again) on model quality. Especially in
the case of a typical internal combustion engine, using linear models will limit the
440 Chapter 11
effectiveness of rejecting disturbances since the actual engine is inherently very nonlinear. Second, since we are measuring the disturbance, there will be both measurement errors and noise, along with the problem that our measurement does not solely explain the change in system output. Finally, we have physical limitations. Some disturbances will simply be too large for our controller to handle. For example, at some grade the vehicle can no longer maintain the commanded speed due to power limitations. In this case any controller implementation (feedforward, feedback, adaptive, etc.) will have the same problem in responding to the disturbance.
In conclusion, disturbance rejection is commonly used to reduce tracking error without relying completely on the feedback controller. The system dynamics are not changed, and the effectiveness of the feedforward controller is limited by our modeling capabilities. In discrete systems we are limited to using difference equations based only on the current and past disturbance measurements.
EXAMPLE 11.2
Find the transfer function GD that decouples the disturbance input from the effects on the output of the system given in Figure 5. Assume that the disturbance is measurable. When we close the loop for C/D and set it equal to zero, as shown in this section, we get the requirement that

G3 + G1 GD = 0

GD = -G3 / G1
For our system in Figure 5:

G1 = 5 / ((s + 1)(s + 5))   and   G3 = 2

Thus, to make the numerator always equal to zero (no effect then from the disturbance input), we set GD equal to

GD = -G3 / G1 = -(2/5)(s + 1)(s + 5)
If we were able to implement GD, measure D(s), and assuming our models were accurate, we would be able to cancel out the effects of disturbances. Even though in practice it is difficult to completely cancel out all disturbances due to the assumptions made while solving for GD, we can generally still enhance the performance of our system. In this example, where we have a second-order numerator and zero-order denominator, making GD difficult to implement (it requires future values in a difference equation), we can treat GD as a gain block that cancels out the steady-state gains of G3/G1, or GD = -2. This is feasible to implement and in general, if the amplifier/actuator G1 is relatively fast, still gives good performance, only ignoring the short-term dynamics of the amplifier/actuator.
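As a quick numerical check of this static approximation (a sketch, using the G1 and G3 of this example), we can evaluate the residual disturbance path G3 + G1 GD over frequency with the constant feedforward GD = -2: cancellation is essentially perfect at low frequency and disappears beyond the actuator dynamics.

```python
import numpy as np

# Residual disturbance path |G3 + G1(jw)*GD| with static feedforward GD = -2.
# G1 = 5/((s+1)(s+5)) and G3 = 2 are taken from Example 11.2.
w = np.logspace(-2, 2, 5)          # frequencies, rad/sec
s = 1j * w
G1 = 5.0 / ((s + 1.0) * (s + 5.0))
residual = np.abs(2.0 + G1 * (-2.0))
# near zero at low frequency, approaches |G3| = 2 at high frequency
```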
With this transfer function, the inverse of the physical plant, the numerator becomes equal to the denominator for all inputs. Physically, we are solving the system such that our inputs and outputs are reversed: we use our desired output to calculate the inputs required to give us the physical output. As before, the extent to which feedforward accomplishes improved tracking depends on the accuracy of the models. If our models are reasonably accurate, then large improvements in tracking performance are realized.
Also similar are the physical limitations of our system. A step input for the command will never be realized on the outputs since an infinite effort (impulse) is required to follow the command. It is therefore the job of the designer, and good practice to begin with, to only give feasible command trajectories and avoid unnecessarily saturating the components in the system. In the case where we do not know and are unable to control the command input, we have limited effectiveness in using command feedforward techniques.
If we do know the command sequence in advance, then we can also use the alternative algorithm given in Figure 7. This configuration allows us to precompute all the input values and thus enables us to implement methods like lookup tables for faster processing. The system inputs now include both the reference (original) command and the feedforward command fed through the plant inverse, and the controller should only be required to handle modeling errors and disturbances.
If we close the loop and develop the transfer function, we get

C/R = (GF' GC G1 G2 + GC G1 G2) / (1 + GC G1 G2)

Using GF' we can still make C/R = 1. When we compare this result with the transfer function from the previous configuration, it leads to the following equivalence:

GF' = 1 / (GC G1 G2) = (1/GC) GF

If we wished to precompute all the modified inputs, we would simply take the desired command and multiply it by GF' as shown:

R(with feedforward) = (1 + 1/(GC G1 G2)) R(original)
This has many advantages since the entire command signal can be precomputed and
no additional overhead is required in real time. With many industrial robots where
the same task is repeated over and over and the model remains relatively constant,
EXAMPLE 11.3
Given the second-order system and discrete controller in the block diagram of Figure 9, design and simulate a command feedforward controller. Use a sinusoidal input (must be a feasible trajectory) and compare results with and without modeling errors present. We have a second-order physical system with a damping ratio of 1/2 and a natural frequency of 5 rad/sec, where

G(s) = ωn^2 / (s^2 + 2ζωn s + ωn^2) = 25 / (s^2 + 5s + 25)

We will use the Matlab commands as given at the end of this example to convert it to the z-domain with a zero-order hold (ZOH) and a sample time of 0.05 sec (20 Hz). This results in the discrete transfer function for our system:

G(z) = (0.02865z + 0.02636) / (z^2 - 1.724z + 0.7788)
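The same ZOH conversion can be sketched in Python with scipy (shown here as an assumed stand-in for the book's Matlab c2d call):

```python
import numpy as np
from scipy.signal import cont2discrete

# G(s) = 25 / (s^2 + 5s + 25): zeta = 0.5, omega_n = 5 rad/sec
num = [25.0]
den = [1.0, 5.0, 25.0]

# zero-order-hold discretization at T = 0.05 sec (20 Hz)
numd, dend, dt = cont2discrete((num, den), dt=0.05, method='zoh')
# numd ~ [[0, 0.02865, 0.02636]], dend ~ [1, -1.724, 0.7788]
```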
For this example, a simple zero-pole discrete controller, similar to the one developed in Section 9.5.1, is used as the feedback controller:

GC(z) = (3z - 2) / z

After converting the system model and ZOH into the sampled domain, we can close the loop to find the overall discrete closed-loop transfer function:

C/R = (GF' GC G1 G2 + GC G1 G2) / (1 + GC G1 G2)   and   GF' = 1 / (GC G1 G2)

Since we only have one second-order plant, the transfer functions G1 and G2 are combined and represented as one transfer function, GSys. Now we can form the feedforward transfer function using our discrete controller and system model:

GF' = (z^3 - 1.724z^2 + 0.7788z) / (0.08595z^2 + 0.02177z - 0.05272)
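Since GF' = 1/(GC GSys), its numerator is the product of the two denominators and its denominator the product of the two numerators; this can be sketched with polynomial multiplication (numpy shown as an assumed tool in place of the book's Matlab):

```python
import numpy as np

# GC(z) = (3z - 2)/z and GSys(z) = (0.02865z + 0.02636)/(z^2 - 1.724z + 0.7788)
gc_num, gc_den = [3.0, -2.0], [1.0, 0.0]
gs_num, gs_den = [0.02865, 0.02636], [1.0, -1.724, 0.7788]

# GF' = 1/(GC*GSys): numerators and denominators swap roles
gf_num = np.polymul(gc_den, gs_den)   # z^3 - 1.724 z^2 + 0.7788 z
gf_den = np.polymul(gc_num, gs_num)   # 0.08595 z^2 + 0.02178 z - 0.05272
```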
When we compare the original closed-loop transfer function with the transfer function containing the feedforward block, we realize that our new transfer function can be written as (factor out GC and GSys)

C/R = GC GSys (1 + GF') / (1 + GC GSys)

Compared with the original closed-loop transfer function, the only difference is the factor (1 + GF'), and therefore

R(with feedforward) = (1 + GF') R(original) = (1 + 1/(GC G1 G2)) R(original)

Thus, to get the modified closed-loop transfer function (containing the effects of GF') we simply multiply our original closed-loop transfer function by (1 + GF'), resulting in
Figure 11 Example: system response using Matlab (with command feedforward and no errors in the model).
second-order continuous system, but this one will contain the modeling error. The new continuous system model (with errors) is

G(errors)(s) = ωn^2 / (s^2 + 2ζωn s + ωn^2) = 25 / (s^2 + 10s + 25)

Using Matlab as before, convert it to the discrete equivalent with a ZOH; forming the feedforward block from this erroneous discrete model gives

GF' (with errors) = (z^3 - 1.558z^2 + 0.6065z) / (0.0795z^2 + 0.01429z - 0.04486)
Now when we create the new overall system transfer function, we will use the GF' based on this erroneous model, leaving our original (true) system parameters unchanged. This results in the new closed-loop transfer function, including the feedforward controller:

C(z)/R(z) (model errors) = (0.08596z^5 - 0.1053z^4 - 0.03153z^3 + 0.08758z^2 - 0.03371z + 0.002365) / (0.0795z^5 - 0.1159z^4 - 0.004625z^3 + 0.08072z^2 - 0.03667z + 0.002365)
In contrast to the transfer function where our model matched the system, now the overall system transfer function is no longer unity and we will have some tracking errors. To show this, we can simulate the system again and compare the system output with the desired input. This results in the plot given in Figure 12, where additional tracking errors are evident.
Whereas the first system, without any feedforward loop in place, was attenuated and lagged the input, in this case, using an imperfect feedforward transfer function, we now see larger than desired amplitudes due to the modeling error. The lag between the output and command, however, is removed. Thus we see that this system is fairly sensitive to changes in system damping and care must be taken to use the correct values. This example also serves as motivation for adaptive controllers, as we examine in later sections. For example, if the friction was not constant and
Figure 12 Example: system response using Matlab (with command feedforward and errors
in the model).
Advanced Design Techniques and Controllers 447
sions about how to deal with the extra degrees of freedom. This is where the term optimal control comes from: some rule, performance index, or cost function is defined and the controller is optimized to minimize or maximize this function. The function may involve controller action, accuracy, power used, etc. Common methods developed to minimize these functions include the linear quadratic regulator (LQR), variations of least squares, and Kalman filters. Most optimal controllers are implemented using state variable feedback, similar to the SISO examples. Since, as we mentioned above, we cannot place all of our poles (more gains than poles), the computational effort is greatly increased from that of earlier SISO systems. Fortunately, many design programs have the methods listed above already programmed, and it is not necessary to do this design work manually.
The question then becomes which method to use. If the number of inputs and outputs is relatively small, say three or less, it is quite easy to design controllers using coupled transfer functions. It is possible to go larger, but the matrix sizes grow along with it. Transfer functions are generally more familiar to most people and the relationships are easily defined. On the other hand, the larger the system grows, the easier it is to work in state space. In fact, the techniques to design each optimal controller are virtually identical (procedurally) regardless of system size. State variable feedback methods work very well when good models are used. Since so much is dependent on the model (observers, states, interaction, etc.), poor models rapidly lead to very poor controllers. Thus, care needs to be taken in regard to developing system models. Applications where the models are well defined have seen excellent results when optimal controllers are used with state variable feedback. Caution is required, as the opposite end of the spectrum is also evident.
There are no arguments for the matrices since both continuous (s) and discrete (z) transfer functions can be represented by this configuration. The first step is to determine the amount of cross-coupling; the second is to try to decouple the inputs and outputs. To determine the coupling, the common experimental system identification techniques applied to step inputs and frequency responses can be used. The difference we see here is that for each input we measure two outputs (or more or fewer for general MIMO systems). For example, for the system in Figure 13, if we put a step input on u1, we will get two response curves, one for each output. The plot of y1 can be used to find g11 and the plot of y2 to determine g21. An example is given in Figure 14.
For the example system given, where one output exhibits overshoot and the other decays exponentially, the two curves would likely be fit to the following second- and first-order system transfer functions, both functions of input u1:

g11 = ωn^2 / (s^2 + 2ζωn s + ωn^2)   and   g21 = a / (s + a)

A similar plot could be recorded to determine the remaining two transfer function elements of G. If Bode plots were developed, higher order system models could then be estimated from the resulting plots. Whether the transfer functions have been
above with the desired transfer function matrix when the system is decoupled and solve for the controller that results (i.e., the only unknown matrix of transfer functions in the equation). Let GD be our desired transfer function matrix (generally diagonal if we wish to make the system uncoupled), and set the transfer function matrix (equation of matrices) equal to it.
Setting them equal:

GD = (I + G GC)^-1 G GC

Solving for GC:

GC = G^-1 GD (I - GD)^-1

Now GC contains our desired controller, which when implemented should produce the response characteristics defined in the diagonal terms of the desired transfer function matrix, GD.
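At a single frequency (DC, say) the transfer function matrices reduce to gain matrices, so the design equation can be checked numerically; the plant gains below are assumed purely for illustration:

```python
import numpy as np

G = np.array([[2.0, 0.5], [0.3, 1.5]])   # assumed DC-gain matrix of a coupled 2x2 plant
G_D = np.diag([0.9, 0.9])                # desired decoupled closed-loop DC gains

# GC = G^-1 GD (I - GD)^-1
G_C = np.linalg.inv(G) @ G_D @ np.linalg.inv(np.eye(2) - G_D)

# verify: the closed loop (I + G GC)^-1 G GC reproduces the diagonal GD
closed_loop = np.linalg.inv(np.eye(2) + G @ G_C) @ G @ G_C
```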
This is but one possible method that can be used to design controllers using transfer functions in multivariable systems. In continuous systems it would be highly unlikely that the resulting controller could be easily implemented using physical analog components. With digital controllers, however, we are generally not limited unless the controller ends up not being realizable (not programmable due to the necessity of unknown or unavailable samples) or the physics of the system prevent it from operating the way it should. For this method, the stability of our system is determined by the type of response we chose for the terms in the diagonal of GD. In all cases, feasible trajectories should also be used, avoiding unnecessary saturation of components.
In cases where the system becomes too complicated to try to decouple it completely, we can often achieve good results by making it mostly decoupled, minimizing the off-diagonal terms and maximizing the diagonal terms. This type of approach is our only option if the number of inputs and outputs are not equal. The Rosenbrock approach using inverse Nyquist arrays is based on this idea of making our diagonal terms dominant. Other methods seeking to maximize the diagonal
terms relative to the off-diagonal terms are the Perron-Frobenius (P-F) method, based on eigenvalues and eigenvectors, and the characteristic locus method. The advantage is very important for continuous systems since the controller can often be an array of gains selectively chosen to achieve this trait, and is thus capable of being implemented. Even for digital controllers, the work to implement is greatly reduced. These systems generally take the form shown in Figure 17, where G is the physical system, K is the gain matrix, and P is an optional postsystem compensator acting on the system outputs. To check stability for such controllers, we must close each individual loop (diagonal and coupling terms) and verify that unstable poles are not present. There are additional stability theorems for these controller design techniques, but they require more linear algebra than is covered here.
observers, or estimators as they are commonly called. Since all the states are seldom available, or are too costly to measure, the goal of an observer is to predict, or estimate, the missing states. Just as we determined earlier if a system was controllable, we can also determine if a state space system is observable. Controllability depends on the A and B matrices, such that an input is capable of producing an output and thus controlling the system. In the same way, observers are dependent on the A and C matrices since the system states must correlate with the system output to be observable. A system is observable, then, if the rank of the observability matrix is equal to the system order. The observability matrix is defined as

MC = [ C^T | A^T C^T | (A^T)^2 C^T | ... | (A^T)^(n-1) C^T ]
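This rank test is easy to sketch in numpy (the A and C below are assumed example matrices, not taken from the text; the transposed stacking has the same rank as the form above):

```python
import numpy as np

def observability_matrix(A, C):
    # stack C, CA, CA^2, ..., CA^(n-1)
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

A = np.array([[0.0, 1.0], [-25.0, -5.0]])   # assumed second-order example
C = np.array([[1.0, 0.0]])                  # measure the first state only
M = observability_matrix(A, C)

# observable if rank of MC equals the system order
observable = (np.linalg.matrix_rank(M) == A.shape[0])
```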
the order of the system relative to using a full state observer, and simplifies the algorithm and reduces the computational load. The actual model of a reduced order state observer, shown in Figure 20, is very similar to the full state observer but combines actual and estimated states. The reduced order state observer only estimates the states in the z vector (not affiliated with the z transform) and combines these with the measured states using the partitioned matrix to produce the output vector, Y, used for the feedback loop:

[ C ]^-1
[ T ]

where C is the original state space matrix, and T is a matrix (number of rows equal to the number of states minus the number of measured outputs, number of columns equal to the number of states) which, when partitioned with C, produces a square matrix whose rank equals the system order. There are usually more unknowns than equations, and choices are required when choosing T.
EXAMPLE 11.4
Describe the basic Matlab commands and tools that are available for designing optimal control algorithms. Matlab contains many functions already developed for the purpose of optimal controller design. LQRs are designed using dlqr and Kalman filters (compensators) are designed using kalman. The function dlqr performs linear-quadratic regulator design for discrete-time systems. This means that the controller gain K is in the feedback, where

u(k) = -K x(k)

Then for the closed-loop controlled system:

x(k+1) = A x(k) + B u(k)

To solve for the feedback matrix K we also need to define a cost function, J:

J = Sum( x'Qx + u'Ru + 2x'Nu )

J is minimized by Matlab, where the syntax of dlqr is

[K, S, E] = dlqr(A, B, Q, R, N)

K is the gain matrix, S is the solution of the Riccati equation, and E contains the closed-loop eigenvalues, that is, the eigenvalues of (A - BK).
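An equivalent gain computation can be sketched in Python via the discrete algebraic Riccati equation (scipy shown as an assumed stand-in for dlqr, without the cross term N; the A and B below are an illustrative sampled double integrator, not from the text):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr(A, B, Q, R):
    # solve the discrete Riccati equation for S, then
    # K = (R + B'SB)^-1 B'SA, so that u(k) = -K x(k)
    S = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
    E = np.linalg.eigvals(A - B @ K)   # closed-loop eigenvalues
    return K, S, E

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed double integrator, T = 0.1 sec
B = np.array([[0.005], [0.1]])
K, S, E = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))
# LQR places all closed-loop eigenvalues inside the unit circle here
```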
The function kalman develops a Kalman filter for the model described as

x(k+1) = A x(k) + B u(k) + G w(k)
y(k) = C x(k) + D u(k) + H w(k) + v(k)

where w is the process noise and v is the measurement noise. Q, R, and N are the white noise covariances as follows:

E[ww'] = Q,   E[vv'] = R,   E[wv'] = N

The syntax of kalman is

[Kfilter, L, P, M, Z] = kalman(sys, Q, R, N)
This only serves to demonstrate that many controllers can be designed using programs such as Matlab. Certainly, the brief introduction given here is meant to point us forward to new horizons as control system design engineers. Many references in the Bibliography contain additional material.
Concluding our discussion of multivariable controllers, we see that all our earlier techniques (transfer functions, root locus, Bode plots, and state space system designs) can be extended to multivariable input-output systems. Even if all the states are unavailable for measurement, we can implement full or reduced order observers to estimate the unknown states. The larger problem arises because with MIMO systems a deterministic solution is generally unavailable: we have more unknown gains than equations with which to solve for them. There are many books dedicated to solving this problem in an optimal manner. Even without all the details, it is easy to simulate many of the optimal controllers using programs like Matlab, which has most of the optimal controllers already programmed in as design tools. In all these controllers, it is important to develop good models. In the adaptive controller section we will introduce some techniques that allow us to update the model in real time.
using a simple first- or second-order model, since it captures the dynamics of importance.
In general, though, for accurate models our input-output data must contain sufficient information to determine the best model parameters. Thus, inputs should contain various frequency components within and around the system's bandwidth for maximum information. Some of the components should allow the system to nearly settle to equilibrium. If a particular frequency or amplitude is not part of our input-output sequence, the model will not accurately reflect those conditions.
There are several ways to verify the quality of the model. The first and obvious one is to use the finished model with a set of measured inputs and compare the model (predicted) output with the actual measured output. This has the advantage of being very intuitive and easy to judge the quality from. We can also examine the loss function, which is the sum of errors squared, since it represents the amount of variation not explained by the model.
Finally, most model structures determine the coefficients of the difference equations for a particular model. This is a natural extension of the input-output sampling process used to collect the data and also represents a common structure in how they are implemented back in adaptive controllers and other algorithms. Newer system identification tools like fuzzy logic and neural nets are the exception and use connection properties and function shapes to adapt the model. Difference equations are easy to use since we can easily convert analog models into discrete equivalents, write the difference equations, and use least squares techniques, as the next section shows, to determine the coefficients of the equations.
the correct model. For example, if we choose a first-order model, containing one parameter, the time constant, then we are limited to always minimizing the errors within these limits. Our model will never predict an overshoot and oscillation, even if our system exhibits that behavior. The goal then is to choose a model that includes the significant dynamics (number of zeros and poles). If we have three dominant poles, then a third-order model should produce an accurate model and the system identification routines will converge to a solution.
The general procedure is to model our system using difference equations, since system identification routines are implemented in microprocessors and the discrete input-output data is easy to work with in matrix form. Our beginning point is to define the structure of the difference equation in the form

c(k) = Σ(i=1 to d) a_i c(k-i) + Σ(i=0 to n) b_i r(k-i)
EXAMPLE 11.5
Solve for the two unknown coefficients, a and b, given the two linear equations

7 = 3a + 2b
2 = 6a - 2b

The left side numbers are known outputs and the right side numbers are known inputs. In terms of a difference equation, the first equation would be represented as

c(k) = a c(k-1) + b r(k-1)
7 = a(3) + b(2)

Therefore, if we record the inputs to the system and the resulting outputs, we can easily fit our data to different difference equations to find the best fit (postprocessing the data). To solve for our unknown coefficients, we will write the equations in matrix form as follows:

Φu = y

[ 3   2 ] [ a ]   [ 7 ]
[ 6  -2 ] [ b ] = [ 2 ]

To solve:

Φ^-1 Φ u = Φ^-1 y
u = Φ^-1 y

[ a ]   [ 3   2 ]^-1 [ 7 ]
[ b ] = [ 6  -2 ]    [ 2 ]

The solution is

a = 1
b = 2
This provides the groundwork but is limited since we are fitting our model coefficients based on only two input-output data points.
To generalize the procedure, we need to develop a method that allows us to utilize many input-output data sets and minimizes the error to give us the best possible model. Fortunately, there are many other applications desiring the same solution, and linear algebra methods have been developed that are easily applied to our problem. Many mathematical texts demonstrate that a matrix Φ containing our data, although not square when there are more equations than unknowns, will minimize the total sum of the squares of the error at each data point between the observed known value and the calculated value if the new matrix Φ^T Φ is nonsingular and the inverse exists. Taking the transpose and multiplying by the original matrix results in a square matrix, which then allows the inverse to be used to solve for the solution u:

Φ^T Φ u = Φ^T y

u = (Φ^T Φ)^-1 Φ^T y

where u is the matrix of desired coefficients, y is the matrix of known outputs, and Φ is the matrix of known input points. The solution takes advantage of the linear algebra properties and can be used to solve many different problems where there are more equations than unknowns. Another benefit is that Φ^T Φ, a matrix
The least squares method seeks the solution where the sum of the squared errors is minimized. The total summation of errors squared is defined as

Sum of Errors Squared = Σ(i=1 to N) e_i^2

We now have the tools required to extend the least squares method to a general batch of input-output data used to fit a particular data model.
EXAMPLE 11.6
Solve for the two unknown coefficients, a and b, using three equations.
Here we add an additional equation to the problem solved in Example 11.5. In this case we no longer get exact solutions since the number of equations is greater than the number of unknowns. This case is more typical of what we get in system identification. The three equations are

3a + 2b = 7
6a - 2b = 2
9a + 4b = 18

Φu = y

[ 3   2 ]         [  7 ]
[ 6  -2 ] [ a ]   [  2 ]
[ 9   4 ] [ b ] = [ 18 ]

Now our Φ matrix is no longer square and we cannot simply take the inverse. To solve this system we must now use the equation defined above:

Φ^T Φ u = Φ^T y
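A sketch of the normal-equations solution in numpy (the signs in Φ follow Example 11.5 and should be treated as an assumption, since the extracted text loses them):

```python
import numpy as np

Phi = np.array([[3.0,  2.0],
                [6.0, -2.0],
                [9.0,  4.0]])
y = np.array([7.0, 2.0, 18.0])

# u = (Phi' Phi)^-1 Phi' y, solved without forming an explicit inverse
u = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
# close to (a, b) = (1, 2) from Example 11.5, but no longer exact
```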
EXAMPLE 11.7
Use the least squares method to find the coefficients representing the best second-order polynomial curve fit. The data are given as

u (inputs)    y (outputs)
0             2.22
1             3.34
2             4.90
3             6.90
4             9.34
5             12.22
6             15.54

Therefore we have seven data pairs and three unknowns, b_0, b_1, and b_2. Using the least squares matrix method represented with matrices gives us
[ 1  0   0 ]           [  2.22 ]
[ 1  1   1 ]           [  3.34 ]
[ 1  2   4 ] [ b_0 ]   [  4.90 ]
[ 1  3   9 ] [ b_1 ] = [  6.90 ]
[ 1  4  16 ] [ b_2 ]   [  9.34 ]
[ 1  5  25 ]           [ 12.22 ]
[ 1  6  36 ]           [ 15.54 ]
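The fit can be sketched with numpy's least-squares solver (the coefficients in the comment come from checking the data against the fitted polynomial, not from the text):

```python
import numpy as np

u = np.arange(7.0)
y = np.array([2.22, 3.34, 4.90, 6.90, 9.34, 12.22, 15.54])

Phi = np.vander(u, 3, increasing=True)        # columns [1, u, u^2]
b, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # b = [b0, b1, b2]
# these data happen to fit y = 2.22 + 0.90 u + 0.22 u^2 exactly
```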
The final equation, expressed in more common notation, which best describes the input-output behavior of our system is

C(z)/R(z) = b / (z - a)

c(k) = a c(k-1) + b r(k-1)

Or, rearranging,

c(k+1) = a c(k) + b r(k)

Thus, in the kth row, Φ_k = [c(k)  r(k)], which involves both known inputs and outputs and is called autoregressive. For the output vector, c, the kth row is c(k+1). This can be expressed in matrix form as

[ c(1)    r(1)   ]         [ c(2) ]
[ c(2)    r(2)   ] [ a ]   [ c(3) ]
[  ...     ...   ] [ b ] = [  ... ]
[ c(N-1)  r(N-1) ]         [ c(N) ]
The same procedure can be followed for the second-order denominator, first-order numerator difference equation given as

c(k+2) = a_1 c(k+1) + a_2 c(k) + b_1 r(k+1) + b_2 r(k)

and

[ c(2)    c(1)    r(2)    r(1)   ] [ a_1 ]   [ c(3) ]
[ c(3)    c(2)    r(3)    r(2)   ] [ a_2 ]   [ c(4) ]
[  ...     ...     ...     ...   ] [ b_1 ] = [  ... ]
[ c(N-1)  c(N-2)  r(N-1)  r(N-2) ] [ b_2 ]   [ c(N) ]
Although using least squares methods requires the first several outputs to be discarded (in terms of output data), this seldom poses a problem due to the amount of data collected using computers and data acquisition boards. Many programs, including Matlab, contain the matrix operations required to solve for the coefficients. Also, when we notice how the different columns containing the output data, c(k), are repeated and only shifted by multiples of the sample time, we can save memory in how we construct and store the matrices.
EXAMPLE 11.8
Using the input-output data given, determine the coefficients of the difference equation derived from a discrete transfer function with a constant numerator and first-order denominator. The recorded input and output data are

k     r(k)    c(k)
1     0       0
2     0.5     0
3     1       0.1967
4     1       0.5128
5     1       0.7045
6     0.4     0.8208
7     0.2     0.6552
8     0.1     0.4761
9     0       0.3281
10    0       0.1990
11    0       0.1207

Knowing that our sample time is T = 0.1 sec allows us to take the inverse z transform into the s-domain and find the equivalent continuous system transfer function as

C(s)/R(s) = 1 / (0.2s + 1)

This model was in fact used to generate the example data and is returned, or verified, by the least squares system identification routine. Also, in this example only a 2 x 2 matrix is inverted since we are only solving for two unknowns, even though we have 10 equations.
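The batch fit itself can be sketched in numpy; with T = 0.1 sec and a 0.2 sec time constant, the expected coefficients are a = e^(-0.5) ≈ 0.6065 and b = 1 - e^(-0.5) ≈ 0.3935 (computed here, not quoted from the text):

```python
import numpy as np

r = np.array([0, 0.5, 1, 1, 1, 0.4, 0.2, 0.1, 0, 0, 0])
c = np.array([0, 0, 0.1967, 0.5128, 0.7045, 0.8208,
              0.6552, 0.4761, 0.3281, 0.1990, 0.1207])

# rows of Phi are [c(k-1), r(k-1)]; targets are c(k)
Phi = np.column_stack([c[:-1], r[:-1]])
a, b = np.linalg.lstsq(Phi, c[1:], rcond=None)[0]
# fitted difference equation: c(k) = a c(k-1) + b r(k-1)
```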
To conclude this section we discuss one modification that allows us to weight the input-output data to emphasize different portions of our data; the method is appropriately called weighted least squares. The solution calls for us to define an additional matrix W, called the weighting matrix. W is a diagonal matrix whose terms w_i on the diagonal are used to weight the data. With the weighting matrix incorporated, the new solution becomes

Φ^T W Φ u = Φ^T W y

u = (Φ^T W Φ)^-1 Φ^T W y

where u is the matrix of desired coefficients, y is the matrix of known outputs, Φ is the matrix of known input points, and W is the weighting matrix (diagonal matrix).
If we make W equal to the identity matrix, I, we are weighting all elements equally and the equation reduces to the standard least squares solution developed previously. One common implementation using the weighting matrix is to have every diagonal element slightly greater than the last, w_1 < w_2 < ... < w_N. This has the effect of weighting the solution in favor of later (more recent) data points and deemphasizing the older data points. One common method for choosing the values is to use the equation

w_i = λ^(N-i)

This weights the more recent data points over the past ones and produces a filtering effect operating on the square of the error that can reduce the effects of noise in our input-output data.
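A short numpy sketch of the weighted solution, reusing the Example 11.6 data; the exponential weights w_i = λ^(N-i) are the assumed choice discussed above:

```python
import numpy as np

Phi = np.array([[3.0, 2.0], [6.0, -2.0], [9.0, 4.0]])
y = np.array([7.0, 2.0, 18.0])

lam = 0.9
N = len(y)
W = np.diag(lam ** (N - 1 - np.arange(N)))   # w_i = lam^(N-i): newest weighted most

# u = (Phi' W Phi)^-1 Phi' W y
u = np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ y)
```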
11.6.1.2 Recursive Algorithms Using Least Squares
While the least squares system identification routines from the previous section are useful in and of themselves, they do require a matrix inversion and generally require more processing time than is feasible in between samples. Thus, the techniques described are batch processing techniques, where we gather the data and, after it is collected, proceed to process it. If we wish to perform system identification on-line while the process is running, we can implement a version of the least squares routine using recursive algorithms. This has the advantage of not requiring a matrix inversion and only calculates the change in our system parameters as a result of the last sample taken.
The same basic procedure is used when implementing least squares system identification routines recursively. It becomes a little more difficult to program due to the added choices that must be made. Upon system startup, the input and output
matrices must be built progressively as the system runs. Once a large enough set of data is recorded, it is possible to insert the newest point while simultaneously dropping the oldest point so that the overall size remains constant. The solution has been developed using the matrix inversion lemma, which requires that only one scalar for each parameter be inverted each sample period. Called the recursive least squares (RLS) algorithm, it only calculates the change to the estimated parameters each loop and adds the change to the previous estimate. Computationally, it has many advantages since a matrix inversion procedure is not required. It converts the matrix to a form where the inverse is simply the inverse of a single scalar value.
To develop the equations, let us first define our data vector, w, and, as before, our parameter vector, u. These are both column vectors given as

w^T(k) = [c(k-1), c(k-2), ..., c(k-n_a), r(k-1), r(k-2), ..., r(k-n_b)]

u^T = [a_1, a_2, ..., a_na, b_1, b_2, ..., b_nb]
where y(k) = w^T(k) u (for any sample time k, knowing past values); n_a is the number of past output values used in the difference equation, and n_b is the number of past input values used in the difference equation. Recall that our Φ matrix in the preceding section contained the same data (formed from multiple input-output data points) and can be formed from the w vectors as

Φ = [ w^T(1) ]
    [ w^T(2) ]
    [  ...   ]
    [ w^T(N) ]

The number of columns in Φ is equal to the number of parameters, n_a + n_b, and the number of rows is equal to the number of data points used, N.
The goal in recursive least squares parameter identification is to calculate only the change that occurs in each estimated parameter whenever another data sample is received. First, let's examine the term Φ^T Φ and see how additional data affects it. Define

P(k) = (Φ^T Φ)^-1 = ( Σ(i=1 to k) w(i) w^T(i) )^-1

Then

P^-1(k) = Φ^T Φ = Σ(i=1 to k-1) w(i) w^T(i) + w(k) w^T(k)

Writing P as this summation now allows us to calculate the change in P each time a new sample is recorded, since

P^-1(k) = P^-1(k-1) + w(k) w^T(k)

Remember that the solution for our system parameters is

u = (Φ^T Φ)^-1 Φ^T y
Advanced Design Techniques and Controllers 469
Now that we have current and previous values of our parameter vector, θ, we
can find the difference that occurs with each new sample, and using the matrix
inversion lemma to remove the necessity of performing a matrix inversion each
step allows us to develop the final formulation. Two steps are required: we first
calculate the new P matrix each step and then use it to find the new change in the
parameters. The equations below also include the weighting effects that allow us to
favor recent values over past values. The factor λ is sometimes termed the
forgetting factor, since it has the effect of forgetting older values and favoring
the recent ones.
P(k) = (1/λ) [ P(k−1) − P(k−1) φ(k) φ^T(k) P(k−1) / ( λ + φ^T(k) P(k−1) φ(k) ) ]

θ(k) = θ(k−1) + P(k) φ(k) [ y(k) − φ^T(k) θ(k−1) ]
The general procedure to implement recursive least squares methods is to choose initial
P and θ, sample the input and output data, calculate the updated P, and finally apply
the correction to θ. For online system identification the process operates continually
while the system is running.
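To make the procedure concrete, the two-step update can be sketched in a few lines of pure Python. The first-order example system, its parameter values (a = −0.8, b = 0.4), the forgetting factor, and the input sequence below are all hypothetical choices for illustration, not taken from the text.

```python
# Recursive least squares with forgetting factor lam, sketched for the
# hypothetical first-order model c(k) = -a*c(k-1) + b*r(k-1).

def rls_update(P, theta, phi, y, lam=0.98):
    """One RLS step: update covariance P and parameter estimate theta."""
    # Scalar denominator lam + phi^T P phi -- no matrix inversion needed
    Pphi = [sum(P[i][j] * phi[j] for j in range(2)) for i in range(2)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(2))
    # P(k) = (1/lam) * (P(k-1) - P(k-1) phi phi^T P(k-1) / denom)
    P_new = [[(P[i][j] - Pphi[i] * Pphi[j] / denom) / lam for j in range(2)]
             for i in range(2)]
    # theta(k) = theta(k-1) + P(k) phi(k) * (y(k) - phi^T(k) theta(k-1))
    err = y - sum(phi[i] * theta[i] for i in range(2))
    gain = [sum(P_new[i][j] * phi[j] for j in range(2)) for i in range(2)]
    return P_new, [theta[i] + gain[i] * err for i in range(2)]

# Initialize P as a diagonal matrix with large values, theta as all zeros
P = [[1000.0, 0.0], [0.0, 1000.0]]
theta = [0.0, 0.0]

# Simulate the "true" system and feed samples to the estimator one at a time
a_true, b_true = -0.8, 0.4
r = [1.0, 0.0, 1.0, 1.0, 0.5, 0.0, 1.0, 0.2, 1.0, 0.0, 1.0, 1.0]
c = [0.0]
for k in range(1, len(r)):
    c.append(-a_true * c[k - 1] + b_true * r[k - 1])
    phi = [-c[k - 1], r[k - 1]]          # data vector phi(k)
    P, theta = rls_update(P, theta, phi, c[k])
```

With noiseless data the estimate converges to the true parameters after only a few samples; in practice measurement noise slows this convergence.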
There are several guidelines applicable when implementing such solutions.
First, we must choose initial values for P and θ. If possible, we can simply record
enough initial values, halt the process, batch process the data (as in the previous
section), and calculate P = (Φ^T Φ)^−1 for our initial conditions. Finishing the process
will also result in initial parameter values contained in θ. If interrupting the process
to determine P and θ using this method is not feasible, then it is common to choose
P to be a diagonal matrix with large values for the diagonal terms. The parameter
vector θ can be initialized as all zeros, letting it converge to the proper values
once the process begins. Finally, λ is commonly chosen between 0.95 and 1 for initial
values. When λ = 1 we get the standard recursive least squares solution.
In practice, once a certain number of data points are being used, we commonly
begin to discard the oldest and add the newest value, keeping the length of all vectors
constant. This number is chosen such that the amount of data being used is enough
to ensure convergence to the correct parameter values.
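The batch initialization just described can also be sketched briefly. The first-order model, the generating parameter values (a = −0.8, b = 0.4), and the input sequence are hypothetical; with only two parameters the normal equations (ΦᵀΦ)θ = Φᵀy can be solved directly by Cramer's rule.

```python
# Batch least squares fit of c(k) = -a*c(k-1) + b*r(k-1).
# The 2x2 normal-equation matrix here is Phi^T Phi; its inverse would serve
# as the initial P for a subsequent recursive estimator.

def batch_least_squares(r, c):
    """Solve theta = (Phi^T Phi)^-1 Phi^T y for theta = [a, b]."""
    # Regressor rows phi(k) = [-c(k-1), r(k-1)], targets y(k) = c(k)
    phi = [[-c[k - 1], r[k - 1]] for k in range(1, len(c))]
    y = [c[k] for k in range(1, len(c))]
    # Entries of the normal equations (Phi^T Phi) theta = Phi^T y
    s11 = sum(p[0] * p[0] for p in phi)
    s12 = sum(p[0] * p[1] for p in phi)
    s22 = sum(p[1] * p[1] for p in phi)
    t1 = sum(p[0] * yi for p, yi in zip(phi, y))
    t2 = sum(p[1] * yi for p, yi in zip(phi, y))
    det = s11 * s22 - s12 * s12          # determinant of Phi^T Phi
    a = (s22 * t1 - s12 * t2) / det      # Cramer's rule
    b = (s11 * t2 - s12 * t1) / det
    return a, b

# Hypothetical recorded data generated from the "true" system
a_true, b_true = -0.8, 0.4
r = [1.0, 1.0, 1.0, 0.5, 0.0, 0.0, 1.0, 1.0]
c = [0.0]
for k in range(1, len(r)):
    c.append(-a_true * c[k - 1] + b_true * r[k - 1])

a_est, b_est = batch_least_squares(r, c)
```

Because the data are noiseless, the fit recovers the generating parameters essentially exactly.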
There are many alternative methods to the least squares approach that are not
mentioned here. The least squares approach is very common and fairly straightforward
to program, especially for batch processing. Different subsets of least squares
routines use more robust matrix inversion algorithms like QR or LU decompositions.
Any numerical analysis textbook will describe and list these routines. This is only
an introductory discussion of the least squares methods, and references are included
for further studies. As the next section demonstrates, system identification routines
using recursive least squares (or other methods) add another class of possibilities:
adaptive controllers.
the inputs it receives. These inputs into the adaptive algorithm may be command
inputs, output variables from the process, or external measurements. For example, in
hydraulic controllers the system pressure acts in series as another proportional gain
in the forward loop. Thus if the controller is tuned at 1500 psi and the operating
pressure is changed to 3000 psi, it is likely that the controller will now be more
oscillatory or unstable. The gain scheduling controller would measure the system
pressure and, based on its value, determine the appropriate gain for the system. The
gain scheduling may or may not be comprised of distinct regions. It may follow a
simple rule; for example, if the pressure doubles, the electronic gain is 1/2 of its initial
value. From a practical standpoint, noise in the signals must be filtered out or the
gain will constantly jump around with the noise imposed on the desired signal.
The general approach is to break the system operation into distinct regions and
implement different controllers and/or gains depending on the region of operation.
The regions may be functions of several variables, as listed above. The regions might
be determined by the nonlinearities of the model in such a way that each region is
approximately linear, allowing classic design techniques to be used within each
linearized operating range. The advantage of gain scheduling, the ability to preprogram
the algorithms, is also its weakness, in that it is only adaptive to preprogrammed
events. Because of its simplicity, it does see much use in practice. Since the
changes are predetermined, it also allows us to verify stability, at least with respect
to changes in the controller. Changes in the system parameters may still cause the
system to become unstable.
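The pressure-scheduling rule discussed above can be sketched as follows. The tuning pressure of 1500 psi follows the hydraulic example in the text; the base gain and the filter constant are hypothetical values chosen for illustration.

```python
# Gain scheduling sketch: scale the controller gain inversely with the
# measured system pressure, low-pass filtering the pressure first so noise
# does not make the gain jump around.

BASE_GAIN = 2.0        # hypothetical gain tuned at the reference pressure
REF_PRESSURE = 1500.0  # psi, the pressure at which the controller was tuned

def scheduled_gain(p_filtered):
    """If the pressure doubles, the electronic gain is halved."""
    return BASE_GAIN * REF_PRESSURE / p_filtered

def lowpass(prev, measurement, alpha=0.1):
    """Simple first-order filter for the noisy pressure signal."""
    return prev + alpha * (measurement - prev)

# Example: operating pressure raised from 1500 to 3000 psi
p = 1500.0
for measured in [3000.0] * 100:   # a real signal would vary noisily around 3000
    p = lowpass(p, measured)
gain = scheduled_gain(p)          # roughly half of BASE_GAIN
```

The filter trades responsiveness for smoothness; a faster filter (larger alpha) tracks pressure changes sooner but passes more noise into the gain.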
for that one specific type. Knowing how each gain affects the response (covered in
many earlier sections) is necessary when developing the autotuning algorithm.
A commercial self-tuning PID algorithm (Kraus and Myron, 1984) is presented
here to illustrate the process of implementation with digital controllers. The process
requires the system mass and initial gains as inputs and proceeds to determine the
closed loop step response. The peak times are used to determine the damped natural
frequency, f_d, and the amplitude ratio of successive peaks to determine the decay
ratio, DR. The following two equations (derived by trial and error) then calculate the
equivalent ultimate gain and ultimate period (T_u = 1/f_u) as required for use by
Ziegler-Nichols methods.
K_U = K_initial [ 1 + (8 DR^2)/55 ]    and    f_U = f_d [ 1 + (8 DR^3.5)/1110 ]^3.51
Once K_U and T_U are found, the equations presented in Table 2 of Chapter 5 can be
used, depending on the controller type being implemented. The gain equations found
using the Ziegler-Nichols equations may also be modified further depending on the
desired response characteristics. This is just one example of an empirically based
solution to the autotuning controller.
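Once the ultimate gain and period are available, the final gain lookup is simple. Table 2 of Chapter 5 is not reproduced in this section, so the classic Ziegler-Nichols ultimate-cycle relations are used below as a stand-in; the example K_U and T_U values are hypothetical.

```python
# Ziegler-Nichols ultimate-cycle tuning from the ultimate gain Ku and
# ultimate period Tu (classic textbook relations, standing in for
# Table 2 of Chapter 5).

def zn_pid(Ku, Tu):
    """Return (Kp, Ti, Td) for a PID controller."""
    return 0.6 * Ku, 0.5 * Tu, 0.125 * Tu

def zn_pi(Ku, Tu):
    """Return (Kp, Ti) for a PI controller."""
    return 0.45 * Ku, Tu / 1.2

# Example: hypothetical Ku = 10 and Tu = 2 s from the self-tuning step test
Kp, Ti, Td = zn_pid(10.0, 2.0)
```

The resulting gains are a starting point; as noted above, they are commonly adjusted further to meet the desired response characteristics.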
sarily via the most direct method. That is, they always tend toward stability but at
different rates of decay. Exponentially stable systems decay exponentially to the
equilibrium point, providing a more desirable response.
Two methods of Lyapunov are commonly used, the direct and the indirect.
The indirect method involves finding the critical points of the system and solving
for the linearized system eigenvalues at each critical point. The critical points are
locations where all the derivatives are zero and thus constitute a feasible equilibrium
point for the system. In the common pendulum example, we obviously have
two equilibrium points, one stable and one unstable. By linearizing the state equations
about these two points and determining the eigenvalues, the local stability
around each critical point is found. A variety of numerical methods is used to find
the critical points. The indirect method of Lyapunov is more intuitive and bridges
the gap between our linear system tools and nonlinear stability analysis.
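The pendulum example above can be sketched numerically. Linearizing the damped pendulum θ'' = −(g/L) sin θ − c θ' about an equilibrium gives a 2×2 Jacobian whose eigenvalues decide local stability; the parameter values below are hypothetical.

```python
# Indirect method of Lyapunov, sketched for a damped pendulum.
# Jacobian at an equilibrium theta_eq: [[0, 1], [-(g/L)cos(theta_eq), -c]]

import math

g_over_L, c = 9.81, 0.5   # hypothetical g/L and damping values

def eigenvalues(theta_eq):
    """Eigenvalues of the linearized pendulum at the equilibrium theta_eq."""
    k = g_over_L * math.cos(theta_eq)   # local "stiffness" term
    # Characteristic equation: s^2 + c*s + k = 0
    disc = c * c - 4.0 * k
    if disc >= 0:
        r = math.sqrt(disc)
        return [(-c + r) / 2.0, (-c - r) / 2.0]   # real eigenvalues
    r = math.sqrt(-disc)
    return [complex(-c / 2.0, r / 2.0), complex(-c / 2.0, -r / 2.0)]

hanging = eigenvalues(0.0)        # pendulum down: both real parts negative
inverted = eigenvalues(math.pi)   # pendulum up: one positive eigenvalue
```

The hanging equilibrium yields a lightly damped complex pair (locally stable), while the inverted equilibrium yields one positive real eigenvalue (locally unstable), matching the intuition in the text.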
The second method, often called the direct method, is a rather complex topic
but does not require any approximations to be made during the stability analysis. It
can be used on system models of any order: linear or nonlinear, time varying or
invariant, multivariable, and even models containing nonnumerical parameters. Since
it employs state space notation, it is limited to continuous nonlinearities (which
eliminates many common nonlinearities). The most difficult portion of the method is
generating a positive definite function containing the system variables. This function
is commonly called the V function, or Lyapunov function. The second method is based
on the energy method and can be summarized as follows: if the total energy in the
system is greater than zero (E_T > 0) and the derivative of the energy function is
negative (dE_T/dt < 0), the net energy is always decreasing and therefore the system
is stable. There are mathematical proofs available for this method (see references),
but the general idea is somewhat intuitive. Being based on the energy method, a good
beginning attempt at finding a V function that is positive definite and whose partial
derivatives exist is to use the sum of the kinetic and potential energies in the
system. Several methods for finding Lyapunov functions have been developed; the
Krasovskii, variable gradient, and Zubov construction methods are examples of such.
but does provide advantages over simply linearizing and then designing around a
single operating point.
cally prove a system's stability apart from a mathematical model. Critics of this
position quickly point out that linear models are seldom valid throughout a system's
operating range and therefore also do not guarantee global stability.
This section is only an introduction, designed to explain the basic theory
and implementation techniques of fuzzy logic. The easiest way to begin is to describe
fuzzy logic as a set of heuristics and rules about how to control the system. Heuristic
relates to learning by trying rather than by following a preprogrammed formula. In a
sense, it is the opposite of the word algorithm. It therefore is a human approach
to solving problems. We seldom say it is 96 degrees Fahrenheit and therefore it must
be hot; rather, we say it is hotter than normal. In a similar fashion fuzzy logic is based
on rules of thumb. Thus, instead of the input value being larger or smaller than
another, it may be rather close or very far from the other number. This is done
through the use of membership functions that take different shapes.
Where fuzzy logic works well is with complex processes lacking a good mathematical
model and with highly nonlinear systems. In general, conventional controllers
are as good or better when the model is easily developed and fairly linear, so that
common design techniques may be applied. The question then arises as to under what
circumstances fuzzy logic techniques are particularly attractive. Circumstances in
favor of using fuzzy logic include when a mathematical model is unavailable or so
complex that it cannot be evaluated in real time, when low precision microprocessors
or sensors are used, and when high noise levels exist. To implement a fuzzy logic
controller, we also have several conditions that must be met. First, there needs to
be an expert available to specify the rules describing the system behavior and,
second, a solution must be possible.
Although fuzzy logic is often described as a form of nonlinear PID control, this
limited understanding does not encompass the whole concept. The idea stems from
the many reports on using fuzzy logic with rules written in the same way that PID
algorithms operate. For example, with a SISO system a rule might read: if the error
is positive big and the error change is positive small, then the actuator output is
negative big. This simply results in a nonlinear PD controller. A better application
to illustrate the concept of fuzzy logic is the automatic transmission in vehicles.
Standard control algorithms must make set decisions based on measured inputs;
fuzzy algorithms are able to apply sets of rules to the inputs, infer what is desired,
and produce an output. Because of this inference, a fuzzy controller will respond
differently as different drivers operate the vehicle. For example, a fuzzy system is able
to make judgments about the operating environment based on the measured inputs. This
is where the expert enters the picture. The rules are written by experts who realize that
people prefer not to continually shift up and down on winding roads but do need to
quickly downshift when on a level road and desiring to pass another vehicle. Thus, we
write the rules such that if the throttle is fluctuating by large amounts, as if on a
winding road, the transmission does not continually shift, and yet if the throttle is
relatively constant before undergoing a change, the transmission shifts quickly.
It is along these lines that expert knowledge is used to describe typical driving
behavior and infer what the transmission shift patterns should be. The benefit of
fuzzy logic is the ease with which such rules can be written and implemented in a
controller. Once written, it is also easier for other users to read the rules, understand
the concept, and make changes, instead of poring over many details hidden in
mathematical models.
The best way to demonstrate the concept and terms of fuzzy logic controllers is
by working a simple example. The next section works through a common simple
example of a fuzzy logic controller, controlling the speed of a fan based on
temperature and humidity inputs.
11.9.3.1 Fuzzy Logic Example: Fan Speed Controller
The goal of this example is to introduce the common terms and ideas associated with
fuzzy logic controllers within the framework of designing a fan speed controller.
There are two sensors for the system, temperature and humidity, and they are
used to determine the speed setting of the fan. The rules are written using everyday
language in the same way that we would decide what the fan speed should be. Thus,
for this example, we get to be the expert.
First let us explain some definitions used with fuzzy logic. Whereas classical
theory distinguishes categories using crisp sets, with fuzzy logic we define fuzzy sets,
as shown in Figure 28. Using the temperature analogy, with a crisp set we might say
that any temperature less than 40°F is cold, a temperature between 40°F and 75°F is
warm, and above 75°F is hot. Clearly with the crisp set we would have people who
still think that 41°F is cold even though it is classified as warm. Similarly, 39.9°F is
classified as cold even though 40.1°F would be classified as warm. A more natural
representation is found with the fuzzy set, where everyone might agree that below a
certain temperature (40°F) it is cold and that above a certain temperature (65°F) it is
no longer cold. Between those two temperatures fall people who each think differently
about what should be called cold or warm.
The sets of data are called membership functions, and although straight line
segments are used in Figure 28, they are not required, and different shapes will be shown
later in this example. The expert who has knowledge of the system determines the
appropriate membership function. The level to which someone belongs is called the
degree of membership, μ(x). With the crisp set either you belong or you do not. This
is like buying a membership at a health club. We cannot say, please give me 30%
membership for this month. We either belong or we do not. With the fuzzy set it is
possible to have full membership, no membership, or some intermediate value. Now
we can belong partially to one set and, as Figure 29 shows, at the same time also
belong partially to another set. We see in the fuzzy set that between 40°F and
65°F we belong to both the cold and warm sets at the same time. This is where the
term fuzzy is appropriate, since we are both cold and warm at the same time.
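The cold/warm overlap can be sketched with simple piecewise-linear membership functions. The breakpoints (40°F, 65°F, 85°F) follow the discussion of Figure 28, but the exact shapes in the figure may differ, so this is an illustrative stand-in.

```python
# Fuzzy set sketch for the cold/warm temperature example.

def mu_cold(t):
    """Full membership below 40 F, none above 65 F, linear in between."""
    if t <= 40.0:
        return 1.0
    if t >= 65.0:
        return 0.0
    return (65.0 - t) / 25.0

def mu_warm(t):
    """Rises from 40 F to full membership at 65 F, falls to zero at 85 F."""
    if t <= 40.0 or t >= 85.0:
        return 0.0
    if t <= 65.0:
        return (t - 40.0) / 25.0
    return (85.0 - t) / 20.0

# At 50 F we belong partially to both sets at the same time
cold_50, warm_50 = mu_cold(50.0), mu_warm(50.0)
```

At 50°F these functions give a 0.6 degree of membership in cold and 0.4 in warm, the "both cold and warm" situation described above.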
The scope is the range where a membership function is greater than zero, and
the height is the value of the largest degree of membership contained in the set. For
the fuzzy warm set the scope is 40°F to 85°F and the height is 1. The height of any
function is commonly set to one, although it is not constrained such that it has to be.
There are many possibilities for membership function shapes, as shown in
Figure 30. Ideally we know enough about our system that we initially choose the
membership function that best describes the characteristics. It is not required that we
choose the exact one or even that it exists, and a primary method of tuning fuzzy
logic controllers is by changing the shape of the membership functions. It may help if
it can be described with a mathematical function, although simple lookup tables are
commonly used when implemented in microprocessors.
In addition to triangles, trapezoids, s, z, normal, Gaussian, and bell curves,
and singletons, other shapes can be used. Design tools like Matlab's Fuzzy Logic
Toolbox contain a variety of membership functions, as shown in Figure 31.
If we wish to modify shapes we use what is called hedging. Recall that our
degree of membership is represented by μ(x), which at this point we will assume is
between zero and one. If we raise μ to different powers, we change the shape of the
original membership function described by μ(x). The use of hedging in this way is
shown in Figure 32. Since μ(x) is less than 1, when raised to an exponent greater than
1 it becomes more constricted, and when the exponent N is less than 1 it becomes
more diffused. We may use words like very, less, extremely, and slightly when we
write the rules for our system. We can implement them using hedges. For example,
with our fan speed controller, we may wish to know if it is hot, or very hot.
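The hedging operation reduces to a one-line power function. The exponents used here (2 for "very", 0.5 for "slightly") are common illustrative choices, not values specified in the text.

```python
# Hedging sketch: raising the degree of membership mu(x) to a power reshapes
# the set. A power greater than 1 constricts it, a power less than 1 diffuses it.

def very(mu):
    """Concentrated membership: mu^2."""
    return mu ** 2.0

def slightly(mu):
    """Dilated membership: mu^0.5."""
    return mu ** 0.5

mu_hot = 0.49                       # hypothetical degree of membership in "hot"
very_hot = very(mu_hot)             # smaller: harder to qualify as "very hot"
slightly_hot = slightly(mu_hot)     # larger: easier to qualify as "slightly hot"
```

Since μ lies between 0 and 1, squaring always shrinks the degree of membership and the square root always enlarges it, which is exactly the constricting/diffusing behavior shown in Figure 32.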
Finally, let us look at one more definition before we move more fully into fuzzy
logic design. Now that we can define membership functions using a variety of shapes,
we need to learn how to combine them, since it is possible they may overlap, as when
we were simultaneously cold and warm. We combine the membership
functions using logical operators: And (minimum), Or (maximum), Not,
Normalization, or Alpha-cuts. There are others, but the concepts can be explained
using those listed. As Figure 33 shows, when membership functions overlap, the
different logical operators result in different overall membership shapes.
The logical operators AND, OR, and NOT each result in a different combination
of the fuzzy sets. The norm operator takes the mean of the membership functions,
and the α-cut operator places a line at a level α between 0 and 1; any portions of
membership functions above α are included in the combination. As will become
clearer as we progress through this example, using these operators allows us to
remove (or decide about) some of the ambiguity of being both warm and cold at the
same time.
To begin the process of putting the definitions and concepts together, let us
examine the overall picture of how the definitions above fit into fuzzy logic control
system design. The basic functional diagram of a fuzzy logic controller is given in
Figure 34. The middle block containing our rules is inference based and comes from
our knowledge of how our system should perform. At first glance it seems that since
we start and end with crisp data, the fuzzy logic controller is only extra work on
our part. We certainly are constrained to start and end with crisp data, since sensors
and actuators do not effectively transmit or receive commands like warm or cold.
Our controller still must receive and send signals such as voltages or currents.
However, what the fuzzification and defuzzification allow us to do is describe and
modify our system using rules that we all can understand. As opposed to developing
detailed mathematical formulas describing the rules of our system, we simply
graphically represent our membership functions and define our rules to design our
controller. As mentioned earlier, since it is assumed that a mathematical model
does not exist, we need to have some knowledge of and experience with the actual
system to write meaningful rules. Even in the case of two inputs and one output,
as in this example, where we can ultimately describe the fuzzy logic controller as an
operating surface, the method used to develop the surface is more intuitive and easier to
modify than obtaining the same results through mathematical models, trial and
error, or extensive laboratory testing. As Figure 34 illustrates, we still have a unique
(crisp) output for any given combination of inputs, and fuzzy logic techniques
provide tools to develop the nonlinear mapping between the two crisp sets of data.
Referencing Figure 34, we can define the following terms:
Fuzzification: The process of mapping crisp input values to associated fuzzy
input values using degrees of membership.
Now that all our linguistic variables are defined, we can write the rules. The
rules are simply based on our knowledge of the system, which for this example we all
have some knowledge of. We will use nine rules, shown in Table 3, to map our fuzzy
inputs (temperature and humidity) into a fuzzy output (fan speed). The way the rules
are currently stated, using AND, provides the minimum number of active rules, since both
inputs must be true. If we changed to using OR, we would get the maximum number of
active rules, since either condition could be true to produce a nonzero rule output. For
larger systems the logical operators may be combined in each rule. For two inputs
and one output it is easy to develop an FAM, shown in Figure 39.
To finish this example and see how the actual procedure works, let us choose
an input temperature of 80°F and a humidity of 45%. Figure 40 shows us our
memberships in cold, warm, or hot using our temperature input of 80°F. We
have no membership in cold, 0.25 in warm, and 0.75 in hot. Performing the same
process with our humidity of 45% leads to Figure 41, with a degree of membership
equal to 0.2 in low, 0.8 in medium, and 0.0 in high.
With the two inputs defined and the membership functions calculated, we are
ready to fire the rules, or perform the implication step. Using the AND operator will
provide the minimum outputs for this example, as shown in Table 4. After firing each
rule we see that only four rules are active (4, 5, 7, and 8) and that rules 5 and 7 are
mapped to the same output. If we combine rules 5 and 7 using OR (maximum), we
end up with a degree of membership for medium equal to 0.25. The results from Table 4
can also be graphically illustrated using the FAM, shown in Figure 42.
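The implication step above can be sketched directly. The membership degrees for 80°F and 45% humidity and the nine rules come from the example (Figures 40 and 41 and Table 3); AND maps to minimum and rules sharing an output are combined with maximum (OR).

```python
# Firing the nine fan-speed rules: AND -> min within a rule,
# OR -> max across rules that share the same output set.

temp = {"cold": 0.0, "warm": 0.25, "hot": 0.75}    # memberships at 80 F
humid = {"low": 0.2, "avg": 0.8, "high": 0.0}      # memberships at 45%

# (temperature set, humidity set, output speed set) for rules 1-9
rules = [
    ("cold", "low", "slow"), ("cold", "avg", "slow"), ("cold", "high", "medium"),
    ("warm", "low", "slow"), ("warm", "avg", "medium"), ("warm", "high", "fast"),
    ("hot", "low", "medium"), ("hot", "avg", "fast"), ("hot", "high", "fast"),
]

speed = {"slow": 0.0, "medium": 0.0, "fast": 0.0}
for t, h, out in rules:
    firing = min(temp[t], humid[h])            # AND -> minimum
    speed[out] = max(speed[out], firing)       # OR  -> maximum
```

The result reproduces Table 4: slow fires at 0.2, medium at 0.25 (rules 5 and 7 combined), and fast at 0.75.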
At this point the only item left is defuzzification of the fuzzy output to produce
a crisp output value for the fan speed. As with the inputs, we have many options in
how we choose to combine the different membership functions. First, we can take the
values for the outputs of each membership function after firing the rules (Table 4)
and overlay them with our output membership functions from Figure 38. When each
membership function is clipped, the combined function becomes as shown in Figure
43. During implication, each degree of membership is used to clip the corresponding
output variable, slow, medium, or fast. For aggregation we again have many options
(Figure 33) for combining the three membership functions. Using the maximum
(OR) for each function produces the final function given in Figure 44.
To determine the final crisp output value, the goal of this entire process, we
apply a defuzzification method, some of which are listed here:
Bisector
Centroid: often referred to as center of gravity (COG) or center of area (COA)
Middle of maximum (MOM)
Largest of maximum (LOM)
Smallest of maximum (SOM)
As with the different membership functions, Matlab's Fuzzy Logic Toolbox contains
a variety of defuzzification methods, as shown in Figure 45.
The actual output speeds that result from applying the Centroid, LOM, MOM,
and SOM methods are shown in Figure 46; the crisp output speed differs depending
on the method used.
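The clip-aggregate-centroid sequence can be sketched numerically. The triangular output sets for slow, medium, and fast below are hypothetical stand-ins for Figure 38 (peaks at 25%, 50%, and 90% fan speed), so the crisp speed obtained will not exactly match Figure 46.

```python
# Centroid (COG) defuzzification sketch: clip each output set at its fired
# degree of membership, aggregate with max (OR), then take the centroid.

def tri(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

fired = {"slow": 0.2, "medium": 0.25, "fast": 0.75}   # from firing the rules
sets = {"slow": (0, 25, 50), "medium": (25, 50, 75), "fast": (60, 90, 100)}

num = den = 0.0
for i in range(1001):                     # sample fan speed from 0 to 100%
    x = i * 0.1
    # clip each output set at its fired level, aggregate with the maximum
    mu = max(min(fired[name], tri(x, *sets[name])) for name in sets)
    num += x * mu
    den += mu
crisp_speed = num / den                   # centroid of the aggregate
```

Because the fast set fires strongest (0.75), the centroid lands well above the medium peak, giving a fan speed in the upper half of the range, as expected for a hot, fairly humid input.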
Table 4 Implication: Firing the Rules for Controlling the Fan Speed (Temperature
= 80°F, Humidity = 45%)
1 IF temp is cold (0.0) AND humidity is low (0.2), THEN speed is slow (0.0).
2 IF temp is cold (0.0) AND humidity is avg (0.8), THEN speed is slow (0.0).
3 IF temp is cold (0.0) AND humidity is high (0.0), THEN speed is medium (0.0).
4 IF temp is warm (0.25) AND humidity is low (0.2), THEN speed is slow (0.2).
5 IF temp is warm (0.25) AND humidity is avg (0.8), THEN speed is medium (0.25).
6 IF temp is warm (0.25) AND humidity is high (0.0), THEN speed is fast (0.0).
7 IF temp is hot (0.75) AND humidity is low (0.2), THEN speed is medium (0.2).
8 IF temp is hot (0.75) AND humidity is avg (0.8), THEN speed is fast (0.75).
9 IF temp is hot (0.75) AND humidity is high (0.0), THEN speed is fast (0.0).
Figure 42 Fuzzy association matrix (FAM) for fan speed output after firing the rules and
taking the minimums (AND operator).
Figure 43 Combined membership functions for fan speed output after firing the rules
(temperature = 80°F, humidity = 45%).
learning rule (change in weighting function proportional to the error between the
input and output), and least squares have all been used to structure the learning
process. Where neural nets have been successful is in applications requiring a process
that can learn: applications like highly nonlinear system control, pattern recognition,
estimation, marketing analyses, and handwritten signature comparisons.
As technology develops, the number of neural net applications increases, and
many noncontroller applications, such as modeling complex business or societal
phenomena, are now being developed using the concepts of neural nets.
and other intelligent systems the initial strategies come from experts in the respective
fields.
Hopefully this chapter has stimulated further study of these (and other)
advanced controllers. There are remarkable advancements made almost every day,
and exciting new applications are always being developed. Many of the concepts in
this chapter are founded upon the material in the previous chapters.
11.10 PROBLEMS
11.1 Briefly describe the goal of parameter sensitivity.
11.2 Feedforward controllers are reactive. (T or F)
11.3 Feedforward controllers can be used to enhance what two areas of controller
performance?
11.4 Feedforward controllers change the stability characteristics of our system.
(T or F)
11.5 To implement disturbance input decoupling, we must be able to ___________
the disturbance.
11.6 Describe the role of our system model when used to implement command
feedforward algorithms.
11.7 Describe two possible disadvantages of using command feedforward.
11.8 When are observers required for state space multivariable control systems?
11.9 List two possible advantages of using observers.
11.10 In general, least squares system identication routines solve for the para-
meters of ______________ equations.
11.11 What are the primary differences between batch and recursive least squares
methods?
11.12 Describe an advantage and a disadvantage of adaptive controllers.
11.13 What is the goal of an MRAC?
11.14 Why is an expert of the system being controlled a requirement for designing
fuzzy logic controllers?
11.15 What are linguistic variables in fuzzy logic controllers?
11.16 What is the model for neural net controllers?
11.17 What are two advantages of genetic algorithms?
11.18 Find the transfer function G_D that decouples the disturbance input from its
effects on the output of the system given in Figure 48. Assume that the disturbance is
measurable.
11.19 Given the second-order system and discrete controller in the block diagram of
Figure 49, design and simulate a command feedforward controller when the sample
time is T = 0.1 sec. Use a sinusoidal input with a frequency of 0.8 Hz and compare
results with and without modeling errors present. For the modeling error, change the
damping from 6 to 3. Use Matlab to simulate the system.
11.20 Using the input-output data given, determine the coefficients, a and b, of the
difference equation derived from a discrete transfer function with a constant numerator
and first-order denominator. Use the least squares batch processing method. The
model to which the data will be fit is given as

C(z)/R(z) = b/(z + a)
k     r(k)    c(k)
1     0       0
2     0.5     0
3     1       0.1813
4     1       0.5109
5     1       0.7809
6     0.4     1.0019
7     0.2     0.9653
8     0.1     0.8628
9     0       0.7427
10    0       0.6080
11    0       0.4978
11.21 Using the input-output data given, determine the coefficients, a_1, a_2, b_1, and
b_2, of the difference equation derived from a discrete transfer function with a
first-order numerator and second-order denominator. Use the least squares batch
processing method. The model to which the data will be fit is given as

C(z)/R(z) = (b_1 z + b_2)/(z^2 + a_1 z + a_2)
k     r(k)    c(k)
1     1       0
2     1       0.7000
3     1       1.0169
4     1       1.0168
5     1       1.0010
6     0       0.9993
7     0       0.2999
8     1       -0.0169
9     1       0.6832
10    1       1.0159
11    0       1.0175
11.22 Using the definitions (membership functions, rules, etc.) for the fuzzy logic fan
speed controller in Section 11.9.3.1, determine what the fan speed command would
be if the humidity input is 60% and the temperature is 60°F. Approximate the fan
speed for
1. LOM defuzzification
2. MOM defuzzification
12
Applied Control Methods for Fluid
Power Systems
12.1 OBJECTIVES
Develop analytical models for common fluid power components.
Demonstrate the influence of different valve characteristics on system
performance.
Develop feedback controller models for common fluid power systems.
Examine a case study of using high-speed on-off valves for position control.
Examine a case study of computer control of a hydrostatic transmission.
12.2 INTRODUCTION
Fluid power systems, as the name implies, rely on fluid to transmit power from one
area to another. Two common classifications of fluid power systems are industrial
and mobile hydraulics. Within these terms are a variety of applications, as Table 1
shows. The general procedure is to convert rotary or linear motion into fluid flow,
transmit the power to the new location, and convert back into rotary or linear motion.
The primary input may be an electrical motor or combustion engine driving a
hydraulic pump. The common actuators are hydraulic motors and cylinders. The
downside is that every energy conversion results in a net loss of energy, and efficiency
is therefore an important consideration during the design process. Figure 1 shows the
general flow of power through a typical hydraulic system, where arrows pointing
down represent energy losses in our system. The energy input and output devices
primarily consist of pumps (input), motors, and cylinders. Of primary concern in this
chapter is the energy control component, usually accomplished through the use of
various control valves (pressure, flow, and direction). In addition to these three basic
categories, many auxiliary components are necessary for a functional system.
Examples include reservoirs, tubing or hoses, fittings, and appropriate fluid. It
is also usual procedure to add safety devices (relief valves) and reliability devices
(filters and oil coolers).
Most valves, controlling how much energy is delivered to the load, do so by
determining the amount of energy dissipated before it reaches the load. This method
has two negative aspects associated with it: excessive heat buildup and large input
power requirements. Alternative power management techniques are discussed later
in this chapter in Section 12.6. Since using valves to control the amount of energy
dissipated provides good control of our system, it remains a popular method of
controlling fluid power actuators. This concept of tracking energy levels throughout
our system is shown in Figure 2. We see the initial energy input provided by the pump,
slight losses occurring in the hoses and fittings, energy loss over the relief valve
(auxiliary component) to provide constant system pressure, and the variable energy
loss determined by the spool position in the control valve. The remaining energy is
available to do useful work.
The remaining sections in this chapter provide an introduction to control
valves, how they are used in control systems, strategies for developing efficient and
useful hydraulic circuits, and two case studies of similar applications.
During actual operation, the pressure control valves are in constant movement,
modulating to maintain a force balance. As presented next, different valve designs
have different characteristics. Pressure control valves are generally described by two
models: a force balance on the spool (valve dynamics) and pressure-flow relationships
(the orifice equation).
12.3.2.1 Main Categories
Two main categories exist for pressure control valves:
Pressure relief valves (normally closed [N.C.] valve, which regulates the
upstream pressure);
Pressure reducing valves (normally open [N.O.] valve, which regulates the
downstream pressure).
The respective symbols used to describe the two broad types of pressure control
valves are given in Figure 3. With the pressure relief valve the inlet pressure is
controlled by opening the return or exhaust port against an opposing force, in this
case a spring. The inlet pressure acts to open the valve. For the pressure reducing
valve the operation is similar, except now opening the return or exhaust controls the
outlet pressure port. The downstream pressure is used to close the valve. If the back
pressure never increases, the valve remains open. Thus, the pressure relief valve
controls the upstream pressure and the pressure reducing valve controls the down-
stream pressure.
The pressure relief valve family is the more common of the two and is exam-
ined in more detail in the next section. Most valves are designed to cover a specific
range of pressures. Operation outside of these ranges may result in reduced
performance or failure.
12.3.2.2 Pressure Relief Valves
Pressure relief valves are included in almost every hydraulic control system. Two
common uses for pressure relief valves are
Safety valve, limiting the maximum pressure in a system;
Pressure control valve, regulating the pressure to a constant predetermined
value.
Figure 3 Symbols used for pressure relief and pressure reducing valves.
When used as a safety valve, the goal is to ensure that the valve opens (and relieves
the pressure) before the system is damaged. This configuration does not normally
rely on the valve to modulate the pressure during normal operation. A pressure
control valve is used where the system is expected to always have extra flow passing
through the valve, which then maintains the desired system pressure. The valve in
this configuration is constantly active during system operation. Pressure relief valves
can be used to perform other functions in hydraulic circuits, but the basic steady-
state and dynamic characteristics of the valves remain the same.
Ball type pressure control valves are the simplest in design but have very
limited performance characteristics. As the flow increases, the ball has a tendency
to oscillate in the flow stream, causing undesirable pressure fluctuation. Due to the
limited damping, the ball tends to keep oscillating once it has begun. These oscil-
lations cause fluid-borne noise (pressure waves) that may ultimately cause undesir-
able air-borne noise. Ball type relief valves are primarily used as safety type relief
valves. As shown in Figure 4 the pressure acts directly on the ball and is balanced by
the spring force. Once the spring preload force is exceeded, the valve opens and
begins to regulate the pressure. By changing from the ball to the poppet as
shown in Figure 5, stability is enhanced since the poppet tends to center itself better
within the flow stream. The stability improvement is evident over a wider flow range.
There is still little damping in many poppet type pressure control valves due to
the lack of sliding surfaces. To further enhance stability we can use guided poppet
valves. Guided poppet direct-acting relief valves can pass flows with greater stability
than the previous valves. The added stability comes from the damping provided
by the mechanical and viscous friction associated with the guide of the poppet.
However, this design must flow the relieved oil through the cross-drilled passage-
ways within the poppet. These holes, shown in Figure 6, cause a restriction, thereby
limiting the flow capacity of the valve. A primary limitation of a direct operated
poppet type relief valve is its limited capacity. This limitation occurs since the spring
force must be large enough to counteract (balance) the system pressure acting on the
entire ball or poppet area. In larger valves, the spring force simply becomes unrea-
sonably large.
The differential piston type relief valve is designed to overcome this problem.
While still in the poppet valve family, this design reduces the effective area upon
which the pressure acts. As shown in Figure 7, the pressure enters the valve from the
side and acts only on the ring area of the piston. The remaining piston area is acted
upon by tank pressure. This allows the spring providing the opposing force to be
sized much smaller.
A problem arises in this design when trying to reseat the valve. When the valve
is opened and oil starts to flow over the seat, a pressure gradient occurs across the
poppet surface. This creates a force tending to keep the valve open, which causes
significant hysteresis in the valve's steady-state pressure-flow (PQ) characteristics.
Adding a button to the base of the poppet (shown in Figure 7) reduces the
hysteresis by disturbing the flow path. The button captures some of the fluid's
velocity head and tends to create a force helping to close the piston.
Unfortunately, the button creates an additional restriction to the flow, reducing
the overall flow capacity. Increasing volumetric flows demand similarly increasing
through-flow cross-sectional areas and, to balance the larger forces, stronger springs.
Eventually, a point is reached where these items become too large and a pilot
operated valve becomes the valve of choice.
A pilot-operated valve consists of two pressure control valves in the same
housing (Figure 8). The pilot section valve is a high-pressure, low-flow valve which
controls pressure on the back side of the primary valve. This pressure combines with
a relatively light spring to oppose system pressure acting upon the large effective area
of the main stage.
Several advantages are inherent to pilot-operated relief valves. One, they exhibit
good pressure regulation over a wide range of flows. Two, they only require light
springs even at high pressures. And three, they tend to minimize leakage by using the
system pressure to force the valve closed. During operation, when the pilot relief
valve is closed (system pressure less than control pressure), equal pressures act
on both sides of the main poppet. Since the poppet cavity area is greater than the
inlet area, the forces keep the valve tight against the seat, thus reducing leakage. The
light spring maintains contact at low pressures.
When the system pressure overcomes the adjustable spring force on the pilot
poppet, a small flow occurs from system to tank through the pilot drain. Forces no
longer balance on the main poppet since the pilot stage flow induces a pressure drop
across the orifice, thus lowering the cavity pressure. This pressure differential across
the main spool causes the spool to move, opening the inlet to tank. Once system
pressure is reduced, the valve is once again closed. The valve is constantly modulat-
ing the system pressure when in the active operating region.
Another class of pressure control valves, which may be direct acting or use
pilot stages, is the spool type. The inherent advantage in this type of valve is that the
pressure feedback controlling the valve and the flow paths are now decoupled,
whereas with the poppet valves the flow path and the area upon which the pressure
acts are the same. Figure 9 illustrates a basic direct acting spool type relief valve. In
spool type pressure control valves, the pressure acting on an area is still balanced by
a spring, but the main flow path is not across the same area. There is a piston on the
spool whose lands cover or uncover ports, allowing the system pressure to be
controlled. Lands and ports are discussed in more detail in later sections. Many
times, sensing pistons are used to allow higher pressure ranges with reasonable
spring sizes.
Adding a sensing piston to the direct-acting spool type relief valve allows the
same pressure to be regulated with a much smaller spring. The sensing piston
area and spring forces must still balance for steady-state operation. A force analysis,
based on the pressure control valve model in Figure 10, demonstrates this. Since
both sides of the main spool are at tank pressure, the only force the spring needs to
balance is the pressure acting on the area of the sensing piston, AP. This allows for
much higher control pressures with smaller springs.
closed position. Tighter tolerances to reduce this leakage will generally lead to
more expensive valves.
In general, a characteristic curve may be generated for a relief valve, revealing
three distinct operating regions of the valve, given in Figure 11. The first region is
where the supply pressure is not large enough to overcome the spring force acting on
the spool. The valve is closed, sealing the supply line from the tank line. The second
region occurs when the supply pressure is large enough to overcome the spring force
but not large enough to fully compress the spring. This is called the active region
of the valve. In the active region, the relief valve attempts to maintain a constant
system pressure at some preset value. The system should be designed to ensure that
the valve operates in this region. The change in pressure from the beginning of the
active region (cracking pressure) to the end is often called the pressure override. The
third region occurs when the supply pressure is large enough to completely compress
the spring. This only occurs when the relief valve is too small to relieve the necessary
flow while maintaining a constant pressure. When the valve is in this region, it acts as
though it were a fixed orifice, and the pressure drop can be deter-
mined from the orifice equation.
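The three operating regions can be captured in a small piecewise steady-state model. The sketch below is illustrative only; the cracking pressure, full-compression pressure, and orifice coefficient are assumed values, not data from the text:

```python
import math

def relief_valve_flow(p, p_crack=10.0e6, p_full=11.0e6, k_v=2.0e-7):
    """Steady-state relieved flow (m^3/s) versus supply pressure (Pa).

    Region 1 (closed):    p < p_crack; the spring holds the valve shut, no flow.
    Region 2 (active):    p_crack <= p < p_full; spool partially open; the
                          band p_full - p_crack is the pressure override.
    Region 3 (saturated): p >= p_full; spring fully compressed; the valve
                          behaves as a fixed orifice, Q = k_v * sqrt(p).
    """
    if p < p_crack:
        return 0.0
    if p < p_full:
        opening = (p - p_crack) / (p_full - p_crack)  # fraction of full lift
        return opening * k_v * math.sqrt(p)
    return k_v * math.sqrt(p)
```

A system designed to regulate pressure should operate in region 2; reaching region 3 indicates an undersized valve.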
The different valve types discussed exhibit different steady-state and dynamic
response characteristics. Spool type valves, while slightly slower, have greater damp-
ing and therefore less overshoot and more precise control. Spool type valves come
the closest to approaching an ideal relief valve curve. Poppet valves open quickly but
are generally underdamped and tend to oscillate in response to a step change in
pressure. Pilot-operated poppet relief valves generally give better controllability than
direct acting poppet relief valves, as noted in the steady-state PQ curve in Figure 12.
Additional valves in the pressure control category have been developed for
different applications. The symbols for several of these valves are shown in Figure
13. The unloading valve is identical to the relief valve with the exception that control
pressure is sensed through a pilot line from somewhere else in the system. Therefore,
flow through the valve is prevented until pressure at the pilot port becomes high
enough to overcome the preset spring force. This valve may be used to unload the
circuit based on events in other parts of the system. Significant power savings are
possible with this type of valve since the main system flow is not dumped at high
pressures.
The positioning of a directional control valve is of two kinds: infinitely
variable and distinct positions. This difference is reflected in the symbols given in
Figure 18. Distinct position valves have more limited roles in control systems since
the valve position cannot be continuously varied.
The number of ways a valve has is equal to the number of flow paths. Common
two-way and four-way valves are shown in Figure 19. Center positions are com-
monly added to describe the valve characteristics around null spool positions. The
number of lands varies from one on the simplest valves to three or four on common
valves to five or six on more complex valves. Each land is like a piston mounted on a
central rod which slides within the bore of the valve body. The rod and pistons
together are called a spool, hence spool type valves. As the spool moves, the lands
(or pistons) cover and uncover ports to provide passageways through the valve
body. Common two, three, and four land valve symbols are shown in Figure 20.
The center configuration is one of the most important characteristics of direc-
tional control valves used in hydraulic control systems. There are three common
classifications of center configurations:
Under lapped (open center);
Zero lapped (critical center);
Over lapped (closed center).
An under lapped, or open centered, valve has limited use because a constant power
loss occurs in the center position. In addition, an under lapped valve will have lower
flow gain and pressure sensitivity. Open center valves are more common in mobile
hydraulics, where they are used to provide a path from the pump to reservoir when
the system is not being used (idle times). This provides significant power savings
since the pump is not required to produce flow at high pressure. The flow paths are
always open, as shown in Figure 21.
A zero lapped, or critical center, valve has a linear flow gain as a function of
spool position. This requires that the lands be very slightly over lapped to account
for spool-to-bore clearances. This configuration is typical for most servovalves and is
shown in Figure 22.
The critical center valve is attractive for implementation in a control system since
a linear model can be used with good results. In addition, response times can be
faster than closed center valves since any spool movement away from center imme-
diately results in flow. In general, critical center valves will be between 0% and 3%
overlap, with most being less than 1% (quality servovalves).
An over lapped valve has lands wider than the ports and exhibits deadband
characteristics in the center position, as shown in Figure 23. Although there is over-
lap with the spool, even in the center position there is a leakage flow between ports
due to the clearances needed for the spool to move. Although minimal in amount, it
may have a great effect on stopping a load. Proportional valves generally exhibit
varying degrees of overlap, and the amount of overlap is generally related to the cost.
In addition to these, there are many specialty configurations designed for
specific applications. One of the most common types is the tandem center valve,
often used to unload the pump at idle conditions (for energy savings) while blocking
the work ports to hold the load stationary. The graphic symbol is shown in Figure 24.
There are many other center types, including blocking the P port and connect-
ing A and B to tank in the center position. This is a common type where the valve is
the first stage actuator for a larger spool. The center type allows the large spool to
center itself (spring centered, no trapped volume) when the smaller first stage valve is
centered. Additional center types allow for motor freewheeling, different area ratios
for single ended cylinders, etc. The different specialty centers are often designed using
grooves cut into the valve spool. By changing the size and location of the grooves,
the different center types and ratios can be designed into the valve operation. The
grooves are also used to shape the metering curve and are a factor in determining the
flow gain of the valve.
Due to their limitations, on-off directional control valves are seldom used in
applications where accurate control of the load is required. More time in this section
is spent discussing proportional and servo directional control valves, while a case
study using on-off valves in place of directional control valves is presented later.
Figure 27 Double solenoid with centering springs and spool position measurement for
electronic feedback (proportional valve).
centered between the two nozzles and equal flows are found on the left and right
outer paths. Since each path has an equal orifice, the pressure drops are the same and
each end of the spool sees equal pressure. The spool remains centered. With a
counterclockwise torque, the right nozzle size is decreased while the left nozzle outlet
area is increased. This results in a reduced right-side flow and an increased left-side
flow. Because of the orifices, the right side (reduced flow) has less of a pressure
drop than the left, and the pressure imbalance accelerates the spool to the left. A thin
feedback wire connecting the spool and flapper creates a correcting moment that
balances the torque motor. In this fashion the spool position is always proportional
to the torque motor current (after transients decay). The advantage of this system is
that the electrical actuator (torque motor) only needs a small force change that is
immediately amplified hydraulically. The resulting large hydraulic force rapidly
accelerates the spool, leading to high bandwidths.
Since the valve depends on two orifices and small nozzles, it is very sensi-
tive to contamination. In addition, there is a constant leakage flow through the pilot
stage whenever pressure is applied to the valve. Most servovalves include internal
filters to further protect the valve. Servovalves are a good example of using simple
mechanical feedback devices (the feedback wire) to significantly improve a compo-
nent's performance. Because it is a feedback control system, the same issues of
stability must be addressed during the design process.
The design of the servovalve results in two advantages over typical propor-
tional valves regarding use in closed loop feedback control systems. First, the
hydraulic amplification that takes place in the servovalve enables it to have greater
bandwidths than proportional valves. Second, they are usually held to tighter toler-
ances and designed to be critically centered with zero overlap of the spool. As the
next section illustrates, this leads to significant performance advantages and mini-
mizes the nonlinearities.
Figure 31 Bridge circuit model for a four-way directional control valve (including the load
and coefficients).
The area ratio is dimensionless and represents the amount of valve opening. The
equation can be further simplified by defining the percentage of spool movement, x, as

x = A/A_FO,  -1 <= x <= 1

x can be treated as a dimensionless control variable representing the full range of
spool movement. When x = 1, the valve is fully open and we can find the general
valve coefficient. For open and closed center valves, this equation is true only while
the valve is in the active operating region.

Q = K_V x sqrt(P);  when fully open and x = 1, then K_V = Q_FO/sqrt(P_FO)

Since valves are generally rated at fully open conditions for flow at a rated pressure
drop, the final substitution using the rated flow and pressure, Q_r and P_r, leads to

K_V = Q_r/sqrt(P_r)
In the above equations K_V is the valve coefficient for the entire valve since the model
accounted for the total valve pressure drop. Two lands are active when the valve is
shifted and the total pressure drop must be split between the two. Since the same
flow rate is seen by each active land, a symmetrical valve would have equal pressure
drops. The following analysis allows both coefficients to be determined. The reduced
problem consists of the two valve orifices and the load orifice, as shown in Figure 32.
Using the Kirchhoff voltage law analogy (pressure drops) and the flow relation-
ships for each pressure drop allows us to relate the rated flow to each pressure drop:

P_r = P_PA + P_BT = Q_r^2/K_PA^2 + Q_r^2/K_BT^2

Now we can factor out Q_r^2 and divide through on both sides, resulting in

P_r/Q_r^2 = 1/K_PA^2 + 1/K_BT^2

This is simply the parallel resistance law (where the resistance is 1/K^2). Now, take the
reciprocal of both sides:

Q_r^2/P_r = K_PA^2 K_BT^2/(K_PA^2 + K_BT^2)

When we take the square root of both sides we achieve a familiar term on the left-
hand side of the equation:

Q_r/sqrt(P_r) = K_PA K_BT/sqrt(K_PA^2 + K_BT^2)

Comparing this with our initial definition for the valve coefficient allows us
to relate the total valve coefficient to the individual orifices as

K_V = Q_r/sqrt(P_r) = K_PA K_BT/sqrt(K_PA^2 + K_BT^2)

The value for K_V just developed is for one direction of spool movement. Assuming
the valve is symmetrical and defining two valve coefficients, one for each direction of
flow through the valve, allows the final parameter to be defined.
Thus, the final general representation of the PQ characteristics of a directional
control valve, where K_V is the valve coefficient for both active lands, is given as

Q = K_V x sqrt(P_S - P_L - P_T)

and if P_T = 0, then

Q = K_V x sqrt(P_S - P_L)
These definitions will be helpful when simulating the response of valve controlled
hydraulic systems, and many of the intermediate equations are used when calculating
valve coefficients, individual orifice pressure drops, etc. Although a relatively simple
equation can be used to describe the steady-state behavior of the valve, we need
linear equations describing the PQ relationships if we wish to use our standard
dynamic analysis methods from earlier chapters. The linear coefficients may be
obtained by differentiation of the PQ equations (see Example 2.5) or graphically
from plots developed experimentally, as the next several sections demonstrate.
12.4.1.1 PQ Metering Characteristics for Directional Control Valves
The PQ curve is produced when x is held constant at several values between -1 and
1. It can be plotted using the valve equation once our valve coefficient is known:

Q_L = K_V x sqrt(P_S - P_L - P_T)

From the equation it is easy to see that the flow goes to zero as the load pressure
approaches the supply pressure. At this condition, there is no longer a pressure drop
across the valve, and thus no flow through the valve. Since P_L is varied and is under
the square root, we get a nonlinear PQ curve. Plotting the equation as a function
of load pressure, P_L, and load flow, Q_L, produces the PQ characteristic curves of the
valve given in Figure 33. Each line represents a different spool position, x.
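As a minimal numeric sketch of these relationships, the helpers below compute K_V from a catalog rating, recover the per-land coefficient for a symmetric valve, and evaluate the PQ equation in the active region; the rated values used here are hypothetical:

```python
import math

def valve_coefficient(q_rated, p_rated):
    """Overall valve coefficient from the rated point: K_V = Q_r / sqrt(P_r)."""
    return q_rated / math.sqrt(p_rated)

def land_coefficient(k_v):
    """Per-land coefficient for a symmetric valve.
    From K_V = K_PA*K_BT / sqrt(K_PA^2 + K_BT^2) with K_PA = K_BT = K,
    K_V = K / sqrt(2), so K = K_V * sqrt(2)."""
    return k_v * math.sqrt(2.0)

def load_flow(x, p_load, p_supply, k_v, p_tank=0.0):
    """Q_L = K_V * x * sqrt(P_S - P_L - P_T), valid in the active region."""
    dp = p_supply - p_load - p_tank
    return k_v * x * math.sqrt(dp) if dp > 0.0 else 0.0
```

Sweeping p_load from zero to p_supply at a fixed x reproduces one line of the PQ family in Figure 33.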
Several interesting points can be made from the PQ metering curve. First, once
the required load pressure, P_L, approaches the system pressure, the load flow, and
thus the actuator movement, becomes zero. Second, even when there is no load
pressure requirement (i.e., retracting a hydraulic cylinder), there is a finite velocity,
called the slew velocity, because of the pressure drops across the valve. Finally, the
load flows continue to increase when a positive x and a negative load are encountered.
This is called an overrunning load and will be discussed more in later sections.
Overrunning loads are prone to cavitation and excessive speeds.
These data may also be obtained from laboratory tests using a circuit schematic
such as the one given in Figure 34. Obtaining experimental data verifies the analy-
tical models and is often the easiest way to get model information for the design of
feedback control systems.
Figure 35 Flow metering curve for different center configurations of directional control
valves.
Experimentally, the same test circuit given in Figure 34 is used for measuring
the flow metering characteristics. The variable load valve (orifice) allows different
valve pressure drops, developing a family of flow metering curves for the valve. The
flow metering curve gives additional information and allows us to determine the
Pressure drops across the lands;
Valve coefficients;
Flow gain;
Deadband;
Linearity;
Hysteresis.
The equation used to generate the flow metering data is identical to the one used for
the PQ curve, but in developing the flow metering plots the pressure drop across the
valve is held constant and the valve position is varied. Additional information about
the valve linearity is available since the nonlinear term in the equation is held con-
stant. This implies that in the active region of a valve we should see linear relation-
ships.
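Because the active region should plot as a straight line, the flow gain can be estimated from measured metering points with an ordinary least-squares slope. A minimal sketch (the sample data in the test are invented; real points would come from the test circuit of Figure 34):

```python
def flow_gain(x_vals, q_vals):
    """Least-squares slope of flow Q versus spool position x, computed over
    points that lie in the active region of the flow metering curve."""
    n = len(x_vals)
    mean_x = sum(x_vals) / n
    mean_q = sum(q_vals) / n
    num = sum((x - mean_x) * (q - mean_q) for x, q in zip(x_vals, q_vals))
    den = sum((x - mean_x) ** 2 for x in x_vals)
    return num / den
```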
The flow metering plot also highlights the differences between valve
types (servovalve, proportional valve, open center) and alerts the designer to the
quality of the valve. The PQ curves will look similar for similarly sized valves regard-
less of the valve overlap, since the curves are generated with the valve in the active
region and only two orifices are active for each type of valve. The valve coefficients
are available from both plots. An example flow metering plot from a typical propor-
tional valve is given in Figure 36, showing the various operating regions. As expected,
the plots are fairly linear in the active region. The designer of a control system that uses
proportional valves should choose a valve size that enables all the desired outputs to
be achieved while the valve operates in the active region.
Figure 36 Flow metering curve for a typical closed center directional control valve.
even from the supply directly to tank. The pressure metering curves for actual direc-
tional control valves have a slope to them, as depicted in Figure 37.
The pressure metering curve fills in the missing gaps and provides information
on valve behavior in the deadband region where the load flow is zero. The deadband
region may be less than 3% for typical servovalves and up to 35% for some propor-
tional valves. This characteristic is critical to control system behavior for such
actions as positioning a load with a cylinder and maintaining position with changing
loads. This is because while we are controlling the position, for example, of a
cylinder, holding a constant position should result in no load flow through the
valve. Thus, it is the pressure metering curve along which the valve is operating
while maintaining a constant position. If a valve has a large deadband, it must travel
across it before a complete pressure reversal can take place to hold the load. The
slope of the curve in Figure 37 is generally designated the null pressure gain, or
pressure sensitivity, of the valve. As the valve overlap is increased, the pressure
gain is decreased. For this reason, servovalves, due to minimal spool overlap, are
capable of much better results in position control systems.
Figure 37 Pressure metering curve for a typical closed center directional control valve (no
load flow through valve).
P_A A_BE - P_B A_RE - F_L = m dv/dt = m d^2x/dt^2

If we assume that the acceleration phase is very short relative to the total stroke
length (which it usually is, even though during the acceleration phase the inertial
forces may be very large), the acceleration can be set to zero and in steady-state form
the equation becomes

P_A A_BE - P_B A_RE - F_L = 0
Now we can describe the pressure drops and flows in terms of the force and velocity.
First, define the pressure drops in the system:

P_A = P_S - P_PA  and  P_B = P_BT  (assuming P_T = 0)

Now the pressure drops across the valve can be described using their orifice equa-
tions:

P_PA = Q_PA^2/(x^2 K_PA^2)  and  P_BT = Q_BT^2/(x^2 K_BT^2)
Remember that x represents the percentage that the valve is open (only in the active
region) as defined in the directional control valve models (Sec. 12.4.1).
The cylinder flow rates, assuming no leakage within the cylinder, are defined as

Q_S = v A_BE + C_BE dP_BE/dt
Q_T = v A_RE + C_RE dP_RE/dt
where C is the capacitance in the system. If compressibility is ignored or only steady-
state characteristics are examined, the capacitance terms are zero and the flow rate is
simply the area times the velocity for each side of the cylinder. It is important to note
that the flows are not equal with single-ended cylinders, as shown above. For many
cases where the compressibility flows are negligible, the flows are simply related
through the ratio of their respective cross-sectional areas. In pneumatic
systems the compressibility cannot be ignored and constitutes a significant portion
of the total flow. If compressibility is ignored, the ratio is easily found by setting the
two velocity terms equal, as they share the same piston and rod movement:

Q_BE = (A_BE/A_RE) Q_RE
If we combine the flow and pressure drop equations with the initial force balance we
can form the general valve-cylinder equation as

P_S A_BE - v^2 A_BE^3/(x^2 K_PA^2) - v^2 A_RE^3/(x^2 K_BT^2) - F_L = 0

We can simplify the equation by combining the velocities. This results in the final
equation describing the steady-state extension characteristics of a valve controlled
cylinder:

P_S A_BE - (v^2/x^2)(A_BE^3/K_PA^2 + A_RE^3/K_BT^2) - F_L = 0

Remember that this is for extension only; when the cylinder direction is reversed
the pump flow is now into the rod end. The same procedure can be followed to derive
the valve controlled cylinder equation for retraction as

P_S A_RE - (v^2/x^2)(A_BE^3/K_AT^2 + A_RE^3/K_PB^2) - F_L = 0
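Solving the extension equation for velocity gives a convenient design calculation. The sketch below uses the text's symbols; consistent SI units and any parameter values passed in are the caller's assumptions:

```python
import math

def extension_velocity(f_load, x, p_s, a_be, a_re, k_pa, k_bt):
    """Steady-state extension velocity solved from
    P_S*A_BE - (v^2/x^2)*(A_BE^3/K_PA^2 + A_RE^3/K_BT^2) - F_L = 0."""
    available = p_s * a_be - f_load
    if available <= 0.0:
        return 0.0  # the load meets or exceeds the stall force
    denom = a_be**3 / k_pa**2 + a_re**3 / k_bt**2
    return x * math.sqrt(available / denom)
```

Velocity scales linearly with spool position x and falls off as the square root of the remaining force margin.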
Many useful quantities can be determined from the final equations by rearranging
and imposing certain conditions upon them. The stall force is the maximum force
available to move the piston and occurs when the cylinder velocity is zero. Since
cylinder movement begins from rest, the maximum force is available to overcome
static friction effects. The stall forces can be calculated as

Extension: F_L,ext = P_S A_BE  (v = 0)
Retraction: F_L,ret = P_S A_RE  (v = 0)

Since the supply pressure acts on the larger (bore end) area during extension, the
maximum possible force is larger than for retraction, where the rod area subtracts
from the area available.
A second operating point of interest is the slew speed, occurring where the load
force is equal to zero. This is the maximum available velocity, unless the cylinder
load is overrunning. Setting the load forces equal to zero and x equal to one (100%
open for maximum velocity) produces the following equations:

Extension: v_slew = sqrt( P_S A_BE / (A_BE^3/K_PA^2 + A_RE^3/K_BT^2) )

Retraction: v_slew = sqrt( P_S A_RE / (A_BE^3/K_AT^2 + A_RE^3/K_PB^2) )
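These end points bound the parabolic load-velocity envelope of Figure 39, which can be tabulated directly from the extension equation at full opening (x = 1). A sketch with caller-supplied (assumed) parameters:

```python
import math

def extension_envelope(p_s, a_be, a_re, k_pa, k_bt, n=5):
    """Sample the fully open extension envelope from the slew speed
    (F_L = 0) down to the stall force (v = 0)."""
    denom = a_be**3 / k_pa**2 + a_re**3 / k_bt**2
    f_stall = p_s * a_be                 # stall force, extension
    points = []
    for i in range(n + 1):
        f_l = f_stall * i / n            # load force from 0 to stall
        v = math.sqrt((f_stall - f_l) / denom)
        points.append((f_l, v))
    return points
```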
It is useful to develop a curve from the resulting equations, as they represent the end
points of normal operation. For cylinder extension, we get the plot in Figure 39. This
curve may be produced by two primary methods. If all valve and cylinder para-
meters, and the supply pressure, are known, then the curve can be generated easily
with a computer program such as a spreadsheet. If the system is in the process of
being designed, then it is useful to know that the shape of the curve is a parabola in
the first quadrant. The final curve, including the effects of overrunning loads
and negative load forces and cylinder velocities (extension and retraction directions),
is given in Figure 40.
The outermost possible line, when the valve is fully open, represents the oper-
ating envelope of system performance. Where the lines fall between the axes and the
outermost limit is determined by the spool position, x, in the valve. Each constant
value of x will produce a different line. The outermost envelope can only be increased
by changing the valve coefficients or cylinder areas, or by increasing the supply pres-
sure.
The goal in using these equations during the design process is to properly
choose and size the hydraulic components that will enable us to meet our perfor-
mance goals. Remember that if a system is physically incapable of a response, it does
not matter what controller we use; we will still not achieve our goals.
operating point. In doing this, it can be shown that the maximum power point occurs at a
load force equal to two-thirds of the stall force. The power curve is added to the valve
controlled cylinder curve in Figure 41. Thus, to design for maximum power, choose a
stall design force of 1.5 times the desired operating load force. When a similar
analysis is completed to determine the minimum supply pressure necessary to
meet a specific operating point, once again the same criterion is found.
Designing for maximum power results in requiring the minimum supply pressure.
Remember that the above equations assume a pump capable of supplying
the required pressure and flow rate.
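The two-thirds criterion is easy to verify numerically: along the fully open envelope the velocity varies as the square root of (stall force minus load force), and constant factors do not move the location of the power maximum. A small sketch:

```python
import math

def max_power_load_fraction(f_stall, n=3000):
    """Sweep the load force along the envelope and return the fraction of
    the stall force at which output power P = F_L * v peaks, using
    v proportional to sqrt(f_stall - f_l)."""
    best_f, best_p = 0.0, -1.0
    for i in range(n + 1):
        f_l = f_stall * i / n
        power = f_l * math.sqrt(f_stall - f_l)
        if power > best_p:
            best_p, best_f = power, f_l
    return best_f / f_stall
```

Designing the stall force at 1.5 times the operating load therefore places the operating point at the power maximum.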
Q = (dQ/dx) x + (dQ/dP_L) P_L = K_x x + K_p P_L

K_x is the slope of the active region in the valve flow metering curve (see Figure 36) at
the operating point, and K_p is the slope of the valve PQ curve (see Figure 33) at the
operating point. The piston flow is related to the cylinder velocity as

Q = v A = A dx/dt

Notice that this assumes a double-ended piston with equal areas, A. Subsequent
analyses will remove this (and the linear) assumption. For now, we let P_L be the
pressure across the piston and include damping b to sum the forces on the cylinder,
where

sum(F) = m d^2x/dt^2 = P_L A - b dx/dt
The damping, b, arises from the friction associated with the load and the cylinder seals. To form the block diagram, we first solve the valve flow equation for P_L:

P_L = (K_x/K_p) x − (1/K_p) Q
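As a quick sanity check, these three linearized equations can be integrated directly with simple Euler steps. The parameter values below are purely illustrative, not taken from any real valve or cylinder; the point is only that the simulated velocity settles to the steady-state value v = K_x x / (A + K_p b / A) implied by the equations:

```python
# Euler integration of the linearized valve/cylinder equations.
# All parameter values are illustrative, not from a real component.
Kx, Kp = 2.0e-3, 1.0e-11        # valve flow gain and pressure-flow gain
A, m, b = 1.0e-3, 50.0, 2000.0  # piston area (m^2), mass (kg), damping
x_spool = 1.0e-3                # fixed spool opening (m)
dt, v, pos = 1.0e-5, 0.0, 0.0
for _ in range(200000):         # 2 s of simulated time
    Q = A * v                           # piston flow
    PL = (Kx * x_spool - Q) / Kp        # valve equation solved for PL
    a = (PL * A - b * v) / m            # force balance on the cylinder
    v += a * dt
    pos += v * dt
v_ss = Kx * x_spool / (A + Kp * b / A)  # analytic steady-state velocity
```

The fixed point of the Euler update coincides with the analytic steady state, so the two values agree closely once the transient dies out.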
These three equations are used to form the system portion of our block diagram. P_L is the output of the summing junction, x and Q are the inputs to the summing junction, and the force equation provides a transfer function relating P_L to x. The basic block diagram pictorially representing these equations is given in Figure 42. Although we have linearized the model, we can still add nonlinear blocks using simulation packages like Matlab/Simulink. If we were to incorporate the valve and cylinder model into a closed loop control system, there are several useful blocks that we can add. First, we need to generate a command signal for the system. Simulink provides several blocks for this purpose; the common ones include the Signal Generator, Constant, Pulse Train, and Repeating Sequence. It is easy to feed back the cylinder position and add another summing junction, the output of which is our error and the input into our controller. As with the signal blocks, there are numerous controller choices contained in Simulink. The standard PID (and a version with an approximate derivative) is an option, along with function blocks, fuzzy logic and neural net blocks, and a host of others. By adding a zero-order hold (ZOH) block we can also simulate a digital algorithm. The output of the controller would go to a valve amplifier card, providing the position input for the valve. Since the valve has associated dynamics, we can add a transfer function relating the desired valve command to the actual valve position. As mentioned previously, these transfer functions can be obtained from analytical or experimental (step response, Bode plots, or system identification) methods. Finally, it is important to model the deadband and saturation limits for the valve, both readily accessible from Figure 36.
When these are added to the model, as shown in the Simulink model given in Figure 43, we can quickly compare different component characteristics and different controllers. In general, we can get the required parameters from manufacturers' data and estimate the performance that different valves and cylinders might have in our system.

Figure 43  Simulink model: valve control of cylinder with deadband and saturation effects.
If desired, we could just as easily have implemented the actual difference equations using the function block. The difference equation for a PI controller, after collecting the e_k and e_{k−1} terms, is

u_k = u_{k−1} + (K_p + K_i T/2) e_k − (K_p − K_i T/2) e_{k−1}
In the Simulink function block, this is represented by

u(1) + (K_p + K_i T/2) u(2) + (K_i T/2 − K_p) u(3)

where u(i) is the ith input to the function block and u(1) = u_{k−1}, u(2) = e_k, and u(3) = e_{k−1}.
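The same difference equation is easy to verify in a few lines of code. The sketch below (plain Python rather than a Simulink function block) applies the recursion to a constant unit error; after the initial proportional jump, the output should ramp by K_i·T each sample, which is exactly the expected integral action:

```python
def pi_step(u_prev, e_k, e_prev, Kp, Ki, T):
    """Trapezoidal (Tustin) PI difference equation:
    u_k = u_{k-1} + (Kp + Ki*T/2)*e_k - (Kp - Ki*T/2)*e_{k-1}"""
    return u_prev + (Kp + Ki * T / 2) * e_k - (Kp - Ki * T / 2) * e_prev

# constant error of 1.0 applied from the first sample onward
Kp, Ki, T = 2.0, 3.0, 0.1   # illustrative gains and sample time
u, e_prev = 0.0, 0.0
outputs = []
for _ in range(3):
    u = pi_step(u, 1.0, e_prev, Kp, Ki, T)
    e_prev = 1.0
    outputs.append(u)
```

The first output is the proportional jump K_p + K_i T/2; each subsequent sample adds K_i T, the trapezoidal integral increment.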
From the block diagram in Figure 45 we see how the delay blocks are used to hold the previous error and controller output for use in the difference equation. One of the more interesting differences with the digital controller is the effect that sample rate has on stability. To highlight this effect for the same gain, Figure 46 gives the responses to a step command with sample times of 0.1, 0.8, and 1.6 sec when the proportional gain is held constant at K_p = 5.

The same procedure can be done with proportional gain as the variable while leaving the sample time, T, equal to 0.1 sec. Adjusting the proportional gain to values of 5, 25, and 50 produces the response curves given in Figure 47. The response plots clearly show the difference between marginal stability caused by sample time and marginal stability resulting from excessive proportional gain. The longer sample time tends to push the roots out of the left side of the unit circle in the z-plane, and the increasing proportional gain pushes them out of the right-hand side of the unit circle.
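The destabilizing effect of a long sample time can be seen even with a toy model. If the plant is approximated as a pure integrator G(s) = K_v/s (an assumption for illustration only, not the valve/cylinder model above), the ZOH discretization is G(z) = K_v T/(z − 1), and proportional feedback places the closed-loop pole at z = 1 − K_p K_v T. The pole leaves the unit circle through z = −1 as T grows; reproducing the right-side exit with increasing gain would require the plant's higher-order lags, which this first-order sketch omits:

```python
def closed_loop_pole(Kp, T, Kv=1.0):
    # ZOH discretization of G(s) = Kv/s gives G(z) = Kv*T/(z - 1);
    # unity proportional feedback puts the pole at z = 1 - Kp*Kv*T
    return 1.0 - Kp * Kv * T

for T in (0.1, 0.8, 1.6):
    z = closed_loop_pole(5.0, T)
    print(T, z, "stable" if abs(z) < 1.0 else "unstable")
```

Stability requires 0 < K_p K_v T < 2, so for K_p = 5 only the shortest of the three sample times keeps the pole inside the unit circle.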
These simple examples illustrate the benefits of simulating a system. Many simulations are possible in the space of several minutes. Although the model's accuracy limits how well the simulation can determine the gains required in the actual real-time controller (the model is linearized about some operating point and thus is not accurate over all operating ranges), the ability to perform what-if scenarios for all variables in the model is extremely beneficial. For example, once the model is built, we can easily determine the feasibility of using the same valve to control the position of a different load simply by changing the parameters in the physical system block. Also, by simply changing the feedback from position to velocity, an entirely different capability can be examined.
Finally, these same methods can be applied to directional control valves used to control rotary hydraulic actuators, usually in the form of a hydraulic motor. Instead of the pressure acting on an area to produce a force, the pressure acts on the motor displacement to produce a torque. The output of the system transfer function, now ω/P_L = 1/(Js + b), is angular velocity instead of linear velocity. Velocity is also the feedback signal to be controlled in most rotary systems.
Designing (or choosing) the correct combination of components has a large effect on the operating characteristics and potential of the system. Each method has different advantages and disadvantages, especially in terms of efficiency, speed, force, and power.

The choice of method depends in large part on the goals of the system. Efficiency can usually be increased, but at a greater upfront cost. The expected load cycles will play a role in determining the appropriate circuit. A circuit with long idle times should not be required to accept full power at all times; instead, a system that provides only the power required by the load is desired. Besides these considerations, the experience of the designer and the availability of components place practical constraints on the choice of circuits and strategies.
Some of the simplest circuits are designed to control the pressure with a constant flow being produced by the pump. In this case we have a fixed displacement (FD) pump providing a constant flow and some arrangement of valves controlling the pressure and where the flow is directed. An example constant flow circuit is given in Figure 48. The flow from the pump is either diverted over the relief valve, delivered to the load, or delivered back to the reservoir when the tandem center valve is in the neutral position. This circuit has advantages over one with a closed center valve, since during idle periods (no power required at the load) the tandem center valve allows the pump to maintain a constant flow but at a significantly reduced pressure. A closed center valve requires all the flow to be passed through the relief valve, and the power dissipated is large. While an open center valve provides the same power savings at idle conditions, it will not lock the load in a fixed position as the tandem center valve does. The valve coefficients and cylinder size can be chosen to meet the force and velocity requirements of our load using the method in Section 12.5.
One problem with using tandem center valves is that they limit the effectiveness of using one pump to provide power for two loads, as shown in Figure 49. As long as only one valve is primarily used at a time, the circuit works well and provides the same power savings at idle conditions. However, since the valves are connected in series, once one cylinder stalls the other cylinder also stalls.
The next class of circuits involves varying the flow to control the power delivered to the load. Once again, there are power savings available when using these types of circuits. There are two primary methods of varying the flow based on the system demand: accumulators and pressure-compensated pumps. This means that the initial cost of the system will be greater, but so will be the power savings. Accumulators come in a variety of sizes and ratings and generally are one of two types: piston and bladder. For both types the energy storage takes place in the compression of a gas, usually an inert gas such as nitrogen. Their electrical analogy is the capacitor, and they are used in similar ways to provide a source of energy and maintain relatively constant pressure levels in the system. An example circuit using an accumulator is given in Figure 50.

Figure 49  Constant flow circuit (FD pump and dual tandem center valves).
If we know the required work cycle of the actuator, as is the case in many industrial applications, the accumulator provides a way to size the pump for the average required power, with the accumulator averaging out the high- and low-demand periods. This provides significant power savings. An example would be a stamping machine where the amounts of time for extension and retraction are known, along with the amount of time it takes to load new material onto the machine. The peak power requirement can be very large even though the average required power is much less. Notice that a check valve and unloading valve are used to provide additional power savings during long idle periods. Once the relief pressure is achieved, the unloading valve opens and allows the pump to discharge flow at a much lower pressure. The check valve prevents flow from passing back into the pump. We are also required to use a closed center valve, since an open center (or tandem) valve in the neutral position would allow the accumulator to discharge.

Figure 50  Variable flow circuit using accumulator, unloading valve, and closed center control valve.
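A rough numerical sketch shows the sizing logic for such a stamping machine (all numbers are hypothetical): the pump is sized for the cycle-average power, and the accumulator must deliver the energy deficit during the stroke:

```python
# Illustrative stamping-machine duty cycle (numbers are hypothetical):
# a 2 s stamping stroke at 30 kW, then 10 s of material loading at 2 kW.
t_work, p_work = 2.0, 30.0e3    # s, W
t_idle, p_idle = 10.0, 2.0e3    # s, W
p_avg = (t_work * p_work + t_idle * p_idle) / (t_work + t_idle)
# Pump is sized for p_avg; the accumulator supplies the deficit during
# the stroke and recharges during the loading period.
E_accum = (p_work - p_avg) * t_work   # energy the accumulator delivers (J)
print(p_avg, E_accum)
```

Here the pump need supply only about 6.7 kW instead of the 30 kW peak, at the cost of an accumulator storing roughly 47 kJ of usable energy.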
A second way to achieve variable flow in our circuit is through the use of a variable displacement (VD) pump. When the loop is closed internally in the pump, the system pressure can be used to de-stroke the pump and reduce the flow output. This feedback mechanism and the variable displacement pump combine to make a pressure compensated pump. The ideal pressure compensated pump, without losses and with a perfect compensator, will exhibit the performance shown in Figure 51, acting as an ideal flow source until it begins to pressure compensate and as an ideal pressure source in the active (compensating) region.

In reality there will be losses in the pump, and the curve shows a slight decrease in flow as the pressure increases. In the compensating range the operating curve gradually increases in pressure as the flow decreases, due to the additional compression of the compensator spring and the smaller pressure drops associated with the flow through the pump. Since power equals pressure times flow, when we are able to operate in the compensating region we are able to save significant power. Only the flow required to maintain the desired pressure is produced by the pump. An example circuit using a pressure compensated pump and closed center valve is shown in Figure 52.
There are many subsets of the circuits presented thus far, and a sampling of them is presented in the remainder of this section. The ones presented are not comprehensive and only serve to illustrate the many ways that hydraulic control systems can be configured and optimized for various tasks. In many cases these circuits are now controlled electronically, and the electronics play a large role in determining the performance of our system.

The two additional classes examined here are pressure control and flow control methods. In most cases they may be constructed using constant flow or variable flow components. Many of the valves used in these circuits are discussed in more detail in Section 12.3. To begin with, there are situations where we need two pressure levels in our system and yet desire to have only one primary pump. To provide two pressures with one pump we can use a pressure-reducing valve, as shown in Figure 53.
Figure 52  Variable flow circuit using pressure compensated pump and closed center valve.
We limit our power-saving options in this type of circuit, since we cannot use open center valves to unload the pump when it is not needed. Also, the method by which the lower pressure is reached is simply another form of energy dissipation in our system: the pressure-reducing valve converts fluid energy into heat when regulating the pressure to a reduced level. This circuit is attractive when the actuator requiring the reduced pressure does not draw significant power (less flow and thus less loss across the valve) and when an additional component that requires the lower pressure is added to an existing circuit. When designing a system from the start, a better goal is to use components that all operate at the same pressure.
A second pressure control circuit is the hi-lo circuit, shown in Figure 54. The basic hi-lo circuit consists of two pumps designed for two different operating regions. The first pump is a high-pressure/low-flow pump that always contributes to the circuit. The second pump is a low-pressure/high-flow pump that is unloaded when the system pressure exceeds a preset level. This has several beneficial characteristics. When the load force is minimal, the required pressure is low, and the flows of both pumps add together and move the cylinder quickly. When the load increases, the large pump is unloaded and only the smaller high-pressure/low-flow pump is used to move the load. The large pump is protected from high pressure through the use of a check valve. Since the full flow capability of the system is not produced at high pressures, we experience significant power savings. Remote pilot-operated relief valves, accumulators, and computer controllers are all additional methods of controlling pressure levels in hydraulic systems.
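The power savings of the hi-lo arrangement can be sketched with a short calculation (all figures hypothetical). Below the unloading pressure both pumps feed the circuit; above it only the small high-pressure pump remains, so the installed corner power is far below that of a single pump sized for full flow at full pressure:

```python
# Hypothetical hi-lo circuit sizing.
q_hi = 10.0 / 60000.0   # high-pressure pump flow, m^3/s (10 L/min)
q_lo = 60.0 / 60000.0   # low-pressure pump flow, m^3/s (60 L/min)
p_unload = 50.0e5       # Pa: preset pressure that unloads the big pump
p_max = 200.0e5         # Pa: relief setting for the small pump

def delivered_flow(p_load):
    # the check valve isolates the unloaded low-pressure pump at high loads
    return q_hi + q_lo if p_load < p_unload else q_hi

# corner power: single pump sized for full flow at full pressure,
# versus the worst corner of the hi-lo combination
power_single = p_max * (q_hi + q_lo)
power_hilo = max(p_unload * (q_hi + q_lo), p_max * q_hi)
```

With these numbers the hi-lo circuit needs roughly a quarter of the corner power of the single-pump alternative while delivering the same fast/low-load and slow/high-load operating points.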
To wrap up this section, let us conclude by examining several flow control circuits. Whereas pressure control effectively limits the maximum force or torque that an actuator produces, flow control devices limit the velocity of actuators. One simple method to control the speed is to use a flow control valve in series with our actuator. If our flow control valve limits the inlet flow to the actuator, we call it a meter-in circuit, as shown in Figure 55. Using the internal check valve in the flow control valve is required if we desire to meter only the inlet flow. When the cylinder (in this case) retracts, the flow is not metered and simply passes through the check valve. This circuit works well when the load is resistive and counteracts the desired motion. If this is the case, the bore end always maintains a positive pressure and cavitation is avoided. If we begin to encounter overrunning loads, a meter-in circuit will tend to cause cavitation in the cylinder, since it is an additional restriction on the inlet flow. This problem is solved by using a meter-out circuit, shown in Figure 56.
pump produces a constant flow, any excess of which must be dumped over the relief valve at the system (high) pressure. The only time this circuit is efficient is during periods of high loads, where the largest pressure drop occurs at the load (useful work) and very little is lost across the relief valve or control valve. The worst condition, at idle (no flow to the actuator), causes all pump flow to be passed through the relief valve at the relief pressure. Almost all the power generated by the pump is dissipated into the fluid and valve in the form of heat. This significantly increases the cooling requirements for our system.
The FD/OC system, an example of which is given in Figure 48, acts similarly with a constant flow but exhibits increased efficiencies at null spool positions, where the valve unloads the pump. Although the pump still operates at the maximum flow, it is at reduced pressure. The amount of pressure drop across the valve in the center position determines the efficiency of the system at idle. When operating in the active region of the valve, the system exhibits the same efficiencies as the FD/CC system. This type of circuit is common in many mobile applications.
Once we commit to a variable displacement pump (now a variable flow circuit instead of a constant flow circuit as in the previous two types), our efficiencies can be significantly improved. The variable displacement pump configuration of interest is pressure compensation, the ideal operation being shown in Figure 51. The PC/CC system increases system efficiency even further, since the pump provides only the flow necessary to maintain the pressure. The flow is very close to zero at null spool positions. The compensator maintains a constant pressure in our system by varying the flow output of the pump. Now the wasted (dissipated) energy in our system primarily occurs across our control valve, since we still have full system pressure on the valve inlet. The relief valve, although still included for safety, is inactive during normal system operation. Another advantage is that multiple actuators can be used and all will have access to the full system pressure. With the FD/OC system described previously, if one OC valve is in the neutral position the whole system sees the lower pressure. A negative to using pressure compensated pumps is the higher upfront cost for the pump, although it will likely save in other areas (heat exchangers) and certainly in the operating costs. The other disadvantage is that the pump still produces full system pressure even at idle, when it is not required. This leads to the final alternative, a pressure compensated load sensing circuit.
This leads to the nal alternative, a pressure compensated load sensing circuit.
The load sensing pump/valve system exhibits the best eciency by limiting
both the ow and pressure, providing just enough to move the load. Instead of
producing full ow as with the xed displacement pumps or full pressure as with
the pressure compensated pump, the system regulates both pressure and ow to
always optimize the eciency of our system. The system is virtually identical to
the PC/CC system except for that the load pressure, instead of the system pressure,
is used to control the ow output of the pump. An example PCLS circuit using a
variable displacement pump is shown in Figure 59. With this system the pressure is
always maintained at a preset P above the load pressure. Shuttle valves are used to
always choose the highest load pressure in the system. The force balance in the
compensation valve is such that system pressure is balanced by the highest load
pressure plus an adjustable spring used to set the dierential pressure between the
load and system. If the load pressure decreases, the valve shifts to the left and the
displacement is decreased, decreasing the system pressure and maintaining the
desired pressure dierence. The only negative is that the load sensing circuit must
Figure 59 Pressure compensated load sensing (PCLS) using a variable displacement pump.
use the highest required load pressure, or some actuators may not operate correctly. If two actuators have very different pressure requirements, the system efficiency is still not maximized. If we do not wish to add the extra cost of a variable displacement pump, we can achieve many of the PCLS benefits by using a load sensing relief valve, shown in Figure 60.

The operation is much the same as with PCLS, except that the system pressure is maintained at a preset level above the load pressure by adjusting the relief valve cracking pressure. As before, the system pressure is compared with the largest load pressure. The difference is that we once again have a constant flow circuit and no longer vary the flow but instead vary only the relief pressure. This means that at idle conditions the same level of efficiency is not achieved since, although the pressure is the same, the flow over the relief valve is greater. This circuit has efficiency advantages over the FD/OC circuit of earlier since, when the control valve is in the active region, the system pressure is still only slightly above the load pressure, whereas with the FD/OC system the pressure returns to maximum once the valve is moved from the null position into the active region. These strategies can be summarized as shown in Figure 61.

Figure 60  Load sensing relief valve (LSRV) using a fixed displacement pump.
Remember that the efficiencies of these systems will decrease when different pressure requirements exist in our systems. In the load sensing systems the pressure is maintained above the largest required load pressure. If there is one high load pressure and three lower ones, the control valves will provide the pressure drops required for the individual actuators.
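One way to make the comparison concrete is to estimate the idle-condition loss of each strategy with representative (entirely hypothetical) numbers; only the dominant loss path of each circuit is counted:

```python
# Rough idle-condition power dissipation per strategy.
# All numbers are hypothetical; secondary losses are ignored.
q_pump = 40.0 / 60000.0     # fixed pump flow, m^3/s (40 L/min)
p_relief = 150.0e5          # relief / compensator setting, Pa
p_center = 5.0e5            # drop across an open/tandem center at idle, Pa
p_margin = 10.0e5           # load-sensing margin pressure (delta-P), Pa

idle_loss = {
    "FD/CC": q_pump * p_relief,   # all flow dumped over the relief valve
    "FD/OC": q_pump * p_center,   # full flow unloaded through the center
    "PC/CC": 0.0,                 # pump de-strokes, flow ~ 0
    "LSRV":  q_pump * p_margin,   # full flow at the (low) margin pressure
}
for name, loss in idle_loss.items():
    print(name, round(loss, 1), "W")
```

The ordering matches the discussion above: the FD/CC circuit wastes the full corner power at idle, the unloading strategies waste only a small fraction of it, and the pressure compensated pump wastes almost nothing.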
To achieve the maximum possible system efficiency and most wisely manage our power, we can progress to full computer control of the pump displacement, valve settings, and actuator settings. For example, if we have both a variable displacement pump and a variable displacement motor controlled by a computer, we can completely eliminate the control valve and associated energy losses (pressure drops) and use the computer to match displacements in such a way as to control the system pressure and the power delivered to the load. The only energy losses in our system are associated with component efficiencies, not pressure drops across control valves. These same concepts can be extended to larger systems where multiple displacements are controlled to always give the maximum possible system efficiency.

As our society becomes more energy conscious, we will continue to develop new power management strategies. It is likely that the computer will be the centerpiece, controlling the system to achieve greater levels of performance and efficiency simultaneously.
Sosnowski T, Lucier P, Lumkes J, Fronczak F, Beachley N. Pump/Motor Displacement Control Using High-Speed On/Off Valves. SAE 981968, 1998.
Table 3  Potential Advantages of Using On-Off Valves for Position Control of a Hydraulic Cylinder
flexibility. Without going into the details of each circuit (some are discussed in the previous section), we can quickly discuss some of the possibilities when using individual poppet valves to control cylinder position. Most advantages stem from the fact that the metering surfaces are now decoupled. In a typical spool-type directional control valve, when one metering surface opens, so does the corresponding return path; the metering surfaces are physically constrained to act this way. Thus when a certain valve characteristic or center configuration is desired, a specific valve must be purchased with this configuration. This is in contrast to the four poppet valves, where each valve can open and close independently of the others, as in Figure 63.
Taking advantage of this flexibility requires digital controllers but adds the ability to choose (or have the controller automatically switch between) many different circuit behaviors. For example, in a meter-out circuit the flow out of the cylinder is controlled to limit the extension or retraction speed of the cylinder. This is commonly done with overrunning loads to prevent cavitation and runaway loads. Using conventional strategies requires an additional metering valve in the circuit, since the directional control valve will meter equally in both directions. With the poppet valves, however, simply keeping the inlet valve between the tank and inlet port fully open allows the downstream, or outlet, valve to be modulated to control the rate of cylinder movement. To get metering characteristics in both directions, even more metering devices and check valves are required to complete the circuit. A simple change in algorithm accomplishes the same thing using independent control of the poppet valves. Thus the hardware and plumbing remain identical, and the controller is used to provide many different circuit characteristics.
In addition to different metering circuits, different power management strategies are available, since the valves can be operated to simulate different valve centers in real time. If all the valves are closed when the system is stationary, the poppet valves will act as a closed center spool valve. In this case the cylinder remains stationary and the pump flow must be diverted elsewhere. Open center characteristics occur when all the poppet valves are opened. Even tandem center spools, which allow the pump to be unloaded to tank while the cylinder is fixed, are easily simulated by closing the two valves resisting the load (assuming the load wants to move in one direction) and opening the remaining two to unload the pump.
It is obvious that only one side of the story has been presented, and the disadvantages must also be considered. First and foremost, the operation of the poppet valves is primarily on-off, and somewhere along the line the output must be modulated. This could possibly be a feature of special valves, where some proportionality is achieved in poppet valves by rapidly modulating (i.e., PWM) the electrical signal to the valve. If a two-stage poppet valve is used, the first stage could be extremely fast and used to modulate the second stage, again with the goal of obtaining some proportionality for metering control. The digital computer can be used to great advantage here, since it is capable of learning or using lookup tables to predict what result the signal change will have on the system, even if very nonlinear. Finally, the poppet valve, if much faster than the system, could be modulated in much the same way that a PWM signal is averaged in a solenoid coil, to produce an average pressure acting on the system. This obviously has the potential to create undesired noise in the system. In some position control systems this is acceptable when the result is not the physical cylinder position. For example, and as illustrated in this case study, we could use this approach to control the small cylinder that in turn controls the displacement of a variable displacement hydraulic pump/motor. The output of the hydraulic pump/motor will not significantly change when small discrete changes occur in displacement (especially when connected to a large inertia, as in wheel motor applications), since the torque output is further averaged by the inertia it is connected to. Thus there are many things we need to consider when designing the controller, hence the incentive to develop the following model.
from which the equivalent blocks (some nonlinear) are found and used to construct the block diagram. In the basic model we have the supply pressure connected to two valves and the tank connected to the other two. The cylinder work ports are connected to the opposite sides in such a way that pairs of valves can be opened to position the cylinder where desired. Referencing Figure 62 allows us to directly draw the bond graph from the simplified physical model, given in Figure 64.

In general, power flow will be from the power supply to the high-pressure cylinder port and from the low-pressure cylinder port to the tank. Figure 65 gives the corresponding bond graph and illustrates the parallels it shares with the physical system model. The bond graph structure and physical model complement each other, and the power flow paths through the physical system are easily seen in the bond graph. There are five states, each associated with an inertial or capacitive component:
dp13/dt = (1/C_A) q15

dq15/dt = K_PA x1 √(P_S − q15/C_A) − K_AT x2 √(q15/C_A − P_T) − (1/I_A) p13 − (A_BE/m) p21

dp16/dt = (1/C_B) q18

dq18/dt = K_PB x3 √(P_S − q18/C_B) − K_BT x4 √(q18/C_B − P_T) − (1/I_B) p16 + (A_RE/m) p21

dp21/dt = (A_BE/C_A) q15 − (A_RE/C_B) q18 − (b/I21) p21   (force balance on cylinder)
The units for the hydraulic components using the English system are
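These five state equations translate directly into a derivative function for numerical integration. The sketch below assumes the sign conventions above (bore side A drives the piston in the positive direction, rod side B opposes it) and clamps the square-root arguments at zero so that closed-valve or reversed-pressure conditions do not produce math errors; the parameter names mirror the bond graph symbols:

```python
import math

def derivs(s, p):
    """State derivatives for the four-poppet-valve cylinder model.
    s: states {p13, q15, p16, q18, p21}; p: parameters, including the
    0/1 poppet commands x1..x4.  Square-root flow terms are clamped at
    zero to avoid negative arguments during simulation."""
    PA = s["q15"] / p["CA"]          # bore-side pressure
    PB = s["q18"] / p["CB"]          # rod-side pressure
    v = s["p21"] / p["m"]            # cylinder velocity
    sq = lambda dp: math.sqrt(max(dp, 0.0))
    return {
        "p13": PA,                                        # (1/CA) q15
        "q15": (p["KPA"] * p["x1"] * sq(p["PS"] - PA)
                - p["KAT"] * p["x2"] * sq(PA - p["PT"])
                - s["p13"] / p["IA"] - p["ABE"] * v),
        "p16": PB,
        "q18": (p["KPB"] * p["x3"] * sq(p["PS"] - PB)
                - p["KBT"] * p["x4"] * sq(PB - p["PT"])
                - s["p16"] / p["IB"] + p["ARE"] * v),
        "p21": (p["ABE"] * PA - p["ARE"] * PB
                - p["b"] * s["p21"] / p["I21"]),          # force balance
    }
```

A fixed-step integrator (Euler or Runge-Kutta) stepping this function reproduces the kind of simulation results discussed below.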
the pressures on the cylinder, since all the valves are closed and the position changes only due to compressibility in the system.

Things are a little more interesting when we examine the valves during this motion. In Figure 68 we see the on-off nature of this type of controller (the valve itself is not proportional in this model). The poppet valves are constantly turning on and off to modulate the position of the cylinder, and thus the velocity is constantly cycling even though the position output follows the command quite well. The system can be made to oscillate, but it will not go unstable (globally) with this type of controller. Whenever the error falls outside of the deadband, it always attempts a corrective action.
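The corrective, deadband-based switching just described can be sketched as a simple valve-command function. The valve numbering here is an assumption for illustration only: x1..x4 are taken to be the supply-to-A, A-to-tank, supply-to-B, and B-to-tank poppets:

```python
def poppet_commands(error, deadband):
    """On-off command logic for four independent poppet valves.
    Returns (x1, x2, x3, x4): supply->A, A->tank, supply->B, B->tank.
    Inside the deadband all valves close (closed-center behavior)."""
    if error > deadband:        # extend: pressurize bore end, vent rod end
        return (1, 0, 0, 1)
    if error < -deadband:       # retract: pressurize rod end, vent bore end
        return (0, 1, 1, 0)
    return (0, 0, 0, 0)         # hold position
```

Because any error outside the deadband always produces a correcting valve pair, the cylinder can limit-cycle around the command but cannot run away, which matches the behavior described above.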
In conclusion, then, we have briefly examined two approaches to developing and simulating a cylinder position controller using four poppet valves in place of the typical spool valve. As demonstrated, it is very easy to run many simulations without having to build and test each system in the lab. We must remember it is still only a model, and although it includes the nonlinearities associated with the valve, it ignores others such as friction effects, valve response times and trajectories, effects of flow forces on the valve, etc. As mentioned, always attempt to verify a model with known data before extending the model to unknown regions.
Lumkes J. Design, Simulation, and Testing of an Energy Storage Hydrostatic Vehicle Transmission and Controller. Doctoral Thesis, University of Wisconsin–Madison, 1997.
Yamaguchi J. Mitsubishi MBECS III is the Latest in the Diesel/Hydraulic Hybrid City Bus Series. Automotive Engineering, June 1997, pages 29–30.
when hydraulic assist is provided. At higher speeds, or in steady state, the bus is driven by the engine alone. A single dry-plate clutch/synchromesh gear unit connects the pump/motors to the mechanical driveline. With the hydraulic assist, the bus is capable of using up to third gear to begin moving. The controller for this system can be much simpler, but many features cannot be implemented.
The series configuration allows for more features than the simpler designs. In this case, a pump/motor is located at each wheel, as shown in Figure 70. This configuration allows many options:
Decoupling of engine from road load;
Regenerative braking;
All-wheel drive;
Antilock braking systems (ABS);
Traction control;
Ability to deactivate several pump/motors for greater system efficiency;
Hydrostatic accessory drives;
Adaptable to variable and/or active suspension systems.
Once the hardware is installed in the series system, the addition of features can largely be accomplished through software. ABS and traction control, assuming there are wheel speed sensors, can be completely integrated into the controller at a lower cost than is possible for current vehicles. Also, by selectively using the wheel pump/motors, greater system efficiencies are possible.
Fronczak F, Beachley N. An Integrated Hydraulic Drive Train for Automobiles. Proceedings, 8th
International Symposium on Fluid Power, Birmingham, England, April 1988.
For these reasons we examine the hardware and software used in the initial design of a controller to optimize the efficiency of a series hydrostatic transmission with energy storage. To control this system we will use a multiple-input multiple-output controller utilizing centralized (engine speed and strategy) and distributed controllers (all displacement controllers) and both analog and digital controllers. In this way we can maximize performance and resources to develop a working controller and demonstrate, less abstractly, the concepts presented in this text.

The computer is the central control component in the vehicle. It performs some of the control algorithms, monitors all subsystems for failure, and determines the overall operating strategy for the vehicle, which, together with the component efficiencies, in large part determines the efficiency of the vehicle.
Table 4 summarizes the channel inputs, items read, and types of signals. As controller algorithms change, it may be necessary to adjust what is being monitored. For example, if an open loop engine controller is developed to the point where the confidence level is high, the engine speed measurement would not be necessary. In addition, the engine temperature, once the vehicle configuration and cooling system are tested, would not be needed. The gauges mounted on the dashboard, along with the engine control module (ECM), could monitor these items.
The data acquisition outputs are summarized in Table 5. Three analog outputs
are used for the wheel pump/motor and engine-pump displacement commands.
The distributed controllers use these voltage commands to set the desired
displacement. Nine digital outputs are used to control the solenoids, engine ignition
and starting, and throttle position. The solenoids are used to shut down the system
(two modes) and isolate the accumulator from the system, both for safety and for
operating procedures. The digital outputs, when used for the solenoids and engine,
must be operated through a high-current relay, since the signal levels are very small.
This is a relatively simple computer configuration and could easily be imple-
mented using an embedded microcontroller in production applications. The PC data
acquisition system provides a cost-effective way to design and test many controllers
very quickly.
A potentiometer is also used for measuring the engine-pump displacement. The
accelerator and brake pedals also use potentiometers to determine position. For all
potentiometers, the voltage was regulated to 10 V, giving maximum resolution, since
the computer was configured for 0-10 V inputs.
The signals from the potentiometers and engine temperature sensor are all
voltages in the ranges capable of being read by the computer data acquisition
boards. The pressure transducers have current outputs, which are more immune
to line noise. At the connection on the data acquisition boards, the current is passed
through 400 Ω resistors to convert the signals to voltages. The current ranges from 4
to 20 mA, varying linearly with the pressure. An effort was made to reduce the
number of sensors, developing a control system using sensors likely to be available
in production vehicles. Torque transducers and flow meters are avoided for this
reason.
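The 4-20 mA scaling above can be written out explicitly. The 400 Ω sense resistor and the 4-20 mA span are from the text; the full-scale pressure value and the function name are hypothetical placeholders for illustration.

```python
def pressure_from_voltage(v_meas, p_max=3000.0, r_sense=400.0):
    """Convert a 4-20 mA current-loop reading, sensed across a 400-ohm
    resistor, into pressure (psi). p_max is a hypothetical full-scale
    rating; the resistor value and 4-20 mA span are from the text."""
    i_ma = v_meas / r_sense * 1000.0      # sensed voltage -> loop current (mA)
    frac = (i_ma - 4.0) / (20.0 - 4.0)    # 4 mA -> 0, 20 mA -> 1 (linear span)
    return frac * p_max

# 4 mA drops 1.6 V across the resistor (zero pressure); 20 mA drops 8 V
# (full scale), which fits comfortably inside the 0-10 V input range.
low = pressure_from_voltage(1.6)    # about 0 psi
high = pressure_from_voltage(8.0)   # about 3000 psi
```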
ed number of samples. Finally, data can be stored to the hard disk to track con-
troller performance, although this does slow down the sampling rate. This particular
controller is designed to utilize program-controlled delays: no explicit delays are
used, in order to maximize the sample rate, although some routines are not called on
every pass.
The strategy routine implements the overall controller strategy about how to
control the vehicle and where each subsystem should be operating. Several routines
are called from the strategy routine (Figure 73). First, if there are no input com-
mands and the engine is not running, the accumulator is isolated from the system to
prevent leakage from occurring across the pistons in the wheel pump/motors and
engine pump. When a command (brake or gas) signal is received, the controller
connects the accumulator to the main system and changes the displacement of the
wheel pump/motors in proportion to the command signal. Because the wheel units
are over-center, both braking and acceleration are easily achieved. Next, the
accumulator pressure is checked, and if it needs to be recharged, the controller
begins the sequence to start the engine. If the engine does not start after a set number
of loops, the controller shuts down the system. Once the engine recharges the
accumulator, it is shut off until the next time the accumulator needs to be charged.
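One pass of the strategy logic just described might be sketched as follows. All names and thresholds are illustrative placeholders, not the author's code.

```python
def strategy_pass(cmd, engine_running, p_acc, p_min, p_max):
    """One pass of a hypothetical strategy routine.
    cmd: net pedal command in [-1, 1] (negative = brake);
    p_acc, p_min, p_max: accumulator pressure and recharge thresholds."""
    # No command and engine off: isolate the accumulator so it does not
    # leak down across the pump/motor pistons.
    connect = bool(cmd != 0.0 or engine_running)
    # Wheel displacement follows the command; over-center units give
    # braking (regeneration) directly for negative commands.
    wheel_disp = cmd if connect else 0.0
    # The engine runs only to recharge the accumulator, then shuts off.
    if p_acc < p_min and not engine_running:
        engine = "start"
    elif engine_running and p_acc >= p_max:
        engine = "stop"
    else:
        engine = "hold"
    return connect, wheel_disp, engine

# Light braking with a charged accumulator: stay connected, regenerate.
result = strategy_pass(-0.3, False, 2500.0, 1800.0, 3000.0)
print(result)  # (True, -0.3, 'hold')
```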
Applied Control Methods 561
The engine controller routine starts the engine and controls the engine speed
using a stepper motor connected to the throttle (Figure 74). The hydrostatic pump
mounted to the engine is controlled by varying its displacement. Since it is computer
controlled, it can be set to optimize the overall system efficiency. To control for
maximum efficiency, the whole system is analyzed: the pump efficiency curves
and engine efficiency curves are known, and their product is maximized to find
the correct operating point. This could be enhanced further by going to a
feedforward routine in which the desired pump displacement and stepper motor posi-
tion are precalculated as functions of system pressure and accessed via a lookup table.
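The maximize-the-product search might look like the following sketch; the two quadratic efficiency maps are made-up placeholders standing in for the measured engine and pump curves.

```python
def engine_eff(speed):
    # hypothetical engine efficiency map (peak near 2200 rpm)
    return max(0.0, 0.34 - 1.0e-8 * (speed - 2200.0) ** 2)

def pump_eff(speed, pressure):
    # hypothetical pump efficiency map; best speed falls as pressure rises
    best = 2600.0 - 0.1 * pressure
    return max(0.0, 0.92 - 2.0e-8 * (speed - best) ** 2)

def best_operating_point(pressure, speeds=range(1000, 3001, 10)):
    """Pick the engine speed maximizing combined (product) efficiency."""
    return max(((s, engine_eff(s) * pump_eff(s, pressure)) for s in speeds),
               key=lambda pair: pair[1])

speed, eta = best_operating_point(3000.0)  # pressure in psi
```

In the feedforward variant described above, `best_operating_point` would be evaluated offline over a grid of pressures to build the lookup table.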
Thus, without all the details, we should see how a sample controller might be con-
figured. The goal of building such a system is to verify the design and allow extended
vehicle testing.
Now for a quick conclusion relating earlier material to how different controllers
may be used to change or improve the existing initial design. First, most components in
the system were modeled and tested to verify operation within the whole system.
Each subcontroller could constitute another study of the control system design
process. The complete vehicle was also modeled and simulated on the Federal
Urban Driving Cycle. This provided much useful input for designing the controller:
expectations of fuel economy, sizing requirements for components (acceleration and
braking rates), the dynamic bandwidth required by each controller, and engine on-off
cycles. This simulation provided many of the guidelines necessary to design the
complete system. The wheel pump/motor displacement controllers were tested to
ensure that they met the bandwidth requirements for the vehicle. Having
these models would now allow the controller to be designed for the next stage:
global optimization of system efficiency. This case study verified the overall control-
ler strategy and the stability of the system. Also, the operating mode used when the
accumulator is not in the system needs to be implemented for cases where the accumulator
pressure is too low to perform a maneuver (e.g., passing another vehicle) and the time
to charge the accumulator is not available. This mode of operation was studied and
tested in the lab using sliding mode control and PID algorithms. It was initially
examined separately from the vehicle because of its higher complexity and potential
stability and safety problems. The goal of the controller in this mode is to
control the engine throttle, engine-pump displacement, and wheel pump/motor dis-
placements, and thus control the torque applied to the wheels while maintaining
maximum system efficiency. Since it is desired never to relieve system pressure
with a relief valve (which adds major energy losses), the throttle and displacements
must always be matched to avoid overpressurization. When the accumulator is in the
system, it essentially (at least relative to loop times) makes the system see a con-
stant pressure, and the actual controller strategy is simplified. Implementation is just
as difficult in both cases.
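As an illustration of the PID option mentioned above (the sliding mode variant is not shown), a discrete PID loop might trim the engine-pump displacement so the supply pressure tracks the demand without reaching the relief setting. The one-line plant model and the gains below are crude placeholders, not data from the study.

```python
class PID:
    """Minimal discrete PID with rectangular integration (illustrative)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Trim displacement so supply pressure tracks a 2500 psi demand
# (crude first-order pressure dynamics stand in for the real system).
pid = PID(kp=0.002, ki=0.01, kd=0.0, dt=0.01)
pressure = 1500.0
for _ in range(2000):
    u = pid.update(2500.0, pressure)   # displacement command
    pressure += (u * 5000.0 - 0.5 * (pressure - 1500.0)) * 0.01
```

The integral term supplies the steady displacement needed to hold pressure, so the loop settles on the setpoint with no offset.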
In conclusion, an actual PC-based control system for a hydrostatic transmis-
sion with regenerative braking has been presented as an example of how the tech-
niques and hardware studied earlier might be used in product research and
development.
Lewandowski E. Control of a Hydrostatic Powertrain for Automotive Applications. Ph.D. Thesis,
University of Wisconsin-Madison, 1990.
12.9 PROBLEMS
12.1 Locate an article describing a unique application of industrial hydraulics and
briefly summarize the system and how it is used.
12.2 Locate an article describing a unique application of mobile hydraulics and
briefly summarize the system and how it is used.
12.3 Briey list and describe the three functions of control valves.
12.4 A pressure reducing valve regulates the downstream pressure. (T or F)
12.5 What two types of pressure control valves can use smaller springs with higher
pressures?
12.6 If the orifice is removed in a pilot operated pressure control valve, the valve
still regulates the pressure. (T or F)
12.7 List an advantage and disadvantage of spool type pressure control valves.
12.8 Graphically demonstrate what pressure override is.
12.9 Counterbalance valves combine the functions of what two valves?
12.10 Pressure reducing valves are an efficient means of reducing the pressure in our
system. (T or F)
12.11 Most flow control valves use pressure as the feedback variable. (T or F)
12.12 Tandem center directional control valves combine what two types of center
configurations?
12.13 The center configuration of servovalves is of what type?
12.14 List several advantages to using electronic feedback on proportional valves.
12.15 For a position control system requiring high accuracy, the most applicable
type of valve is ______________.
12.16 What is the feedback device in a typical flapper nozzle servovalve?
12.17 Two disadvantages of servovalves are ____________ and _____________.
12.18 Using the two valve coefficients below, what is the total valve coefficient?

K_PA = 0.34 in/(sec √psi)  and  K_BT = 0.45 in/(sec √psi)
12.19 Pressure metering curves show important characteristics for what type of
control?
12.20 In valve controlled cylinders, what is unique about the force located at 2/3 of
the stall force?
12.21 Under what conditions is a fixed displacement/closed center (FD/CC) system
the most efficient?
12.22 List one advantage and one disadvantage of using regeneration during
extension.
12.23 Describe an advantage of load sensing circuits.
12.24 Design a hydraulic circuit capable of meeting the following specifications
when the system pressure is 1500 psi, the control valve has matched orifices and is
symmetrical, and the cylinder is mounted horizontally and has a stroke of 10 inches.
Specifications:
Maximum extension force = 12,000 lbs (all forces opposed to motion)
Maximum retraction force = 7000 lbs
Extension velocity of 4 in/sec at a load of 5000 lbs
12.27 Given the circuit shown in Figure 76, answer the following questions:
a. What is the max cylinder extension speed?
b. What is the max cylinder extension force?
c. What is the max cylinder retraction speed?
d. What is the bore end pressure during retraction?
e. What will the cylinder do when the valve is centered?
Appendix A

Quadratic Equation
Polynomial:

a r^2 + b r + c = 0

Roots:

r1, r2 = (-b ± sqrt(b^2 - 4ac)) / (2a)
Euler's Theorem

e^(jθ) = cos θ + j sin θ
Matrix Definitions
Square Matrix
A matrix with m rows and n columns is square if m = n.
Column Vector
An n × 1 matrix; all values in one column.
Row Vector
A 1 × n matrix; all values in one row.
Symmetrical Matrix
a_ij = a_ji.
Identity Matrix
A square matrix where all diagonal elements = 1 and all off-diagonal elements = 0.
Using matrix subscript notation it can be defined as:
I_ij = 1 if i = j;  I_ij = 0 if i ≠ j
Matrix Addition
C = A + B;  c_ij = a_ij + b_ij.
Matrix Multiplication
If the multiplication order is AB, then the number of columns of A must equal the
number of rows of B. If A is m × n and B is n × q, the resulting matrix will be m × q.
The product C = AB is found by multiplying the ith row of A into the jth column of
B and summing to form element c_ij:

c_ij = Σ (k = 1 to n) a_ik b_kj,  and in general AB ≠ BA
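The row-into-column rule can be written directly as code; this is a bare-bones sketch for illustration, not an efficient routine.

```python
def matmul(A, B):
    """c_ij = sum over k of a_ik * b_kj; columns of A must equal rows of B."""
    n = len(B)                       # rows of B
    assert all(len(row) == n for row in A)
    q = len(B[0])                    # columns of B
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(q)]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]]  -- AB != BA
```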
Transpose
Denoted A^T, the transpose is found by interchanging the rows and columns; it is
defined by (A^T)_ij = a_ji. It will turn a column vector into a row vector, and vice versa.
Adjoint
The adjoint of a square matrix A, denoted Adj A, can be found by the following
sequence of operations on A:
1. Find the minor of A, denoted M. Elements m_ij are found by evaluating
det A with row i and column j deleted.
2. Find the cofactor of A, denoted C. Element c_ij = (-1)^(i+j) m_ij.
3. Find the adjoint from the cofactor transposed: Adj A = C^T.
Inverse
The inverse of a square matrix A, denoted A^(-1), is

A^(-1) = Adj A / det A

If the determinant is zero, the matrix inverse does not exist and the matrix is said to
be singular.
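The three-step adjoint recipe translates directly into code. This sketch uses exact rational arithmetic and cofactor expansion, which is fine for the small (2 × 2 and larger) integer matrices used in this book, though far too slow for large ones.

```python
from fractions import Fraction

def minor_matrix(M, i, j):
    """M with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor_matrix(M, 0, j))
               for j in range(len(M)))

def inverse(M):
    """A^(-1) = Adj A / det A, for square integer matrices (n >= 2)."""
    d = det(M)
    if d == 0:
        raise ValueError("matrix is singular")
    n = len(M)
    # cofactor c_ij = (-1)^(i+j) * minor m_ij; adjoint = cofactor transpose
    cof = [[(-1) ** (i + j) * det(minor_matrix(M, i, j)) for j in range(n)]
           for i in range(n)]
    adj = [[cof[j][i] for j in range(n)] for i in range(n)]
    return [[Fraction(adj[i][j], d) for j in range(n)] for i in range(n)]

A_inv = inverse([[2, 1], [1, 1]])   # == [[1, -1], [-1, 2]]
```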
Appendix B
Laplace Transform Table
Each entry gives the Laplace transform F(s), the time function f(t), and the
z-transform F(z) of f(t) sampled with period T.

1.  F(s) = 1;  f(t) = δ(t), unit impulse;  F(z) = 1

2.  F(s) = e^(-kTs), k any integer;  f(t) = δ(t - kT), delayed impulse
    (output is 1 if t = kT, 0 if t ≠ kT);  F(z) = z^(-k)

3.  F(s) = 1/s;  f(t) = u(t) = 1 for t ≥ 0, unit step;  F(z) = z/(z - 1)

4.  F(s) = 1/s^2;  f(t) = t, unit ramp;  F(z) = Tz/(z - 1)^2

5.  F(s) = 1/s^3;  f(t) = t^2/2;  F(z) = T^2 z(z + 1)/(2(z - 1)^3)

6.  F(s) = 1/(s + a);  f(t) = e^(-at);  F(z) = z/(z - e^(-aT))

7.  F(s) = 1/(s + a)^2;  f(t) = t e^(-at);  F(z) = Tz e^(-aT)/(z - e^(-aT))^2

8.  F(s) = 1/(s(s + a));  f(t) = (1/a)(1 - e^(-at));
    F(z) = (1/a) z(1 - e^(-aT)) / ((z - 1)(z - e^(-aT)))

9.  F(s) = (b - a)/((s + a)(s + b));  f(t) = e^(-at) - e^(-bt);
    F(z) = z(e^(-aT) - e^(-bT)) / ((z - e^(-aT))(z - e^(-bT)))

10. F(s) = 1/(s(s + a)(s + b));
    f(t) = 1/(ab) - e^(-at)/(a(b - a)) + e^(-bt)/(b(b - a));
    F(z) = (1/(ab)) [ z/(z - 1) - bz/((b - a)(z - e^(-aT)))
                      + az/((b - a)(z - e^(-bT))) ]

13. F(s) = s/(s + a)^2;  f(t) = (1 - at) e^(-at);
    F(z) = z(z - e^(-aT)(1 + aT))/(z - e^(-aT))^2

14. F(s) = b/(s^2 + b^2);  f(t) = sin bt;
    F(z) = z sin bT/(z^2 - 2z cos bT + 1)

15. F(s) = s/(s^2 + b^2);  f(t) = cos bt;
    F(z) = z(z - cos bT)/(z^2 - 2z cos bT + 1)

16. F(s) = (a^2 + b^2)/(s((s + a)^2 + b^2));
    f(t) = 1 - e^(-at)(cos bt + (a/b) sin bt);
    F(z) = z(Az + B) / ((z - 1)(z^2 - 2z e^(-aT) cos bT + e^(-2aT)))
    where A = 1 - e^(-aT) cos bT - (a/b) e^(-aT) sin bT
          B = e^(-2aT) + (a/b) e^(-aT) sin bT - e^(-aT) cos bT
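As a quick check on the table, entry 6 can be verified numerically: summing the sampled exponential against powers of 1/z should reproduce z/(z - e^(-aT)). The values of a, T, and z below are arbitrary test values (with |z| large enough for the series to converge).

```python
import math

a, T, z = 0.7, 0.1, 1.5
# partial sum of the defining series  F(z) = sum_k f(kT) z^(-k)
series = sum(math.exp(-a * k * T) * z ** (-k) for k in range(200))
# closed form from entry 6 of the table
closed = z / (z - math.exp(-a * T))
print(abs(series - closed))   # negligible truncation error
```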
General Matlab Commands

Data extraction
Model characteristics
Conversions
ss - Conversion to state space.
zpk - Conversion to zero/pole/gain.
tf - Conversion to transfer function.
c2d - Continuous to discrete conversion.
d2c - Discrete to continuous conversion.
d2d - Resample discrete system or add input delay(s).
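The c2d conversion listed above can be illustrated by hand for a first-order system: under a zero-order hold, the exact discrete equivalent of x' = -a x + b u has a closed form. This sketch is illustrative only; MATLAB's c2d handles the general matrix case via the matrix exponential.

```python
import math

def c2d_first_order(a, b, T):
    """ZOH discretization of xdot = -a*x + b*u:
    x[k+1] = Ad*x[k] + Bd*u[k], Ad = e^(-aT), Bd = b*(1 - e^(-aT))/a."""
    ad = math.exp(-a * T)
    bd = b * (1.0 - ad) / a
    return ad, bd

ad, bd = c2d_first_order(2.0, 1.0, 0.5)
print(ad, bd)   # e^(-1) ≈ 0.3679, (1 - e^(-1))/2 ≈ 0.3161
```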
Model dynamics
pole, eig - System poles.
tzero - System transmission zeros.
pzmap - Pole-zero map.
dcgain - D.C. (low frequency) gain.
norm - Norms of LTI systems.
covar - Covariance of response to white noise.
damp - Natural frequency and damping of system poles.
esort - Sort continuous poles by real part.
dsort - Sort discrete poles by magnitude.
pade - Pade approximation of time delays.
State-space models
rss, drss - Random stable state-space models.
ss2ss - State coordinate transformation.
canon - State-space canonical forms.
ctrb, obsv - Controllability and observability matrices.
gram - Controllability and observability gramians.
ssbal - Diagonal balancing of state-space realizations.
balreal - Gramian-based input/output balancing.
modred - Model state reduction.
minreal - Minimal realization and pole/zero cancellation.
augstate - Augment output by appending states.
Time response
step - Step response.
impulse - Impulse response.
Frequency response
bode - Bode plot of the frequency response.
sigma - Singular value frequency plot.
nyquist - Nyquist plot.
nichols - Nichols chart.
ltiview - Response analysis GUI.
evalfr - Evaluate frequency response at given frequency.
freqresp - Frequency response over a frequency grid.
margin - Gain and phase margins.
System interconnections
append - Group LTI systems by appending inputs and outputs.
parallel - Generalized parallel connection (see also overloaded +).
series - Generalized series connection (see also overloaded *).
feedback - Feedback connection of two systems.
star - Redheffer star product (LFT interconnections).
connect - Derive state-space model from block diagram description.
Demonstrations
ctrldemo - Introduction to the Control System Toolbox.
jetdemo - Classical design of jet transport yaw damper.
diskdemo - Digital design of hard-disk-drive controller.
milldemo - SISO and MIMO LQG control of steel rolling mill.
kalmdemo - Kalman filter design and simulation.
Answers to Selected Problems
Problem Answer(s)
2.5   C(s)/R(s) = (s^3 + 1)/(s^3 + 2s^2 + s + 2)

2.14  A = [0 1 0 0; 1/a b/a 0 0; 0 0 0 1; 0 0 1/c 1/c],
      B = [0 0; 20/a 0; 0 0; 0 1/c],  C = [0 0 1 0]

2.18  m x'' + b1 x' + (k1 + k2) x = k2 x_i

3.1   τ = 1/20 s;  f(t) = (1/4)(1 - e^(-20t))

3.3   Y(s)/U(s) = 10/(s^3 + 3s^2 + 4s + 9)

3.4   Y(s)/U(s) = (5s + 1)/(s^3 + 5s^2 + 32)

3.6   c(t) = (1/2)(2t - 1 + e^(-2t))

3.12  G(s) = 2/(5s + 1)
3.13  G(s) = 36/(s^2 + 4s + 36)
4.10  c_ss = 10
4.17  3
4.21  GH(s) = 237(s + 1)(s + 4)/(s(s + 10)(s + 30));  PM = 105°
5.9   k = 2, p = 2
5.13  K = 59
5.21  K = 14
5.33  K1 = 4, K2 = 3
6.11  Current
6.19  False
6.23  Transistor
6.25  False
7.14  a. Y(s)/R(s) = 1/(s^2 + 5s + 6)
      b. Y(z)/R(z) = (0.1156z + 0.02134)/(z^2 - 0.1851z + 0.006738)
      c. 0.1156, 0.1583, 0.1655, 0.1665, 0.1666, 0.1667, 0.1667, 0.1667, ...
7.17  [x1(k+1); x2(k+1)] = [2 1; 1 0][x1(k); x2(k)] + [1; 0]r(k),
      c(k) = [1 0][x1(k); x2(k)]
7.18  C(z) = [T/(2(z - 1))] R(z)
7.19  Y(z)/R(z) = 0.3/(z - 0.5)
7.20  Y(z)/R(z) = 0.2z^2/(z^2 - 0.5z + 0.3)
8.8   a. y(1) = 1
      b. G(z) = (0.368z + 0.264)/(z^2 - 1.368z + 0.368)
      c. 0, 0.3679, 1.1353, 2.0498, 3.0183, 4.0067, 5.0025
      d. y(1) = 1
8.10  T = 0.2
9.6   Gc(z) = 0.4551(z - 0.1353)/(z - 0.6065)
9.7   Gc(z) = ((2 + T)z + (T - 2))/((0.2 + T)z + (T - 0.2))
9.10  K = 6.3
10.15 Absolute
11.20 C(z)/R(z) = 0.36/(z - 0.82);  a = 0.82, b = 0.36
12.4  True
12.6  False
12.11 True
Index

FAM (see Fuzzy association matrix)
Feasible trajectory, 387-388, 442
Feedback linearization, 477-478
Feedforward compensation, 437-448
  adaptive algorithms, 473
  command feedforward, 441-443
  disturbance input rejection, 438-440
Field effect transistor (FET), 422, 426-427
Filter, 307-308
  active, 305, 307
  aliasing effects, 319
  integrated circuit, 308
  passive, 305, 307
Final value theorem (FVT), 100, 348-349
Finite differences, 316
First-order system:
  normalized Bode plot, 111-112
  normalized step response, 81
Flapper nozzle, 512-513
Flash converter, 404
Float regulator, 3
Flow:
  chart, programming, 559-562
  control circuit, 536-540
  control valve, 505-506
  equations, valve, 514-523
  gain, valve, 520
  meter, 289-290
  metering characteristics, valve, 519-520
  paths, control valve, 514
Fluid power:
  circuits, 534-542
    bleed-off, 539
    hi-lo, 537-538
    meter-in/out, 538-539
    regenerative, 540
  components, 495
    actuators, 296-300, 524
    control valves, 496-523
  strategies, 533-534
  power management, 540-543
Flyback diode, 427, 430-431
Forgetting factor, 469
Forward difference, 366
Forward rectangular rule, 337-338, 366
Frequency:
  PWM, 428
  to voltage converter, 559
Frequency response (see Bode plots)
Full duplex, 313-314
Full state observer, 454
Fuzzification, 483
Fuzzy association matrix (FAM), 484, 487
Fuzzy logic, 478-489
  crisp vs. fuzzy, 480
  membership function, 480
Fuzzy sets, 480
Gain margin, 118, 174-178, 258, 261
  in controller design, 228-233
Gain scheduling, 470-471
Genetic algorithm, 491
Global stability, 153, 474-476
Gray code, 417-418
Guided poppet, pressure control valve, 498-499
Half duplex, 313-314
Half-step, 420
Hall effect transducer, 294-295, 418
Hamilton, W., 4
Hardware in the loop, 405
Hazen, H., 5
Hedging, fuzzy logic, 481
Height, fuzzy logic, 481
Heuristic approach, 479
Higher-level buses, 413-414
High frequency asymptote, 111
High pass filter, 307
Hi-lo circuit, hydraulic, 537-538
Hold, 325
Holding torque, 420
HST (see Hydrostatic transmission)
Hurwitz, A., 4
Hybrid vehicle, 553-562
Hydraulic (see also Fluid power):
  accumulator, 60, 61-62
  position control system, 285-287
  pump/motor, 58, 299
Hydraulic system:
  case study, 58-65
  industrial vs. mobile, 495-496
  modeling, 38-39
Hydropneumatic accumulator, 61-62
Hydrostatic transmission (HST), control of, 553-562
Hysteresis, 474-475
  in electrohydraulic valves, 513, 520
  in pressure control valves, 500
Idle time, 534
IGBT (see Insulated-gate bipolar transistor)
Impedance, 403, 426