
Control of Systems

with Constraints
Ph.D. Thesis

by Martin Bak

Department of Automation
Technical University of Denmark

November 2000

Department of Automation
Technical University of Denmark
Building 326
DK-2800 Kongens Lyngby
Denmark
Phone: +45 45 25 35 48

First edition
Copyright © Department of Automation, 2000

The document was typeset using LaTeX; drawings were made in Xfig and graphs were
generated in MATLAB, a registered trademark of The MathWorks, Inc.

Printed in Denmark at DTU, Kongens Lyngby


Report no. 00-A-898
ISBN 87-87950-85-5

Preface

This thesis is submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Ph.D.) in the subject area of electrical engineering. The work was carried out
from September 1997 to November 2000 at the Department of Automation, Technical University of Denmark. Supervisors on the project were Associate Professor Ole
Ravn, Department of Automation, and Associate Professor Niels Kjølstad Poulsen,
Department of Mathematical Modelling.
Several people have contributed to making the project and the writing of this thesis
much more pleasant than I expected it to be. I am thankful to my supervisors for
their guidance and constructive criticism during the study, and to Claude Samson
and Pascal Morin, who made my stay in 1998 at INRIA in Sophia Antipolis, France,
possible. A special thanks goes to my colleague and office-mate Henrik Skovsgaard
Nielsen for many comments and valuable discussions throughout the three years.
Thanks also to Thomas Bak for proofreading an early and more challenging version
of the manuscript and for contributing to improving the standard of the thesis.
Kongens Lyngby, November 2000
Martin Bak

Abstract

This thesis deals with the problem of designing controllers for systems with constraints. All control systems have constraints, and these should be handled appropriately, since unintended consequences include overshoot, long settling times, and
even instability. Focus is put on level and rate saturation in actuators, but aspects
concerning constraints in autonomous robot control systems are also treated. The
complexity of robotic systems, where several subsystems are integrated, puts special
demands on the controller design.
The thesis has a number of contributions. In the light of a critical comparison of
existing methods for constrained controller design (anti-windup, predictive control,
nonlinear methods), a time-varying gain scheduling controller is developed. It is
computationally cheap and shows good constrained closed-loop performance.
Subsequently, the architecture of a robot control system is discussed. Importance
is attached to how and where to integrate the handling of constraints with respect
to trajectory generation and execution. This leads to the layout of a closed-loop
sensor-based trajectory control system.
Finally, a study of a mobile robot is given. Among other things, a receding horizon
approach to path following is investigated. The predictive property of the controller
makes the robot capable of following sharp turns while scaling the velocities. As a
result, the vehicle may follow an arbitrary path while actuator saturation is avoided.

Resumé

This thesis deals with the design of controllers for systems with constraints.
All control systems have constraints, and these should be handled appropriately,
since unintended consequences can include overshoot, long settling times, and
instability. The focus is on level and rate constraints in actuators, but issues
concerning the handling of constraints in autonomous robot control systems are
also described, where the complexity and the integration of several subsystems
place special demands on the control part.
The thesis makes several contributions. Based on a critical comparison of existing
methods for constrained control systems (anti-windup, predictive control, and
nonlinear methods), a time-varying controller is developed that is computationally
cheap and exhibits good closed-loop properties. The method is based on a scaling
of the gain.
Subsequently, the handling of constraints in robot control systems is discussed,
with emphasis on the architecture and implementation of closed-loop sensor-based
trajectory generation and execution.
The thesis concludes with a study of various aspects of a control system for a
mobile robot. Among other things, a predictive controller is developed that enables
the robot to anticipate sharp turns on the route and to scale the velocity
appropriately while the turn is performed. This ensures that actuator saturation
is avoided while the vehicle remains on the planned route.

Contents

Preface  iii
Abstract
Resumé  vii
List of Figures  xvii
List of Tables  xx
Nomenclature  xxi

1 Introduction
  1.1 Motivation and Background
    1.1.1 Constraints in Control Systems
  1.2 Research in Constrained Control Systems
  1.3 Contributions
  1.4 Objectives and Organization of the Thesis

2 Controller Design for Constrained Systems  11
  2.1 Control Design Strategies  11
  2.2 Anti-windup and Bumpless Transfer  12
    2.2.1 Ad hoc methods  16
    2.2.2 Classical Anti-Windup  17
    2.2.3 Observer-based Anti-Windup  20
    2.2.4 Conditioning Technique  24
    2.2.5 Generic Frameworks  27
    2.2.6 Bumpless Transfer  30
    2.2.7 Comparison of Methods  35
  2.3 Predictive Controller Design  36
    2.3.1 Overview of Criteria Selection  38
    2.3.2 Process Models  40
    2.3.3 Predictions  42
    2.3.4 Unconstrained Predictive Control  43
    2.3.5 Constrained Predictive Control  45
    2.3.6 Design of a Predictive Controller  50
  2.4 Nonlinear and Time-Varying Methods  50
    2.4.1 Rescaling  51
    2.4.2 Gain Scheduling  55
    2.4.3 Conclusions  63
  2.5 Comparison of Constrained Control Strategies  63
  2.6 Case Studies  65
    2.6.1 PI Control of Single Integrator  66
    2.6.2 Anti-Windup Control of Double Tank System with Saturation  67
    2.6.3 Constrained Predictive Control of Double Tank with Magnitude and Rate Saturation  78
  2.7 Summary  80

3 Robot Controller Architecture  83
  3.1 Elements of a Robot Control System  84
    3.1.1 Constraints  87
  3.2 Trajectory Generation and Execution  87
    3.2.1 Design Considerations  88
    3.2.2 Specification of the Trajectory  90
    3.2.3 Existing Schemes for Trajectory Control  90
  3.3 General Architecture  92
    3.3.1 Considerations  95
  3.4 Architecture for Mobile Robot  95
  3.5 Summary  98

4 Case Study: a Mobile Robot  99
  4.1 The Mobile Robot  100
  4.2 Localization with Calibration of Odometry  102
    4.2.1 Odometry and Systematic Errors  104
    4.2.2 Determination of Scaling Error  105
    4.2.3 Determination of Systematic Odometry Errors  107
    4.2.4 Conceptual Justification of Approach  108
    4.2.5 Experimental Results  114
    4.2.6 Conclusions  116
  4.3 Receding Horizon Approach to Path Following  117
    4.3.1 Path Following Problem Formulation  118
    4.3.2 Constraints  119
    4.3.3 Linear Controller with Velocity Scaling  121
    4.3.4 Nonlinear Receding Horizon Approach  124
    4.3.5 Linear Receding Horizon Approach  127
    4.3.6 Conceptual Justification of Approach  132
    4.3.7 Experimental Results  135
    4.3.8 Conclusions  138
  4.4 Posture Stabilization with Noise on Measurements  139
    4.4.1 Periodically Updated Open-loop Controls  139
    4.4.2 Noise Sensitivity  142
    4.4.3 Filtering  144
    4.4.4 Conclusions  148
  4.5 Summary  148

5 Conclusions  149

A Linearized System for Estimating Systematic Errors  153

B Closed-loop Solution to Chained System  155

Bibliography  157

List of Figures

1.1 Input and output response to a reference step change for a PI controlled unstable system with actuator saturation (left) and without saturation (right).
1.2 General control system with various constraints, limitations, and disturbances.
1.3 Typical saturation characteristic.
2.1 Common system notation for AWBT schemes. The unconstrained controller output is denoted u while the (possibly) constrained process input is denoted û. The controller may be 1 or 2 DoF.  14
2.2 A PID controller structure with classical anti-windup.  18
2.3 PI controller with simple AWBT compensation.  19
2.4 Traditional combination of an observer and a state feedback. In presence of saturation, an inconsistency exists between the observer and system states.  21
2.5 1 DoF state space controller with observer-based AWBT compensation.  22
2.6 Hanus' conditioning technique. C(s) represents the controller transfer function.  25
2.7 The framework of Edwards and Postlethwaite.  29
2.8 Bumpless transfer using a controller tracking loop.  33
2.9 Bi-directional bumpless transfer. The active control loop is indicated with bold lines. Subscripts A and L denote active and latent controller, respectively, and superscript T indicates tracking controller. At the point of transfer, the four switches (dashed lines) are activated.  34
2.10 Model and criterion based control for a possibly time-varying system.  37
2.11 Dynamic rescaling.  51
2.12 Time response for the saturation scaling factor.  56
2.13 Root locus for a fourth order integrator chain with closed-loop poles as a function of the scaling factor for nonlinear (left) and linear (right) scaling; markers indicate nominal and open-loop poles.  58
2.14 The choice of the scaling factor rate.  59
2.15 Sketch illustrating the scaling factor when input is magnitude saturated.  60
2.16 Time response with saturation and gain scheduling (solid) using N = 10. For comparison the unconstrained linear response is shown (dashed). The uncompensated constrained response is unstable.  62
2.17 Time response for N = 10 and three values of the scaling parameter: 0.2 (solid), 1 (dashed), and 3 (dash-dotted).  62
2.18 Time response for N = 4 (solid), N = 10 (dashed), and N = 100 (dash-dotted).  62
2.19 Time response for N = 10 and x1(0) = 1 (solid), x1(0) = 50 (dashed), and x1(0) = 100 (dash-dotted).  63
2.20 Double tank system.  68
2.21 Time response for the unconstrained system (solid) and the constrained system without anti-windup (dashed).  70
2.22 The marginally stable response for the constrained system with a saturation-wise stable controller.  70
2.23 Time response for the constrained system for different choices of the anti-windup gain matrix. The unconstrained response (dashed), the constrained response (solid), and the input (dotted) are shown for anti-windup gains (a) Ma, (b) Mk, (c) Mh, and (d) Ms.  71
2.24 Eigenvalues of FM = F − M(·)H using the conditioning technique with cautiousness, with the cautiousness parameter varying from 0 to ∞. Eigenvalues for other designs are also shown: Ma, Mk, and Ms.  73
2.25 Time response for the unconstrained system with a pole placement controller (solid) and with a PID controller (dashed).  76
2.26 Time response for the constrained system with pole placement controller for different choices of the anti-windup gain matrix. The unconstrained response (dashed), the constrained response (solid), and the input (dotted) are shown for anti-windup gains (a) Mk, (b) Ms, (c) Mc, and (d) no AWBT.  77
2.27 Time response for constrained predictive controller (solid) and AWBT compensated pole placement controller (dashed), both with magnitude and rate saturation in the actuator.  80
2.28 Time response for output constrained predictive controller (solid) exposed to reference changes (dashed). The output is constrained such that no overshoot exists.  81
3.1 General robotic control system.  85
3.2 The path velocity controller (PVC). The feedback from the controller to the PVC module modifies the reference update rate upon saturating actuators.  91
3.3 Traditional off-line trajectory generation and non-sensor-based execution.  92
3.4 Overview of general generation/execution scheme for sensor- and event-based trajectory control.  93
3.5 General overview of sensor- and event-based trajectory control on mobile robots.  97
4.1 The mobile robot.  101
4.2 One realization of the scaling correction factor estimation. Initially, a 2% relative estimation error exists. After the experiment, the error is 0.004%.  110
4.3 Scaling correction factor estimation with 100 Monte Carlo estimations using a Simulink model and up to a 2 percent initial error on the physical parameters of the odometry model. (a) The top plot shows the average distance estimation error while the bottom plot shows the average relative scaling estimation error. (b) Initial (cross) and final (square) relative scaling estimation errors for the 100 simulations. On average the scaling correction factor is improved by a factor of 54.  111
4.4 Trajectory for mobile robot for calibrating systematic errors. The forward velocity is kept under 0.2 m/s and the total time to complete the course in both directions is about 50 seconds. An absolute measurement is provided every second.  112
4.5 One realization of systematic odometry factors estimation. The measurements are noiseless. The relative odometry correction estimation errors are shown.  113
4.6 Odometry correction factors estimation with 100 Monte Carlo estimations using a Simulink model and up to a 2 percent initial error on the physical parameters of the odometry model. The plot shows the average relative scaling estimation errors.  114
4.7 Calibration with three different initializations. Note the different timescales for the top and bottom plot.  115
4.8 Floor plan for an illustrative building.  117
4.9 The path following problem with a path consisting of two straight intersected lines.  119
4.10 The velocity constraints.  121
4.11 Definition of path following problem for nonlinear approach.  125
4.12 Straight path following with (solid) and without (dashed) nonlinear scheduling for an initial distance error of 1 m.  129
4.13 Straight path with velocity constraints and scaling.  133
4.14 Path following for different initial positions.  134
4.15 Scaled velocities. Top plot: forward (solid) and angular (dashed) velocity. Bottom plot: wheel velocities vl (solid) and vr (dashed). Constraints are shown with dotted lines.  134
4.16 Path following for different turns.  135
4.17 Small (solid), medium (dashed), and large (dash-dotted) values of the controller parameters.  136
4.18 Results from a path following experiment with the test vehicle. Reference path (dotted), measured path (solid).  137
4.19 The actual measured velocities (solid) compared to the desired velocity references (dashed).  137
4.20 Posture stabilization with exponential robust controller.  141
4.21 Noisy y2-measurements, with x1 (dashed), x2 (dotted), and x3 (solid).  142
4.22 (a) Convergence of the estimation error for time-varying s (solid) and constant s (dashed). (b) Steady-state for time-varying s (solid), constant s (dashed), and measurement noise (dotted).  147

List of Tables

2.1 Stop integration at saturation depending on saturation violation and tracking error.  16
2.2 Resulting R compensators for different AWBT schemes.  30
2.3 Comparison of AWBT schemes.  36
2.4 Process models obtained from the general linear structure.  41
2.5 Properties of discussed constrained control strategies.  65
2.6 Normed errors between constrained and unconstrained response exposed to reference step, impulse disturbance, and load disturbance.  72
2.7 Normed errors between constrained and unconstrained response to reference steps.  74
2.8 Normed errors between constrained and unconstrained response to impulse disturbances on x2.  75
2.9 Normed errors between constrained and unconstrained response with pole placement controller exposed to reference step, impulse disturbance, and load disturbance.  78
3.3 Motion control tasks in mobile robotics.  96
4.1 Condition numbers for the observability matrix for different trajectories.  112
4.2 Comparison of two different calibrations.  116
4.3 Quantification of the influence of noisy y2 measurements on the system states. $\Gamma(n) = \int_0^\infty x^{n-1} e^{-x}\,dx$.  144

Nomenclature

The following lists contain the most important symbols, abbreviations, and terminology used in the text.

Symbols

t             Continuous time
k             Discrete time
q^{-1}        Backward shift or delay operator: q^{-1} f(k) = f(k-1)
T             Sampling period or sampling time
t_k           Sampling instant: t_k = kT
u             Controller output
û             Process input, possibly constrained
y             Process output
x             Process state vector
Δ             Differencing operator: Δ = 1 - q^{-1}
deg()         Degree of polynomial
dim()         Dimension of matrix (rows × columns)
diag(v)       Diagonal matrix with diagonal elements equal to the elements of v
(v)_i         i-th element of the vector v
()^T          Transpose of matrix
(^)           Estimated variable (note û is the constrained process input)
(~)           Estimation error variable
E{}           Expectation operator
∂()           Partial derivative operator
R^n           Real space of dimension n
Z             Set of integers: ..., -2, -1, 0, 1, 2, ...
F(m, R)       Probability distribution with mean value m and covariance R
N(m, R)       Gaussian distribution with mean value m and covariance R
ω_n, ω_d, ζ   Undamped and damped natural frequency and the damping ratio
1_m(x)        A vector [x, x, ..., x]^T of length m
I_Q           A block diagonal matrix with m instances of the matrix Q in the diagonal: I_Q = diag(1_m(Q))
||v||         Euclidean or L2 norm: sqrt((v)_1^2 + (v)_2^2 + ... + (v)_n^2)
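As an illustrative sketch (not part of the thesis), the stacking notation 1_m(x) and the block diagonal I_Q = diag(1_m(Q)) translate directly into code; the helper names ones_m and block_I_Q below are hypothetical:

```python
def ones_m(x, m):
    """1_m(x): the vector [x, x, ..., x]^T of length m, as a list."""
    return [x] * m

def block_I_Q(Q, m):
    """I_Q = diag(1_m(Q)): block diagonal matrix with m copies of the
    square matrix Q (given as a list of rows) on the diagonal."""
    n = len(Q)
    size = m * n
    M = [[0] * size for _ in range(size)]
    for b in range(m):              # place copy b of Q on the diagonal
        for i in range(n):
            for j in range(n):
                M[b * n + i][b * n + j] = Q[i][j]
    return M
```

For example, block_I_Q([[1, 2], [3, 4]], 2) yields a 4x4 matrix with the 2x2 block repeated twice on the diagonal and zeros elsewhere.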

Abbreviations

AGV      Autonomous Guided Vehicle
ARIMAX   Auto-Regressive Integrated Moving-Average eXogenous
ARMAX    Auto-Regressive Moving-Average eXogenous
ARX      Auto-Regressive eXogenous
AW       Anti-Windup
AWBT     Anti-Windup & Bumpless Transfer
BT       Bumpless Transfer
CPC      Constrained Predictive Control
DC       Direct Current
DoF      Degree-of-Freedom
EKF      Extended Kalman Filter
GCT      Generalized Conditioning Technique
GMV      Generalized Minimum Variance
GPC      Generalized Predictive Control
GPS      Global Positioning System
LQG      Linear Quadratic Gaussian
LTR      Loop Transfer Recovery
MIMO     Multi-Input Multi-Output
MPC      Model Predictive Control
MV       Minimum Variance
PD       Proportional-Derivative
PI       Proportional-Integral
PID      Proportional-Integral-Derivative
PVC      Path Velocity Controller
QP       Quadratic Programming
RHC      Receding Horizon Control
RLS      Recursive Least Squares
RST      Controller structure: R(q^{-1})u(k) = T(q^{-1})r(k) - S(q^{-1})y(k)
SCPC     Suboptimal Constrained Predictive Control
SISO     Single-Input Single-Output
UPC      Unified Predictive Control

Terminology

In terms of time responses for dynamic systems, the following cases are considered:

Unconstrained: No constraints exist in the system. This is the ideal situation, and the nominal controller is used.

Constrained uncompensated: The constraints (actuator, states, output) are now present, but the nominal uncompensated controller from the unconstrained case is still used. The design and implementation of the controller have not considered the constraints.

Constrained compensated: The constraints are present, and some sort of compensation for the constraints is included in the controller.

Chapter 1

Introduction

1.1 Motivation and Background


Constraints are present in all control systems and can have damaging effects on the
system performance unless accounted for in the controller design process. Most
often, constraints are identified as actuator magnitude and rate saturations, or output and state variable constraints. This thesis considers the above-mentioned constraints but also broadens the perspective to include issues such as nonholonomic
constraints and limited or constrained sensor packages and readings.
In some cases constraints are handled in a static way by over-designing the system
components such that during normal operation saturation or other limitations are
unlikely to be activated. From a practical point of view, however, this is highly
inefficient: it unnecessarily increases the cost of the overall system and is not
a recommendable approach. For example, saturation can be avoided by choosing a larger,
more powerful actuator than needed. However, if at some point the production
speed or the load disturbances are increased, constraints may again be of concern
and the over-design has failed. Moreover, applications where the production speed
is limited by the controller performance (e.g. a welding robot) will look for time-optimal solutions and hence exploit the full range of the system's actuators.

Therefore, controllers fitted to operate securely in the presence of constraints are
highly relevant.
Actuator saturation is among the most common and significant nonlinearities in a
control system. In the literature, a number of examples are given where neglecting
the saturation has led to crucial difficulties and endangered the overall stability of
the system. One application area is flight control of high-agility aircraft. Dornheim
(1992) and Murray (1999) refer to a YF-22 aircraft crash in April 1992. The incident was blamed on pilot-induced oscillations partly caused by rate saturation of
control surfaces inducing time-delay effects in the control loop. A similar example
is a Gripen JAS 39 aircraft crash in August 1993 (Murray, 1999). Again, saturation
played a critical role.
Actuator saturation has also been cited as one of several unfortunate factors
leading to the 1986 Chernobyl nuclear power plant disaster, where unit 4 melted
down with dreadful consequences (Stein, 1989; Kothare, 1997). It has been reported that rate limitations aggravated an already hazardous situation (Stein, 1989).
When the nuclear process started accelerating during a test demonstration, the automatic regulating system began pushing the control rods into the core in order to
slow down the process. But the speed of the control rod movement was limited
(and the rods mainly entered from the top of the core, concentrating the reactivity
in the bottom), and the system was unable to catch the runaway reaction. The rest
is dreadful history.
As an example of the instability problems caused by actuator saturation, consider
Figure 1.1, where the step response of a first-order, open-loop unstable, PI controlled system is shown with and without actuator constraints. An increase in the
performance of the actuator could save the system, just as a larger reference change
could once again jeopardize the stability of the system. The point is that a control
design without consideration of the constraints acting on the system is prone to
produce poor performance or even cause instability.
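The windup mechanism behind this kind of instability can be reproduced with a few lines of simulation. The sketch below is a hypothetical example, not the exact plant of Figure 1.1: a PI controller on the unstable first-order plant dx/dt = 0.5x + u, with the input clipped to ±1. A reference step to 4 would require a steady-state input of −2, so the constrained loop diverges while the unconstrained loop settles:

```python
def simulate(r=4.0, a=0.5, kp=2.0, ki=1.0, u_lim=1.0,
             constrained=True, dt=0.01, t_end=12.0):
    """Forward-Euler simulation of the unstable plant dx/dt = a*x + u
    under PI control, optionally with input magnitude saturation."""
    x, integral = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = r - x
        u = kp * e + ki * integral
        if constrained:
            u = max(-u_lim, min(u_lim, u))  # actuator saturation
        x += dt * (a * x + u)               # plant update
        integral += dt * e                  # integrator state
    return x
```

Without the limit the loop settles at the reference; with the ±1 limit the plant cannot be held at x = 4 (an input of −2 would be needed) and the state runs away.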

1.1.1 Constraints in Control Systems


Traditionally, when speaking of constrained control systems, one refers to actuator, state, or output constraints. In this thesis, this conceptual understanding of constraints is further extended to include limitations and constraints in sensor packages, model knowledge, mobility, etc. Figure 1.2 shows a fairly general control
system and indicates where some conflicts may arise.

[Figure 1.1 shows input and output plots over time for two cases: (a) Constrained and (b) Unconstrained.]

Figure 1.1. Input and output response to a reference step change for a PI controlled unstable
system with actuator saturation (left) and without saturation (right).

[Figure 1.2 shows a block diagram: reference → Controller(s) → Actuator → Plant → output, with Sensors and Data processing in the feedback path. Annotated constraint sources include controller selection criteria, magnitude and rate limits, disturbances, work space constraints, model errors, safety limits, noise, miscalibration, and time delays.]

Figure 1.2. General control system with various constraints, limitations, and disturbances.

In the following, some of the issues in the figure are discussed in detail.
Actuators
Control systems rely on actuators to manipulate the state or output of the controlled
process such that performance requirements are met. In all physical systems, actuators are subject to saturation as they can only deliver a certain amount of force
(or equivalent). Most frequently encountered are level or magnitude constraints,
where the actuator can only apply a value between an upper and a lower limit.
Less frequently accounted for are rate constraints, where the change in the actuator
output is limited.
Two examples of constrained actuators are a DC motor and a valve. For control of
rotary motion, a common actuator is the DC motor. Such a device is constrained
in input voltage and current, which without load is equivalent to limits on velocity
and acceleration. Another common device is the valve. Here the level limits are inherently given as fully closed or fully open. The rate constraints are governed by
the mechanism (e.g. a motor) driving the valve.
Figure 1.3 shows a typical saturation characteristic for an input level or magnitude
saturated actuator.

[Figure 1.3 shows the saturation characteristic: the output is limited to the range from u_min to u_max as a function of the input û.]

Figure 1.3. Typical saturation characteristic.

Mathematically, the saturation function sat(u) is for $u \in \mathbb{R}$ defined as

$$
\mathrm{sat}(u) =
\begin{cases}
u_{\max}, & u \ge u_{\max} \\
u, & u_{\min} < u < u_{\max} \\
u_{\min}, & u \le u_{\min}
\end{cases}
\tag{1.1}
$$

and for $u \in \mathbb{R}^n$ as

$$
\mathrm{sat}(u) =
\begin{pmatrix}
\mathrm{sat}(u_1) \\
\mathrm{sat}(u_2) \\
\vdots \\
\mathrm{sat}(u_n)
\end{pmatrix}.
\tag{1.2}
$$

The values $u_{\min}$ and $u_{\max}$ are chosen to correspond to actual actuator limits, either
by measuring the actuator output or simply by estimation. Input rate saturation can
in a similar way be modelled by applying the differencing operator $\Delta = 1 - q^{-1}$
to the saturation function:

$$
\mathrm{sat}(\Delta u) =
\begin{cases}
\Delta u_{\max}, & \Delta u \ge \Delta u_{\max} \\
\Delta u, & \Delta u_{\min} < \Delta u < \Delta u_{\max} \\
\Delta u_{\min}, & \Delta u \le \Delta u_{\min}
\end{cases}
\tag{1.3}
$$

where $\Delta u_{\max}$ and $\Delta u_{\min}$ are the upper and lower limits of the rate constraints,
respectively.
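As a hedged illustration (not from the thesis), the magnitude saturation (1.1)-(1.2) and the rate saturation (1.3) translate directly into code; the helper names below are assumptions:

```python
def sat(u, u_min, u_max):
    """Magnitude saturation, eq. (1.1); applied elementwise to a
    list or tuple, matching the vector case, eq. (1.2)."""
    if isinstance(u, (list, tuple)):
        return type(u)(sat(ui, u_min, u_max) for ui in u)
    return max(u_min, min(u_max, u))

def rate_sat(u, u_prev, du_min, du_max):
    """Rate saturation, eq. (1.3): limit the change du = u - u_prev
    before applying it to the previous input value."""
    du = sat(u - u_prev, du_min, du_max)
    return u_prev + du
```

For example, sat(1.7, -1.0, 1.0) returns 1.0, and rate_sat(2.0, 0.0, -0.5, 0.5) limits the step from 0 toward 2 to an increment of 0.5.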

Very few general theoretical results exist on stability for actuator-constrained systems, the following being a well-known exception:

For a linear system subject to actuator saturation, a globally
asymptotically stabilizing bounded feedback exists if and only if the
system is stabilizable and the open-loop system has no poles in the
open right half-plane (Sussmann et al., 1994).

In other cases, restrictions on the initial conditions and external disturbances
(reference changes, noise) can ensure closed-loop stability for the system.
Constraints can be classified as either hard or soft. For hard constraints, no violation of the limits is permissible or feasible at any time during
operation, whereas soft constraints may temporarily be allowed to violate the specified limits.
Sensor package
Calculations of controller outputs are based on sensors providing information about
the state of the process. Noisy, slow, or unreliable (but often inexpensive) sensors
limit the performance and robustness of the control task. Meticulous filtering and
possibly fusion of data from multiple sources can improve the information, but at times
the performance of a controller is constrained by the sensor system. For example, in
mobile robotics, positional information, which is vital to solving a variety of motion
tasks, is difficult to obtain with one sensor alone, and no combination of sensors
has so far proved to be sufficiently robust and versatile for generic applications.
Mobility/Steerability
The actual mechanical construction of a system can itself impose constraints. In
robotics, the joints of a manipulator are likely to have limited flexibility. For a mobile robot, the wheel configuration may restrict the controlled degrees of freedom
to two: typically, the robot can turn on the spot or move forward, but it cannot directly move sideways. The vehicle is said to be underactuated or nonholonomic¹.
State and Work space, safety
Some applications must take into account constraints in the state of the system
and in the working environment, that is state and output constraints. In robotics,
manipulators and mobile robots, traveling from one point to another, frequently
encounter obstacles which impose motion constraints.
Some constraints are self-imposed by the designer or user of the system. For safety
reasons it may be advantageous to limit states such as temperatures, pressures, velocities, turning rates, and voltages. Furthermore, in order to avoid unnecessary
mechanical wear and tear, constraints can be imposed on movements causing sudden jerks and yanks.
Model-knowledge and calibration
Many control strategies are model-based2, for which reason their success is limited by the accuracy of the models. Error sources are inadequate calibration, inaccessible physical parameters, unmodelled dynamics, and linearization of nonlinear dynamics.
1 The robot is restricted in its mobility, i.e. the robot can move in 3 dimensions (position and orientation) but can only directly control two (heading and angular speed).
2 Model-based design implies that a mathematical dynamical model of the process is used in the design of a controller. An example of this is predictive control, where the process model predicts the system's behavior and enables the controller to select a control signal based on what the future will bring.


1.2 Research in Constrained Control Systems


The term constrained control system usually refers to control systems with input, output or state constraints. The research within this field primarily focuses on otherwise linear plants. Two well-known techniques have received much attention:

Anti-Windup and Bumpless Transfer (AWBT).

Constrained Predictive Control (CPC).

An additional category is nonlinear stabilization of constrained linear systems, which includes a variety of rescaling and gain scheduling approaches along with a number of specific solutions to specific problems.
AWBT control is a popular choice due to its heuristic, uncomplicated approach to actuator saturation. It is computationally cheap and can be added to any existing unconstrained linear controller. However, the technique lacks optimality and stability proofs, and can only deal with input magnitude saturation.
An alternative to AWBT is predictive control, which is the only systematic approach to handling constraints. CPC can handle magnitude and rate constraints on the input, output, or state variables. The main disadvantage of CPC is the involved on-line optimization, which must be completed between two sampling instances. The application of the optimal CPC is hence restricted to relatively slowly sampled processes. For this reason CPC is a popular choice in the chemical process industry, although suboptimal algorithms have found applicability in faster electro-mechanical systems.
A growing interest in the development of autonomous robotic systems has created a demand for intelligent handling of constraints. Often, constraints in robots (saturation, obstacles, etc.) appear when planning and executing a motion in space. A robot has a high degree of interaction with a generally nondeterministic, dynamic work environment. This suggests using sensor-based closed-loop trajectory planning and control, where the system can utilize sensor information as it becomes available to generate motion references in the presence of changing and uncertain events.
A significant question is where to handle the constraints in robotic systems. For example, for a robot, motion along a predefined path is common, and in case of a level actuator constraint, two different ways of handling it exist. The controller may choose to follow the path while decreasing the velocity along the trajectory, or it may choose to keep the velocity but deviate from the nominal path. Either way has advantages, but the selected behavior should be a design option. This suggests handling the constraints in the reference or trajectory generator instead of in the low-level motion controllers. This way the system has direct control of how the saturation compensation affects the resulting trajectory.
How to design and implement such a closed-loop event-based trajectory scheme
which can deal with both constraints and unexpected events is a difficult question.
Challenges include development of algorithms and stability investigations of the
required feedback loops.

1.3 Contributions
The main contributions of this thesis are as follows:

Anti-windup and bumpless transfer compensation is addressed. The diversity of suggested approaches is compared and the selection of the anti-windup parameter is discussed. It is argued that, for some systems, placing the poles of the anti-windup compensated constrained controller close to the slowest poles of the unconstrained closed-loop system yields good performance to reference changes and load disturbances.

A predictive controller using a state space approach is derived. Modelling and handling of constraints within this framework is addressed.

For a rescaling approach, the explicit formulation of the actuator bounds is difficult. It is shown how these bounds may be estimated adaptively and hence ease the application of the approach.

A gain scheduling approach for handling level saturations on a chain of integrators with state feedback is presented.

The architecture of a robot control system is discussed with focus on various constraints, how to handle them, and where in the hierarchical control system to do it.

A case study on a mobile robot is described. The results are three-fold:

The determination of systematic kinematic errors is addressed and an auto-calibration procedure using on-board filters and sensors is presented.

It is shown how a receding horizon approach can be used to path control a mobile robot, making it capable of anticipating sharp turns etc. The velocity constraints are handled with scaling, which is computationally inexpensive.

It is shown how algorithms for posture stabilization of a nonholonomically constrained mobile robot are particularly sensitive to noisy measurements. A nonlinear time-varying filter is presented that reduces the noise impact.

1.4 Objectives and Organization of the Thesis


The thesis aims to create an understanding of the problems caused by constraints in control systems and to provide the insight and knowledge required to analyze and design controllers that can handle the specified constraints in a sufficiently satisfactory way. What the term satisfactory way covers depends on the constraints and the control system. In some cases, the constrained closed-loop system is possibly unstable and ensuring stability is of greatest concern, while in other cases, the constraints are not likely to cause stability problems and the performance of the system while constraints are active is the issue. The emphasis throughout the thesis will be put on




The design of constrained controllers. This includes add-on designs such as anti-windup compensation, embedded designs such as constrained predictive control, and nonlinear time-varying approaches such as rescaling and gain scheduling. The focus is on performance of the constrained closed-loop systems.

The architecture and building blocks of a robotic control system, presented in the context of how and where to handle various constraints in the system.

A case study of a mobile robot. The robot is nonholonomically constrained, which calls for special attention regarding the control system. Issues such as localization, posture stabilization, and path following are treated.

The thesis is organized with three main chapters concerning the above mentioned
issues. Details are as follows:


Chapter 2. Controller Design for Constrained Systems

Different approaches to designing controllers for systems with constraints are presented. These include methods such as anti-windup and bumpless transfer, constrained predictive control, and nonlinear approaches such as rescaling and gain scheduling. Focus is on level and rate saturated actuators. A comparative study of anti-windup and predictive control on a cascaded double tank system is included.
Chapter 3. Robot Controller Architecture
A robot control architecture is described with emphasis on the different layers, the signal and information paths, and the elements, i.e. controllers, sensors, trajectory generators, and higher level intelligence modules. The chapter investigates a sensor-based closed-loop approach to trajectory control, and details are given on the relevant feedback loops for dealing with constraints.
Chapter 4. Case Study: a Mobile Robot
As an example of a complex constrained control system, a mobile robot control system is described and examined. The chapter investigates problems
concerning limited sensor packages, constrained path following control, and
noisy measurements used to posture stabilize a nonholonomic mobile robot.
Chapter 5. Conclusions
Conclusions and future work recommendations are given.

Chapter 2

Controller Design for Constrained Systems

2.1 Control Design Strategies


This chapter discusses general controller design in the presence of constraints. A rich theory exists for designing linear controllers (PI, PID, LQG, pole placement, robust) and nonlinear controllers (feedback linearization, sliding control, gain scheduling, backstepping) for linear, linearized or nonlinear systems. Nonetheless, these popular techniques have no direct means of dealing with nonlinearities in terms of input, output or state constraints. Although constraints sometimes may be neglected, in general they induce undesired operational difficulties, e.g. loss of stability, unless accounted for properly.
In studying constrained control systems, three fundamentally different control
strategies exist:
1. Retro-fitted design to an existing linear controller that in the desired way compensates for the constraints under consideration. This strategy includes anti-windup compensation.


2. Predictive Control, which is a strategy capable of handling constraints in a
systematic way as an embedded part of the controller.
3. Nonlinear Control, where specific nonlinear solutions solve or compensate
the constrained control problem, typically only for a class of otherwise linear
systems.

Predictive control and nonlinear control belong to the group of strategies classified as embedded constraint handling. This reflects that the design is not just an extra compensation loop added to an original controller. However, it does not exclude the constrained controller from being a linear controller during normal unconstrained operation or from originating from a linear controller.

Constraints exist in all control systems, with actuator saturation being by far the most common one. This explains why most approaches described in the literature are concerned with actuator saturation.

2.2 Anti-windup and Bumpless Transfer


It has been recognized for many years that actuator saturations, which are present in most control systems, can cause undesirable effects such as excessive overshoots, long settling times, and in some cases instability. These phenomena were initially observed for systems controlled by conventional PID control but will be present in any dynamical controller with relatively slow or unstable states, as pointed out in Doyle et al. (1987).

The problem arises when an actuator saturates, resulting in a mismatch between the controller output and the system input. This means that the feedback loop is broken: changes in the controller output (outside the linear range of the actuator) do not affect the system; the loop gain is zero. The integrator in the PID controller, for example, is an unstable state, and saturations can cause the integrator to drift to undesirable values as the integrator will continue to integrate a tracking error and produce ever larger control signals. In other words, we have integrator windup, which can lock the system in saturation. In process control, integrator windup is often referred to as reset windup since integral control is called reset control1. Control techniques designed to reduce the effects of integrator windup are known as anti-windup (AW).
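The effect can be reproduced in a few lines of simulation; the plant, gains and limits below are arbitrary illustrative choices, not taken from the thesis. A discrete PI controller drives a first-order plant through a tight saturation, and the integral state keeps growing far beyond any useful value while the actuator is pinned at its limit.

```python
# Integrator windup: discrete PI loop around a first-order plant with
# actuator saturation. All numerical values are illustrative.
K, Ti, T = 2.0, 1.0, 0.05        # PI gain, integral time, sample time
u_max = 1.0                      # actuator limit
a, b = 0.98, 0.02                # plant: y(k+1) = a*y(k) + b*u_hat(k)
y, I = 0.0, 0.0
r = 5.0                          # large reference step -> deep saturation
peak_I = 0.0
for k in range(400):
    e = r - y
    u = K * e + I                          # unconstrained PI output
    u_hat = max(-u_max, min(u, u_max))     # saturated plant input
    I += K * T / Ti * e                    # integrator keeps winding up
    y = a * y + b * u_hat
    peak_I = max(peak_I, I)
print(peak_I > 10 * u_max)  # True: integral state far beyond the actuator limit
```

With the reference unreachable under the given limit, the persistent tracking error drives the integral state orders of magnitude past the saturation level, which is exactly the windup mechanism described above.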
1 Historical note (Franklin et al., 1991): In control applications before integral control became standard, an operator would remove a potential steady-state output error by resetting the reference to a new value, reducing the error. For example, if the reference is 1 and the output reads 0.9, the operator would reset the reference to, say, 1.1. With integral control, the integrator automatically does the resetting.


Another control problem is often exposed in parallel with windup, at least from a
theoretical point of view. When substituting one controller with another in a control system, there may, at the moment of transfer, exist a mismatch between the
output of the two controllers causing bumps on the output of the system. This is
equivalent to the case with a saturating actuator where we had the same mismatch.
The problem of retaining a smooth control signal and system output when switching controllers is known as bumpless transfer (BT). Controller substitution is, from
a practical point of view, interesting for a number of reasons (Graebe and Ahlen,
1996):
1. A linearized model of a plant is only valid in a neighborhood of an operating point and to ensure a wider operating envelope, a number of controllers
applying to different operating points may have been designed and operation
requires transfer from one controller to another.
2. In industrial processes it is convenient to be able to switch between manual
and automatic control for example during maintenance or startup.
3. During evaluation or tuning of a filter or controller it is convenient to be able
to switch between a new test controller and a well-known stable controller. It
is especially advantageous to have the reliable well-known stable controller
ready to catch the plant if the test controller fails.
Most often the two areas are referred to jointly as anti-windup and bumpless transfer (AWBT) and consider the problem where a nonlinearity operates on the controller output, be it a saturation element, relay, dead-zone, hysteresis, or a controller substitution due to switching or logic. Figure 2.1 shows a common system notation for AWBT schemes where u is the controller output and û is the actual system input. In short, the AWBT problem can be expressed as keeping the controller output u close to the system input û under all operational conditions.
As a contrast to predictive control, which is a control strategy capable of handling actuator saturation as an embedded part of the controller (see section 2.3), an AWBT scheme is a modification to an already designed, possibly nonlinear, controller, and the AWBT modification will only interfere when saturation or substitution occurs. During normal operation, the closed-loop performance is equal to the


[Figure: r → Controller → (u) → Nonlinearity → (û) → Plant]

Figure 2.1. Common system notation for AWBT schemes. The unconstrained controller output is denoted u while the (possibly) constrained process input is denoted û. The controller may be 1 or 2 DoF.

performance of the original design. Thus, the design of an AWBT controller is a two step procedure:

1. Design a (non)linear2 controller using standard design methods. Consider only the linear range of the actuator.

2. Fit an AWBT scheme upon the controller. This will (should) not alter the existing controller during normal operation.
During the last 40 years or so a substantial number of papers have launched a wide range of different approaches to AWBT. Initially, the propositions were, as the nature of the problem itself, all based on practical problems experienced during operation of control systems and would therefore relate only to specific controllers and systems. In the late eighties general methods with respect to the controller and the system evolved. These include the classical anti-windup (Fertik and Ross, 1967; Åström and Wittenmark, 1990), the Hanus Conditioning Technique (Hanus, 1980; Hanus et al., 1987) and refinements (Walgama et al., 1992; Hanus and Peng, 1992), and the observer-based method (Åström and Rundqwist, 1989; Walgama and Sternby, 1990). Recently, unifying frameworks embracing all existing linear time-invariant AWBT schemes have been put forward (Kothare et al., 1994; Edwards and Postlethwaite, 1996, 1998; Peng et al., 1998). In Graebe and Ahlen (1994, 1996) bumpless transfer is investigated with comments on the practical differences between anti-windup and bumpless transfer.
As already indicated, a large number of ways exist to keep the system input û and the controller output u close. The following lists some basic guidelines for dealing with actuator saturation:

2 Some AWBT schemes are specific to certain controller structures such as a PID controller and will not work for a nonlinear controller.


1. Avoid conditionally stable3 control systems (Middleton, 1996). An open-loop strictly unstable system can only be conditionally stabilized.

2. Avoid applying unrealistic reference signals or modify/filter the reference when saturation is likely. Especially steps on the reference should be avoided.

3. Utilize AWBT feedback when implementing the controller.

Of course, the designer does not usually choose the system, so guideline number one should be regarded more as a caution. Saturation compensation is particularly important in this case.
Specifically regarding AWBT design, the following three criteria should be kept in mind during the design process:

The closed-loop system must be stable, also under saturation. This, of course, assumes that the system is globally stabilizable with the available actuator range, which is only guaranteed for open-loop stable systems (Sussmann et al., 1994).

During normal operation (no actuator nonlinearities or controller substitutions) the closed-loop performance should equal the performance of the original design.

The closed-loop system should degrade in a well-behaved manner from the linear case when saturations or substitutions occur.

Stability considerations for saturated systems are somewhat complex and only few authors have attempted to analyze the resulting closed-loop behavior. Some notable references are: Kothare and Morari (1997) give a review and unification of various stability criteria such as the circle criterion, the small gain theorem, describing functions, and the passivity theorem; Edwards and Postlethwaite (1997) apply the small gain theorem to their general AWBT framework; Åström and Rundqwist (1989) make use of describing function analysis and the Nyquist stability criterion to determine stability for the observer-based approach; Niu and Tomizuka (1998) are application oriented and are concerned with Lyapunov stability and robust tracking in the presence of actuator saturation; and finally, Kapoor et al. (1998) present a novel synthesis procedure and consider stability.
3 A conditionally stable system is characterized as one in which a reduction in the loop gain may cause instability.


2.2.1 Ad hoc methods


A number of more or less ad hoc methods exist and will briefly be described in the
following.
Stop integration at saturation

Intuitively, a solution to integrator windup due to actuator saturations is to turn off the integral action as soon as the actuator saturates. This yields the following PID algorithm (Walgama and Sternby, 1990):

u(k) = u_PD(k) + u_I(k)

u_I(k+1) = { u_I(k) + K_I e(k)   if no saturation
           { u_I(k)              if saturation.            (2.1)

An obvious drawback of this method is that the control signal may lock in saturation indefinitely since the integrator is not being updated. A solution to this problem would be to stop the integration only when the control signal actually becomes more saturated, and to update the integrator if the control signal de-saturates. This, of course, depends on the sign of the tracking error and which actuator limit is being violated, see Table 2.1.
Saturation    Tracking error    Integral action
u ≤ u_min     positive          update
u ≤ u_min     negative          stop
u ≥ u_max     positive          stop
u ≥ u_max     negative          update

Table 2.1. Stop integration at saturation depending on saturation violation and tracking error.
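A sketch of the refined rule, combining (2.1) with the sign logic of Table 2.1 (the interface names are mine, and the sign convention assumes e = r − y with positive integral gain):

```python
def integrate_conditionally(u_I, u, e, K_I, u_min, u_max):
    """Update the integral state only when integration would drive the
    control signal away from the violated limit (Table 2.1)."""
    if u >= u_max and e > 0:      # saturated high, positive error: stop
        return u_I
    if u <= u_min and e < 0:      # saturated low, negative error: stop
        return u_I
    return u_I + K_I * e          # otherwise update as in eq. (2.1)

# The integrator is frozen while pushing further into saturation ...
assert integrate_conditionally(3.0, 5.0, 1.0, 0.5, -1.0, 1.0) == 3.0
# ... but resumes updating as soon as the error changes sign
assert integrate_conditionally(3.0, 5.0, -1.0, 0.5, -1.0, 1.0) == 2.5
```

Unlike the plain stop-at-saturation rule, this version cannot lock the control signal in saturation, since a sign change of the tracking error immediately re-enables the integrator.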


Incremental Algorithms

Usually, a controller calculates an absolute value that is fed to the actuator and, as seen, this may cause windup. An alternative is to implement an incremental algorithm that only evaluates the increments in the control signal since the previous sample. Basically, the integrator is moved outside of the controller. This is, for example, inherent when using a stepper motor since it accepts incremental signals. Note that only the risk of windup due to the integrator is removed, whereas any slow or unstable state remaining in the incremental controller may still cause problems. For a PID controller, though, moving the integrator outside of the controller is probably sufficient.
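A minimal sketch of a velocity-form (incremental) PI algorithm; the summation to an absolute actuator command happens at the actuator side, where it can be clipped directly. Names, gains, and the error sequence are illustrative.

```python
def pi_increment(e, e_prev, K, Ti, T):
    """Velocity-form PI: returns the change in control signal,
    du(k) = K*(e(k) - e(k-1)) + (K*T/Ti)*e(k)."""
    return K * (e - e_prev) + K * T / Ti * e

# The actuator (e.g. a stepper motor) accumulates and limits the command,
# so there is no integral state inside the controller that can wind up.
u_hat, e_prev = 0.0, 0.0
for e in [1.0, 0.5, 0.2]:
    du = pi_increment(e, e_prev, K=2.0, Ti=1.0, T=0.1)
    u_hat = max(-1.0, min(u_hat + du, 1.0))   # integration and clipping at the actuator
    e_prev = e
print(round(u_hat, 2))
```

Because the accumulated command is clipped where it is accumulated, it can never drift beyond the actuator range.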
Conditional Integration

Åström and Rundqwist (1989) mention this method. The idea is to only apply integral action when the tracking error is within certain specified bounds. Intuitively, the bounds can be set such that the actuator does not saturate given the instantaneous or predicted value of the system output. Basically, the limits on the actuator are translated into limits on the tracking error. These limits form the proportional band, and the integration is then conditioned on the tracking error being within the proportional band. A disadvantage of this method is the possibility of chattering, though the introduction of a hysteresis or similar can help reduce that.

2.2.2 Classical Anti-Windup


For PI and PID controllers, a classical approach is known as back-calculation, tracking, or classical anti-windup. The method was first described by Fertik and Ross (1967) and is mentioned in most introductory literature on the subject, e.g. Åström and Wittenmark (1997) or Franklin et al. (1991).

To avoid windup, an extra feedback loop is added by feeding the difference between the control output u and the plant input û back to the integrator through an adjustable gain 1/τ_t. This will make the controller track the mode where the control output equals the plant input. The control signal is recalculated to a new value and will, during saturation, eventually track the saturation limit. The scheme is presented in Figure 2.2 for a PID controller. The gain 1/τ_t governs the rate at which the integrator will reset.

A text book version of the PI controller (the D-term is omitted for simplicity) is


[Figure: PID structure where the tracking error e_t = û − u is fed back through the gain 1/τ_t and added to the integrator input]

Figure 2.2. A PID controller structure with classical anti-windup.

u = K ( e + (1/τ_i) ∫₀ᵗ e(t)dt )                         (2.2)
e = r − y,                                               (2.3)

or in the Laplace domain

u(s) = K ((1 + τ_i s)/(τ_i s)) e(s).                     (2.4)

Adding saturation û = sat(u) and the AW tracking loop we get

u = K ( e + (1/τ_i) ∫₀ᵗ [ e(t) + (τ_i/(K τ_t))(û − u) ] dt )        (2.5)
û = sat(u),                                                          (2.6)

and in the Laplace domain, (2.5) yields

u(s) = K e(s) + (K/(τ_i s)) e(s) + (1/(τ_t s))(û(s) − u(s))                         (2.7)
⇒ u(s) = (K τ_t (1 + τ_i s))/(τ_i (1 + τ_t s)) e(s) + (1/(1 + τ_t s)) û(s)          (2.8)
       = (τ_t s/(1 + τ_t s)) K ((1 + τ_i s)/(τ_i s)) e(s) + (1/(1 + τ_t s)) û(s).   (2.9)

This clearly shows how any integrator in the PI controller is canceled and replaced with a first order lag with time constant τ_t. This also shows that the method only


applies to integral action, since the transfer function from e to u remains unstable if the controller has any unstable poles besides the integrator.

No theoretic guidelines exist for choosing the gain τ_t. The designer must exclusively rely on simulations and on-line tuning based on the closed-loop nonlinear system response. A simple choice would be to select the AW gain τ_t = τ_i. This simplifies (2.8) to

u(s) = K e(s) + (1/(1 + τ_i s)) û(s).                    (2.10)

This alternative AWBT implementation is shown in Figure 2.3 for a scalar PI controller.

[Figure: PI controller where û is fed back through the filter 1/(1 + τ_i s) and added to the proportional term]

Figure 2.3. PI controller with simple AWBT compensation.

For a PID controller of the form u(s) = P(s)I(s)D(s)e(s) the following decomposition can be carried out:

u(s) = K ((1 + τ_i s)/(τ_i s)) ((1 + τ_d s)/(1 + ατ_d s)) e(s)                      (2.11)
⇒ u(s) = K ( 1/(τ_i s) + (τ_d s + 1 + (1 − α)τ_d/τ_i)/(1 + ατ_d s) ) e(s),          (2.12)

which makes the controller applicable to the classical AWBT compensator.


The previously mentioned PI and PID controllers are 1 DoF and are not suited for actual implementation. A more flexible 2 DoF PID algorithm is given as





u(s) = K ( βr(s) − y(s) + (1/(τ_i s))(r(s) − y(s)) − (τ_d s/(1 + ατ_d s)) y(s) )
       + (1/(τ_t s))(û(s) − u(s)),                       (2.13)

where τ_i, τ_d, τ_t, α, and β are tunable parameters.

An example of a discretization of this controller with the AWBT scheme and sampling time T is:

P(k) = K (βr(k) − y(k))                                                  (2.14)
D(k) = (ατ_d/(ατ_d + T)) D(k−1) − (Kτ_d/(ατ_d + T)) (1 − q⁻¹) y(k)       (2.15)
I(k+1) = I(k) + (KT/τ_i) e(k) + (T/τ_t)(û(k) − u(k))                     (2.16)
u(k) = P(k) + I(k) + D(k)                                                (2.17)
û(k) = sat(u(k)),                                                        (2.18)

which attractively implements the proportional, derivative and integral terms separately.
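The recursion (2.14)-(2.18) translates directly into code. The sketch below is a straightforward implementation of those five equations; the class name and the parameter values in the example are arbitrary.

```python
class AwbtPid:
    """Discrete 2 DoF PID with back-calculation anti-windup, eqs. (2.14)-(2.18)."""
    def __init__(self, K, ti, td, tt, alpha, beta, T, u_min, u_max):
        self.K, self.ti, self.td, self.tt = K, ti, td, tt
        self.alpha, self.beta, self.T = alpha, beta, T
        self.u_min, self.u_max = u_min, u_max
        self.I = 0.0          # integral state I(k)
        self.D = 0.0          # filtered derivative state D(k-1)
        self.y_prev = 0.0

    def update(self, r, y):
        ad = self.alpha * self.td
        P = self.K * (self.beta * r - y)                                         # (2.14)
        self.D = (ad * self.D - self.K * self.td * (y - self.y_prev)) / (ad + self.T)  # (2.15)
        u = P + self.I + self.D                                                  # (2.17)
        u_hat = max(self.u_min, min(u, self.u_max))                              # (2.18)
        # (2.16): integral update for the next sample, with AW tracking term
        self.I += self.K * self.T / self.ti * (r - y) + self.T / self.tt * (u_hat - u)
        self.y_prev = y
        return u_hat

pid = AwbtPid(K=2.0, ti=1.0, td=0.1, tt=1.0, alpha=0.1, beta=1.0,
              T=0.05, u_min=-1.0, u_max=1.0)
u0 = pid.update(r=1.0, y=0.0)     # large error: the output clips at u_max
print(u0)  # 1.0
```

Note that (2.16) defines I(k+1), so the integral state is updated after u(k) and û(k) have been formed; the tracking term (T/τ_t)(û − u) then bleeds off the part of the integral that the actuator could not deliver.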

2.2.3 Observer-based Anti-Windup


A number of approaches to AWBT use state space theory. Åström and Rundqwist (1989) describe an observer-based approach to anti-windup. Figure 2.4 shows a traditional state space controller consisting of a full-order state observer and a state feedback. When the plant input û is different from the controller output u (upon saturation or transfer) the windup problem is easily understood: the estimated states given by the observer will not reflect the control signal fed to the plant and therefore will not reflect the actual system states. This may lead to inconsistency and an incorrect control signal resulting in windup. Given such a controller structure it is simple to avoid the problem by feeding back the saturated control signal û to the observer rather than the control output u. The estimated states will then be correct, the consistency between observer and system states is intact, and windup is avoided.

The result is given by:

d x̂/dt = A x̂ + B û + K (y − C x̂)
u = −L x̂
û = sat(u),                                              (2.19)

[Figure: observer and state feedback loop; the observer produces x̂, the controller computes u from x̂, followed by the actuator (output û) and the plant]

Figure 2.4. Traditional combination of an observer and a state feedback. In presence of saturation, an inconsistency exists between the observer and system states.

where x̂ is the estimated state vector, (A, B, C) describes the system dynamics, K is the observer gain and L is the state feedback gain.

Not all state space controllers are given as a combination of an observer and a state feedback. As described in Åström and Rundqwist (1989) the controller in (2.19) has high frequency roll-off, meaning the transfer function of the controller is strictly proper4. The technique can be extended to a general state space controller, with constant high frequency gain, given by the following structure

ẋ_c = F x_c + G_r r − G_y y                              (2.20)
u = H x_c + J_r r − J_y y,                               (2.21)

where x_c is the state vector for the controller and F, G_r, G_y, H, J_r, and J_y represent the dynamics of the controller.

To avoid windup, the controller is changed by feeding back the difference between u and û to the controller. This results in the following controller

ẋ_c = F x_c + G_r r − G_y y + M (û − u)                              (2.22)
    = (F − MH) x_c + (G_r − M J_r) r − (G_y − M J_y) y + M û         (2.23)
u = H x_c + J_r r − J_y y                                            (2.24)
û = sat(u),                                                          (2.25)

where M is an appropriately dimensioned gain matrix. When û = u the controller reverts to the original controller formulation. Given a gain matrix M chosen such

4 According to Doyle et al. (1992) a transfer function G is said to be strictly proper if G(i∞) = 0 (the degree of the denominator > the degree of the numerator), proper if G(i∞) is finite, and biproper if G and G⁻¹ are both proper (degree of denominator = degree of numerator).


that (F − MH) has stable eigenvalues (is Hurwitz), windup or bumping transfer is to some degree avoided. Walgama and Sternby (1990) have exposed this observer property inherent in a class of AWBT compensators, which allows for a unified investigation of their performance. Figure 2.5 shows the scheme applied to a 1 DoF state space controller (unity feedback).

[Figure: block diagram of the controller state equation with the term M(û − u) added at the integrator input]

Figure 2.5. 1 DoF state space controller with observer-based AWBT compensation.

For an input-output RST controller structure, a similar result exists (Walgama and Sternby, 1990):

A_aw(q⁻¹) u(k) = T(q⁻¹) r(k) − S(q⁻¹) y(k) + [A_aw(q⁻¹) − R(q⁻¹)] û(k)    (2.26)
û(k) = sat[u(k)],                                                          (2.27)

where A_aw is the anti-windup observer polynomial.


Selection of the Anti-windup observer gain M

In Åström and Rundqwist (1989) no design guidelines are given on how to select the gain M except for the fact that F − MH should be stable. A more direct design procedure was recently proposed in Kapoor et al. (1998). The idea is to analyze the


mismatch between the constrained and unconstrained closed-loop system. Assuming unity feedback (e = r − y) the unconstrained closed-loop system with state z^u = [x; x_c]^T is given by

ż^u = [ A − BJC   BH ] z^u + [ BJ ] r                    (2.28)
      [ −GC        F  ]       [ G  ]
    = A_cl z^u + B_cl r                                  (2.29)
u^u = [ −JC   H ] z^u + Jr = K_cl z^u + Jr               (2.30)
y^u = [ C   0 ] z^u = C_cl z^u,                          (2.31)

where (A, B, C) and (F, G, H, J) represent the dynamics of the system and the controller, respectively. The constrained closed-loop system yields:

ż = [ A − BJC   BH ] z + [ BJ ] r + [ B ] (û − u)        (2.32)
    [ −GC        F  ]     [ G  ]     [ M ]
  = A_cl z + B_cl r + M_cl (û − u)                       (2.33)
u = K_cl z + Jr                                          (2.34)
y = C_cl z.                                              (2.35)

Finally, we introduce the mismatch between the constrained and unconstrained closed-loop systems as

z̃ = z − z^u,   ũ = u − u^u,   ỹ = y − y^u,              (2.36)

and subtracting (2.29) from (2.33) gives the system

d z̃/dt = A_cl z̃ + M_cl (û − u)                          (2.37)
        = (A_cl − M_cl K_cl) z̃ + M_cl (û − u^u)         (2.38)
ũ = K_cl z̃                                              (2.39)
ỹ = C_cl z̃.                                             (2.40)

Suppose A_cl can be Schur decomposed (A_cl = U Λ U⁻¹) and the Schur matrix Λ has real diagonal matrix blocks. Given a matrix T = [0 I] U⁻¹, where I is an identity matrix with as many rows and columns as there are controller states n_c, Kapoor et al. (1998) select the gain as

M = T₂⁻¹ T₁ B,                                           (2.41)

where T₂ is the last n_c columns of T and T₁ the remaining. This makes the closed-loop system asymptotically stable. Note that the choice of M is not necessarily unique or optimal in terms of performance but is only chosen from a stabilizing point of view.

2.2.4 Conditioning Technique


Hanus Conditioning Technique
A scheme named the Conditioning Technique is widely mentioned in the literature and was initially formulated in Hanus (1980) and further developed in Hanus et al. (1987) as an extension to the classical anti-windup compensator.

The principle behind the method is to modify the reference or set point r to the controller whenever the controller output u is different from the plant input û. The reference is modified such that if the new reference r^r had been applied to the controller, the actuator would not saturate (u would equal û). The conditioned reference is referred to as the realizable reference. At the expense of modifying the reference, the nonlinearity in the actuator is removed, so to speak, since the modified reference will keep the actuator in the linear region.

The resulting controller is here presented using a state space approach adopted from Hanus et al. (1987). The equivalent input-output approach with a RST controller structure is found in Ronnback (1993).
Now, consider the general state space controller structure in (2.20) and (2.21) with
^. When u^ 6= u the controller states xc will be inadequate and can
plant input u
decrease the quality of the control performance. In order to restore the states, a
^ = u:
new reference rr is calculated such that u

ẋ_c^r = F x_c^r + G_r r_r − G_y y    (2.42)
û = H x_c^r + J_r r_r − J_y y.    (2.43)

The controller output is assumed to equal the plant input due to the modified reference. Hanus et al. (1987) assume present realizability, giving

u = H x_c^r + J_r r − J_y y,    (2.44)

when using the correct states resulting from applying the realizable reference. Subtracting (2.44) from (2.43) yields:

2.2 / Anti-windup and Bumpless Transfer

û − u = J_r (r_r − r),    (2.45)

and assuming J_r is nonsingular, the realizable reference can be found as:

r_r = r + J_r^{-1}(û − u).    (2.46)

Combining (2.42), (2.44), and (2.46) gives the following controller:

ẋ_c^r = (F − G_r J_r^{-1} H) x_c^r − (G_y − G_r J_r^{-1} J_y) y + G_r J_r^{-1} û    (2.47)
u = H x_c^r + J_r r − J_y y    (2.48)
û = sat(u).    (2.49)

This is called the AWBT conditioned controller because the controller is conditioned to return to normal mode as soon as it can (Hanus et al., 1987). Refer to
Figure 2.6 for a schematic view of the method.

Figure 2.6. Hanus conditioning technique. C(s) represents the controller transfer function.

Comparing (2.47) with (2.23) shows that the Hanus conditioning technique is a
special case of the observer-based approach with M = G_r J_r^{-1}.
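To make the mechanism concrete, the following is a minimal discrete-time sketch of a PI loop with and without the realizable reference of (2.46). The first-order plant, the PI gains, the sampling period, and the saturation limit are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def simulate(conditioning, T=0.01, t_end=10.0, Kp=2.0, Ki=10.0,
             a=1.0, u_max=1.2, r=1.0):
    """Simulate dy/dt = -a*y + u_sat under PI control, optionally with
    Hanus conditioning of the integrator (illustrative parameters)."""
    n = int(t_end / T)
    y, xc = 0.0, 0.0               # plant output and integrator state
    y_log, u_log = [], []
    for _ in range(n):
        e = r - y
        u = xc + Kp * e            # controller output; direct term J_r = Kp
        u_sat = max(-u_max, min(u_max, u))
        if conditioning:
            # realizable reference r_r = r + Jr^{-1}(u_sat - u), cf. (2.46)
            r_r = r + (u_sat - u) / Kp
            xc += Ki * T * (r_r - y)
        else:
            xc += Ki * T * e       # plain integrator: winds up while saturated
        y += T * (-a * y + u_sat)  # forward-Euler plant step
        y_log.append(y)
        u_log.append(u)
    return np.array(y_log), np.array(u_log)

y_aw, u_aw = simulate(conditioning=True)
y_no, u_no = simulate(conditioning=False)
```

With conditioning the integrator state settles near the saturation limit instead of winding up, which removes most of the overshoot after desaturation.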
Generalized Conditioning Technique
A number of drawbacks of the conditioning technique have been reported in Walgama et al. (1992) and Hanus and Peng (1992):

- The matrix J_r has to be nonsingular, i.e. no time delay can exist in the controller from reference to output (Hanus and Peng, 1992; Walgama et al., 1992).
- The technique is inflexible (but simple) in terms of design, as there are no tuning parameters.
- The technique can only handle one saturation level, since the applied conditioned reference may saturate the system in the other direction of saturation. This is referred to as inherent short-sightedness (Walgama et al., 1992).

The case of a controller with time delays (J_r singular) is handled in Hanus and
Peng (1992), where the conditioning technique is modified for a general time delay
structure.
Walgama et al. (1992) propose two modifications to the technique to overcome the
three inherent problems. The first approach introduces cautiousness, so that the change in the modified reference is made smoother, and guarantees
a nonsingular J_r. This is done by introducing a tuning parameter ρ as follows:

r_r = r + (J_r + ρI)^{-1}(û − u),   0 ≤ ρ < ∞,    (2.50)

which leads to the controller:

ẋ_c = (F − G_r(J_r + ρI)^{-1}H) x_c − (G_y − G_r(J_r + ρI)^{-1}J_y) y + G_r(J_r + ρI)^{-1} û    (2.51)
u = H x_c + J_r r − J_y y    (2.52)
û = sat(u).    (2.53)

The parameter ρ indicates the degree of cautiousness: ρ = 0 results in the
original method, and ρ → ∞ results in r_r = r, whereby the effect of the
conditioning is completely removed. The method is referred to as the Conditioning
Technique with Cautiousness.
A second extension by Walgama et al. (1992) leads to the so-called Generalized
Conditioning Technique (GCT), which performs conditioning on a filtered reference
signal r_f instead of directly on the reference r. Walgama et al. (1992) use an RST
controller structure and modify the T-polynomial, but here we shall adopt the
state space approach used in Kothare et al. (1994) as it stays close to the
original presentation of the conditioning technique.
Let the 2 DoF state space controller with the filtered reference be defined by

ẋ_c = F x_c − G_y y    (2.54)
ẋ_f = F_f x_f + G_f r    (2.55)
u = H x_c + H_f x_f + J_f r + J_y y,    (2.56)


where (F_f, G_f, H_f, J_f) describe the filter dynamics. The conditioning technique
is now applied and the realizable reference is found as

r_r = r + J_f^{-1}(û − u).    (2.57)

Replacing r in (2.55) with this expression for r_r yields the following resulting
controller

ẋ_c = F x_c − G_y y    (2.58)
ẋ_f = (F_f − G_f J_f^{-1} H_f) x_f − G_f J_f^{-1} H x_c + G_f J_f^{-1} û − G_f J_f^{-1} J_y y    (2.59)
u = H x_c + H_f x_f + J_f r + J_y y    (2.60)
û = sat(u).    (2.61)

As pointed out by Walgama et al. (1992) no guidelines are available for choosing
the filter in terms of stability and closed-loop performance.

2.2.5 Generic Frameworks


Several authors have attempted to set up a general unifying framework for existing AWBT methods. The benefits from such a unification are many. A unification
would open up for establishing a general analysis and synthesis theory for AWBT
schemes, with connection to issues such as how design parameters influence stability, robustness, and closed-loop performance. Moreover, the designer can compare
and contrast existing methods.
The first strides towards a generalization were taken in Walgama and Sternby (1990),
who identified an observer inherent in a number of existing schemes (classical,
observer-based, Hanus conditioning, and others). This allowed the authors to generalize the considered schemes as special cases of the observer-based approach.
Framework of Kothare, Campo, Morari and Nett
In Kothare et al. (1994) a more methodical framework was put forward which
unified all known linear time-invariant AWBT schemes. A short introduction will
be given here. The parameterization is in terms of two constant matrices Λ_1 and
Λ_2, as opposed to the single parameter M of the observer-based approach. This adds
freedom to the design at the expense of clarity and usability. Consider first the
following state space notation



"

C (s) = H (sI

F) G + J =
1

"

C1 (s) C2 (s) =

F G
H J

(2.62)

F G1 G2
;
H J1 J2

(2.63)

to represent 1 DoF and 2 DoF controllers, respectively. For a controller with dynamics described by (F, G, H, J), Kothare et al. (1994) propose the AWBT conditioned controller given by

u = U(s)e + (I − V(s))û,    (2.64)

where U(s) and V(s) are given by

[V(s)  U(s)] = [F − K_1 H  K_1  G − K_1 J; K_2 H  K_2  K_2 J]    (2.65)

and

K_1 = Λ_1(I − Λ_2)^{-1}
K_2 = (I + Λ_2)^{-1}.    (2.66)

See Kothare et al. (1994) and Kothare (1997) for how to select the two design parameters Λ_1 and Λ_2 such that existing schemes become special cases of the framework,
and refer to Kothare and Morari (1997, 1999) for applications of the framework.
The framework is fairly complex, and how choices of Λ_1 and Λ_2 influence performance is unclear. However, as mentioned before, the framework has initiated
general stability considerations for AWBT systems.
Framework of Edwards and Postlethwaite
In Edwards and Postlethwaite (1996) another framework for AWBT schemes is
proposed. Figure 2.7 shows the framework for a controller C, which for a 1 DoF
controller is described by

C(s) = H^T(sI − F)^{-1}G + J,    (2.67)

for input r − y, and for a 2 DoF controller by

Figure 2.7. The framework of Edwards and Postlethwaite.

C(s) = [C_1(s)  C_2(s)] = [H^T(sI − F)^{-1}G_1 + J_1   H^T(sI − F)^{-1}G_2 + J_2],    (2.68)

for inputs r and y.

The gap between û and u indicates a nonlinearity, i.e. a saturation or a controller substitution. The article states that all existing AWBT schemes can be unified in this
framework, including the previously described framework by Kothare et al. (1994).
The objective of the AWBT design is to choose an appropriate transfer function
R(s), likely to contain dynamics.
Applying an R(s) compensator to a controller C(s), the transfer function from e
and û to u is given explicitly as

u(s) = (I + R(s))^{-1}C(s)e(s) + (I + R(s))^{-1}R(s)û(s).    (2.69)

Table 2.2 shows the resulting R compensators for some of the mentioned techniques.

The framework does not necessarily represent the way the various techniques should
be implemented in practice, since the choice of R may cause unstable pole/zero
cancellations which are not necessarily exact. However, the framework may be
valuable as a platform for comparison of methods and establishment of stability
results.

Scheme                                     R compensator
Classical anti-windup                      1/(T_t s)
Hanus conditioning technique               H(sI − F)^{-1} G_r J_r^{-1}
Conditioning technique with cautiousness   H(sI − F)^{-1} G_r (J_r + ρI)^{-1}
Observer-based anti-windup                 H(sI − F)^{-1} M
Framework by Kothare et al. (1994)         H(sI − F)^{-1} K_1 K_2^{-1} + K_2^{-1} − I

Table 2.2. Resulting R compensators for different AWBT schemes.

2.2.6 Bumpless Transfer


Most authors simply state that anti-windup and bumpless transfer are the same.
This section will elaborate on some similarities and discrepancies and illustrate
specific bumpless transfer techniques. Consider the following two statements:

Anti-windup should keep the controller output close to the plant input during all
operational circumstances in order to avoid performance degradation and
stability problems upon saturation.

Bumpless transfer should keep the latent⁵ (inactive) controller's output close to
the active controller's output (which is the plant input) in order to avoid transients
after switching from one controller to another.

This perspective justifies a unified theoretical treatment of anti-windup and bumpless transfer, but from a practical or implementation point of view the two problems
seem somewhat different. The anti-windup problem is very local to the controller
and the actuator, while the bumpless transfer problem involves a suite of controllers,
possibly with different purposes such as velocity/position, manual/automatic, or continuous/discrete, and likely with different sampling times. Changing controller parameters on-line is also a bumpless transfer problem.
Actuator saturation is a condition that, even with an AWBT scheme in place, may
exist over a period of time, whereas bumpless transfer can be viewed as an initial condition problem consisting of choosing the appropriate initial values for the
latent controller's states such that, at the point of transfer, the latent controller's
output u_L equals the active controller's output u_A. So, basically, using the words
of Graebe and Ahlen (1996), the essence of bumpless transfer is the desire to
compute the state of a dynamical system so its output will match another signal's
value.

⁵ Along the lines of Graebe and Ahlen (1994), the word latent will be used to denote the idle,
inactive controller.
Initial Conditions
For a PI controller, with only one state, the initial conditions are easily determined.
Consider the discrete-time PI controller

u_L(k) = (s_0 + s_1 q^{-1})/(1 − q^{-1}) e(k).    (2.70)

At the time of transfer t = kT we aim to have u_L(k) = u_A, where u_A is the active
controller's output. Trivial calculations give the following expression for u_L(k−1),
which is the only unknown parameter:

u_L(k−1) = u_A − s_0 e(k) − s_1 e(k−1).    (2.71)

The signals e(k) and e(k−1) are infected by measurement noise, and
generally for higher-order controllers the noise sensitivity will increase.
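As a numerical sketch of (2.70)–(2.71), with purely hypothetical coefficients and signal values:

```python
# Back-calculate the PI state for bumpless transfer, eqs. (2.70)-(2.71).
# The controller recursion is u_L(k) = u_L(k-1) + s0*e(k) + s1*e(k-1);
# all numbers below are illustrative, not from the thesis.
s0, s1 = 1.8, -1.2         # PI polynomial coefficients (hypothetical)
u_A = 0.7                  # active controller's output at the transfer instant
e_k, e_km1 = 0.05, 0.08    # two most recent control errors

# eq. (2.71): the previous latent output is the only unknown
u_L_prev = u_A - s0 * e_k - s1 * e_km1

# running the latent PI one step from this state reproduces u_A exactly
u_L = u_L_prev + s0 * e_k + s1 * e_km1
```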

For a general higher-order controller we cannot determine the values of all the states
of the controller from the point of transfer alone. For an n_c-order controller we need a data
history of n_c samples. Bumpless transfer requires that the controller states are initialized such that

[u_L(k); u_L(k−1); …; u_L(k−n_c+1)] = [u_A(k); u_A(k−1); …; u_A(k−n_c+1)],    (2.72)

or

U_L = U_A.    (2.73)

For a state space formulated controller

x_c(k+1) = F x_c(k) + G e(k)    (2.74)
u_L(k) = H x_c(k) + J e(k),    (2.75)


the state x_c must be determined at the time of transfer such that equality (2.73) is
fulfilled. To do so we need to calculate x_c(k−i|k), i ≥ 0, which gives previous
state vectors based on the current one. From (2.74) we get

x_c(k−1|k) = F^{-1}(x_c(k) − G e(k−1)),    (2.76)

and trivial iterations give the following general expression

x_c(k−i|k) = F^{-i} x_c(k) − Σ_{j=0}^{i−1} F^{-(i−j)} G e(k−j−1),   x_c(k|k) = x_c(k).    (2.77)

Using (2.75) we have

u_L(k−i|k) = H x_c(k−i|k) + J e(k−i).    (2.78)

If (2.77) and (2.78) are put into (2.73) we get

U_A = 𝓗 x_c(k) + 𝓙 E,    (2.79)

where

𝓗 = [H; H F^{-1}; H F^{-2}; …; H F^{-(n_c−1)}],   E = [e(k); e(k−1); …; e(k−n_c+1)],    (2.80)

and

𝓙 = [J  0  0  ⋯  0;
     0  J − HF^{-1}G  0  ⋯  0;
     0  −HF^{-2}G  J − HF^{-1}G  ⋯  0;
     ⋮  ⋮  ⋱  ⋮;
     0  −HF^{-(n_c−1)}G  ⋯  −HF^{-2}G  J − HF^{-1}G].    (2.81)
Now, given full rank of 𝓗, the state vector is easily determined:

x_c(k) = 𝓗^{-1}(U_A − 𝓙 E).    (2.82)

This method has an obvious drawback, however: its applicability relies upon access to the latent controller, as the controller states must be set to
the estimated values at transfer. The next method overcomes this problem.
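A small numerical sketch of (2.79)–(2.82) for a hypothetical second-order (n_c = 2) latent controller; it initializes x_c(k) and verifies that the current and back-cast outputs reproduce the stored active outputs:

```python
import numpy as np

# Hypothetical latent controller x_c(k+1) = F x_c + G e, u_L = H x_c + J e
F = np.array([[0.9, 0.2], [0.0, 0.7]])
G = np.array([[1.0], [0.5]])
H = np.array([[1.0, 0.4]])
J = np.array([[0.3]])

# Stored data: active outputs U_A = [u_A(k); u_A(k-1)] and errors E = [e(k); e(k-1)]
U_A = np.array([[0.8], [0.6]])
E = np.array([[0.1], [-0.2]])

Fi = np.linalg.inv(F)
# eq. (2.80): script-H stacks H and H F^{-1}; eq. (2.81): script-J for n_c = 2
Hs = np.vstack([H, H @ Fi])
Js = np.block([[J, np.zeros((1, 1))],
               [np.zeros((1, 1)), J - H @ Fi @ G]])

# eq. (2.82): initialize the current controller state
x_c = np.linalg.solve(Hs, U_A - Js @ E)

# check against eqs. (2.76)-(2.78): current and back-cast outputs
u0 = H @ x_c + J @ E[0:1]          # u_L(k|k)
x_prev = Fi @ (x_c - G @ E[1:2])   # x_c(k-1|k), eq. (2.76)
u1 = H @ x_prev + J @ E[1:2]       # u_L(k-1|k)
```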


Dynamic Transfer
Graebe and Ahlen (1994) suggest a method developed for bumpless transfer between alternative controllers; it is not intended to be used in an anti-windup
context. In Graebe and Ahlen (1996), similarities between anti-windup and bumpless transfer are discussed. It can be argued that the method is a special case of the
generic frameworks, but the practical usefulness of the method should give it more
credit.
The scenario is the switch from an active controller C_A to a new latent (inactive)
controller C_L. The idea is to let the output u_L of the latent controller C_L track the
output u_A of the active controller C_A. This is done by using a 2 DoF tracking loop
as shown in Figure 2.8, where u_A is the reference to the tracking controller, C_L is
regarded as the system to be controlled, and the control plant error (r − y) acts as
a disturbance.

Figure 2.8. Bumpless transfer using a controller tracking loop.

The tracking loop consists of the triplet (R_L, S_L, T_L), in which T_L = S_L = 1 often
is sufficient. The controller output u_L from C_L is governed by

u_L = (T_L C_L)/(R_L + S_L C_L) u_A + (R_L C_L)/(R_L + S_L C_L) (r − y).    (2.83)

Assuming that u_L tracks u_A, the transfer can now take place without bumps.
Adding another tracking loop for C_A tracking C_L, the transfer can be made bi-directional. The bi-directional scheme is presented in Figure 2.9, where C_A^T and
C_L^T denote the active and latent tracking controllers, respectively.

Figure 2.9. Bi-directional bumpless transfer. The active control loop is indicated with bold
lines. Subscripts A and L denote active and latent controller, respectively, and superscript
T indicates tracking controller. At the point of transfer, the four switches (dashed lines) are
activated.

The advantages of this method are:

- In principle, no a priori knowledge of the controllers is needed to design
the tracking loops. The only caution is that no time delay may exist from input to
output in the system controllers (the tracking controllers are designed for a
proper system).
- Access to the controllers is not needed.
- Switching from manual control to automatic control and/or from continuous-time control to discrete-time control is possible.
- Often simple tracking controllers with T_L = S_L = 1 are sufficient.

However, for simple low-order controllers, implemented in a way such that access
to internal signals (and overriding of these) is easy, the initial condition approach
is an uncomplicated alternative.

2.2.7 Comparison of Methods


Numerous methods have been mentioned in the previous sections. As seen, they
all have similarities, since generic frameworks can encompass them all. However,
each method has different properties regarding simplicity, applicability, tuning, etc.
Table 2.3 sums up features and peculiarities of the mentioned methods.
Method: Classical
Param.: T_t
Advantages: Easy to use; scalar parameter.
Disadvantages: Only works for integral action (PI/PID controllers); the controller must be stable except for the integrator.

Method: Hanus conditioning technique
Param.: none
Advantages: Easy to use; no increase in controller complexity as no tuning parameter is available.
Disadvantages: No design freedom in terms of AWBT parameter; inherent short-sightedness; cannot handle time delays from reference to controller output; only applicable to biproper minimum phase controllers.

Method: Conditioning technique with cautiousness
Param.: ρ
Advantages: Makes the conditioned method more flexible and applicable to biproper controllers; scalar tuning parameter with intuitive interpretation (ρ = 0 gives the Hanus scheme, ρ → ∞ gives no AWBT feedback).
Disadvantages: No indications on how to choose ρ except for trial and error.

Method: Generalized conditioning technique
Param.: (F_f, G_f, H_f, J_f)
Advantages: Overcomes the short-sightedness problem in the Hanus conditioning technique and can handle both saturation levels.
Disadvantages: The filter adds complexity to the design; the choice of filter is unspecific.

Method: Observer-based
Param.: M or A_aw
Advantages: Easy to use; unifies a number of the different approaches; intuitively appealing since the observer concept is well-known; some results exist on stability (Kapoor et al., 1998) and robustness (Niu and Tomizuka, 1998).
Disadvantages: No optimal way of choosing the AWBT observer gain for a general system.

Method: Framework by Kothare et al. (1994)
Param.: Λ_1, Λ_2
Advantages: General framework, all other schemes are special cases; opens up for a unified treatment and theory; results on stability.
Disadvantages: Fairly complex parameterization; not clear how to benefit from the framework with respect to design synthesis and analysis.

Method: Framework by Edwards and Postlethwaite (1997)
Param.: R
Advantages: General framework; simple, only one parameter; opens up for a unified treatment and theory; results on stability.
Disadvantages: Not immediately suitable for implementation; the parameter R changes between a scalar, a matrix, and a transfer function depending on the method the framework unifies.

Table 2.3: Comparison of AWBT schemes.

2.3 Predictive Controller Design


Predictive control is based on the receding horizon principle: at each
sampling instant, the current controller output is obtained by solving an open-loop
optimal control problem of finite horizon⁶. The input to the controller is current
state information, and the output is a sequence of future control actions, of which only
the first element in the sequence is applied to the system.

⁶ The original formulation of the predictive controller used a finite horizon, but since then infinite
horizons have been used for stability proofs.
Predictive control belongs to the class of model-based designs⁷, where a mathematical model of the system is used to predict the behavior of the system and to
calculate the controller output such that a given control criterion is minimized; see
Figure 2.10.

Figure 2.10. Model and criterion based control for a possibly time-varying system.

For some time the predictive control strategy has been a popular choice in constrained control systems because of its systematic way of handling hard constraints
on system inputs and states. Especially the process industries have applied constrained predictive control, as plants there are often sufficiently slowly sampled to
permit implementations.
The theory of predictive control has evolved immensely over the last couple of
decades, with many suggested methods sharing related features such as system output prediction. A survey of methods is given in García et al. (1989). One version has turned
out to be particularly widely accepted: the now well-known Generalized Predictive
Controller (GPC). It was introduced in Clarke et al. (1987a,b) and was conceptually based on earlier algorithms in Richalet et al. (1978) and Cutler and Ramaker
(1980). Since the appearance of the GPC, many extensions and add-ons have been
suggested. These were unified in Soeterboek (1992) within the so-called Unified
Predictive Controller (UPC). This controller is slightly more flexible in terms of
model structure, criterion selection and parameter tuning than the GPC. Bitmead
et al. (1990) critically exposed generalized predictive control and argued that the
control problem of unconstrained linear systems is handled just as well by Linear
Quadratic Gaussian (LQG) control design methods.

⁷ Predictive control is also referred to as Model Predictive Control (MPC).
Initially, predictive control was seen as an alternative to pole placement and generalized minimum-variance control in self-tuning and adaptive systems. The GPC
had such pleasant properties as being applicable to non-minimum phase or open-loop unstable systems and to systems with unknown dead-time or system order, to
name a few. Later, the predictive controller's ability to handle input- and state-constrained systems has received much attention; see for example Mayne et al.
(2000) for a comprehensive survey regarding stability and optimality for constrained predictive control, and an extensive list of references. Quadratic Programming (QP) can straightforwardly solve the constrained problem (García and Morshedi, 1984), but the computational burden may be demanding (Tsang and Clarke,
1988), and therefore QP is primarily suitable for relatively slowly sampled processes. This raises a need for other more widely applicable, possibly sub-optimal,
solutions with less computational complexity (Poulsen et al., 1999). The trade-off
between closed-loop performance and computational speed is an important issue.

2.3.1 Overview of Criteria Selection


Predictive controllers, be it the GPC, the UPC or others, are conceived by
minimizing a criterion cost function J:

u_optimal = arg min_u J.    (2.84)

This is an alternative to specifying the closed-loop poles, as is the case for the
pole placement controller, or fitting frequency responses, as is the case for lead/lag
controllers. The criterion must be selected so that the called-for performance and
robustness are accomplished.
The pioneer in this field was the Minimum Variance (MV) controller by Åström
(1970), based on the following criterion,

J(k) = E{(y(k+d) − r(k+d))²},    (2.85)

where y(k) is the system output, r(k) is the reference signal, d is the time delay
from input to output, and E{·} denotes the expectation. As is well-known, this
simple criterion yields controllers only suitable for minimum phase systems, as
excessive control signals are not penalized. Consequently, the criterion was slightly
modified,


J(k) = E{(y(k+d) − r(k+d))² + ρu²(k)},    (2.86)

where u(k) is the controller output. This criterion is likely to reduce the magnitude
of the control signal but introduces the disadvantage of penalizing non-zero u(k).
For systems lacking an integrator this leads to steady-state errors when tracking a
non-zero reference.
The GPC criterion introduced in Clarke et al. (1987a) is of the form:

J(k) = E{ Σ_{j=N1}^{N2} (y(k+j) − r(k+j))² + Σ_{j=1}^{Nu} λ(Δu(k+j−1))² },    (2.87)

subject to the equality constraint

Δu(k+i) = 0,   i = Nu, …, N2,    (2.88)

where N1 is the minimum costing horizon, N2 the maximum costing (or prediction) horizon, Nu the control horizon, and Δ = 1 − q^{-1} denotes the differencing
operator. Notice that the minimization now involves several future system and controller outputs. This leads to controllers suitable for a much larger range of systems
than the MV controller. Since only the differenced controller output Δu(k) is penalized
in the criterion, the resulting controller will allow non-zero outputs, and reference
tracking is improved.
Although versatile, numerous extensions to the criterion (2.87) have been made. In
Soeterboek (1992) this led to the following unified criterion function

J(k) = E{ Σ_{j=N1}^{N2} ( (P_n(q^{-1})/P_d(q^{-1})) y(k+j) − r(k+j) )² + Σ_{j=1}^{N2−k} ( (Q_n(q^{-1})/Q_d(q^{-1})) u(k+j−1) )² },    (2.89)

subject to the equality constraint

γ(q^{-1}) (P_n(q^{-1})/P_d(q^{-1})) u(k+i) = 0,   i = Nu, …, N2 − k.    (2.90)

Special here is the filtering of y(k) and u(k) with the filters P_n(q^{-1})/P_d(q^{-1}) and Q_n(q^{-1})/Q_d(q^{-1}),
respectively. They are intended to be used to penalize the signals at different frequencies in order to improve performance and robustness tuning. The role of the
polynomial γ(q^{-1}) is to allow for different constraints on the remaining controller
outputs from the control horizon to the prediction horizon. Two natural choices are available:

1. γ(q^{-1}) = 1  ⇒  u(k+j) = 0 for j ≥ Nu.

2. γ(q^{-1}) = 1 − q^{-1}  ⇒  u(k+j) = u(k+Nu) for j ≥ Nu.

So far, the criteria have been based on knowledge of the input-output relationship
of the system. Provided a state space model is available, the criterion functions can
exploit this and penalize individual states. A useful criterion in terms of predictive
control could be

J(k) = E{ Σ_{j=N1}^{N2} x^T(k+j) Q x(k+j) + u_f^T(k+j) W u_f(k+j) },    (2.91)

subject to the equality constraint

u(k+i) = u(k+Nu),   i = Nu, …, N2,    (2.92)

where x(k) is the system state, Q and W are non-negative definite symmetric user-defined weight matrices of appropriate dimensions, and u_f(k) denotes the filtered
control signal. This criterion function does not include deviations of the output
from a reference signal r(k). One way of overcoming this is to augment the system
state vector with the states of the reference signal, by which the criterion (2.91)
again can be made to describe the problem.

2.3.2 Process Models


The criteria portrayed in the previous section are all based on a dynamical model
description. Given a time-invariant linear (or linearized) model, an analytical solution to the minimization problem of the criterion exists and can be computed once and
for all. The predictive controllers in Clarke et al. (1987a) and Soeterboek (1992)
were all based on polynomial input-output models such as the ARIMAX and Box-Jenkins models, while Hansen (1996) has derived the predictive control law based on the
following somewhat more general structure

A(q^{-1}) y(k) = q^{-d} (B(q^{-1})/F(q^{-1})) u(k) + (C(q^{-1})/D(q^{-1})) e(k),    (2.93)


with

A(q^{-1}) = 1 + a_1 q^{-1} + a_2 q^{-2} + … + a_{na} q^{-na}    (2.94)
B(q^{-1}) = b_0 + b_1 q^{-1} + b_2 q^{-2} + … + b_{nb} q^{-nb}    (2.95)
C(q^{-1}) = 1 + c_1 q^{-1} + c_2 q^{-2} + … + c_{nc} q^{-nc}    (2.96)
D(q^{-1}) = 1 + d_1 q^{-1} + d_2 q^{-2} + … + d_{nd} q^{-nd}    (2.97)
F(q^{-1}) = 1 + f_1 q^{-1} + f_2 q^{-2} + … + f_{nf} q^{-nf},    (2.98)

and d > 0. The system has input u(k), output y(k) and is disturbed by a noise
sequence e(k) with appropriate characteristics. Table 2.4 shows how this and other
less general models relate.
less general models relate.
General Model   ARX   ARMAX   ARIMAX   Box-Jenkins
A               A     A       A        1
B               B     B       B        B
C               1     C       C        C
D               1     1       Δ        D
F               1     1       1        F

Table 2.4. Process models obtained from the general linear structure.

Although the literature is predominated by the use of input-output descriptions,
other authors have concentrated on a state space approach (Chisci et al., 1996;
Bentsman et al., 1994). Some of the motivations for this are:

- An easier approach to MIMO systems.
- A complete internal description of the system, including possible internal oscillations or instabilities.
- A greater flexibility in describing the disturbance model.

This section shall only consider the discrete-time state space model

x(k+1) = A x(k) + B u(k)
y(k) = C x(k),    (2.99)

where x ∈ Rⁿ is the state vector of the system, u ∈ Rᵐ the input, and y ∈ Rˡ the
output. For simplicity, no noise description is included in the model. The
discrete-time model (2.99) is typically obtained by sampling a continuous-time
plant, assuming (due to computer control) that control signals are constant over the
sampling period.
In general, system models are obtained through physical modelling or system identification. In Nørgaard et al. (2000b) neural networks are used for system identification, and it is shown how predictive controllers are designed for these models.
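The zero-order-hold sampling step above can be sketched with the standard augmented-matrix-exponential construction; the double integrator and the sampling period are illustrative choices, and a truncated Taylor series stands in for a library matrix exponential:

```python
import numpy as np

def expm_taylor(M, terms=25):
    """Truncated Taylor series for the matrix exponential; adequate for
    the small ||M|| that a short sampling period gives."""
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for i in range(1, terms):
        term = term @ M / i
        E = E + term
    return E

def c2d_zoh(A, B, T):
    """Zero-order-hold sampling: Ad = e^{AT}, Bd = (int_0^T e^{As} ds) B.
    Both blocks are read off one exponential of an augmented matrix."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    Md = expm_taylor(M * T)
    return Md[:n, :n], Md[:n, n:]

# double integrator sampled at T = 0.1 s (illustrative)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Ad, Bd = c2d_zoh(A, B, 0.1)
```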

2.3.3 Predictions
To solve the predictive control problem posed by minimization of a criterion function, we need to compute the j-step prediction ŷ(k+j) (or equivalently x̂(k+j))
for j = N1, N1+1, …, N2. Prediction in the state space model (2.99) is straightforward. At present time k, consider the j-step state prediction at time k+j:

x̂(k+j|k) = A x̂(k+j−1|k) + B u(k+j−1|k),   x̂(k|k) = x(k)    (2.100)
         = A^j x(k) + [A^{j−1}B  ⋯  AB  B] U,    (2.101)

where

U = [u(k)  u(k+1)  ⋯  u(k+j−1)]^T.    (2.102)

Gathering the j-step predictors for j = 1, 2, …, N2 we get

[x̂(k+1|k); x̂(k+2|k); ⋮; x̂(k+N2|k)] = [A; A²; ⋮; A^{N2}] x(k) + [B  0  ⋯  0; AB  B  ⋯  0; ⋮  ⋱  ⋮; A^{N2−1}B  ⋯  AB  B] [u(k); u(k+1); ⋮; u(k+N2−1)]    (2.103)

or

X̂ = F x(k) + G U.    (2.104)

Here N1 = 1, but if this is not the case, the first N1 − 1 rows of G and F will be
null.
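The stacked predictor (2.103)–(2.104) can be sketched and cross-checked against a direct simulation; the second-order system and the input sequence below are hypothetical examples:

```python
import numpy as np

def prediction_matrices(A, B, N2):
    """Build F and G of eq. (2.103) so that Xhat = F x(k) + G U."""
    n, m = A.shape[0], B.shape[1]
    F = np.vstack([np.linalg.matrix_power(A, j) for j in range(1, N2 + 1)])
    G = np.zeros((N2 * n, N2 * m))
    for j in range(1, N2 + 1):        # block row j predicts x(k+j)
        for i in range(j):            # column i multiplies u(k+i)
            G[(j - 1) * n:j * n, i * m:(i + 1) * m] = \
                np.linalg.matrix_power(A, j - 1 - i) @ B
    return F, G

# hypothetical second-order system
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
N2 = 5
F, G = prediction_matrices(A, B, N2)

# cross-check against a direct run of x(k+1) = A x + B u
x = np.array([[1.0], [-0.5]])
U = np.array([[0.3], [-0.2], [0.1], [0.0], [0.4]])
Xhat = F @ x + G @ U

xs, X_sim = x.copy(), []
for j in range(N2):
    xs = A @ xs + B * U[j, 0]
    X_sim.append(xs.copy())
X_sim = np.vstack(X_sim)
```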


Introducing the control horizon Nu into the criterion changes (2.103) slightly. Define the vector of future controller outputs U_u,

U_u = [u(k)  u(k+1)  ⋯  u(k+Nu)]^T,    (2.105)

and recall that we have set u(k+i) = u(k+Nu) for i ≥ Nu. This gives us

X̂ = F x(k) + G_u U_u,    (2.106)

with

G_u = [B  0  ⋯  0;
       AB  B  ⋯  0;
       ⋮  ⋱  ⋮;
       A^{Nu}B  A^{Nu−1}B  ⋯  B;
       A^{Nu+1}B  A^{Nu}B  ⋯  AB + B;
       ⋮  ⋮;
       A^{N2−1}B  A^{N2−2}B  ⋯  Σ_{i=0}^{N2−Nu−1} A^i B].    (2.107)

2.3.4 Unconstrained Predictive Control

Typically, a predictive controller is based on a receding horizon principle: at
each sample the controller calculates the sequence u(k), u(k+1), …, u(k+Nu)
based on predictions of the system, but only the first element u(k) is used. At the next
sample the whole procedure is repeated.

Predictive Control with Control Horizon
As a first approach, consider the state space criterion function

J(k) = E{ Σ_{j=N1}^{N2} x̂^T(k+j) Q x̂(k+j) + u^T(k+j) W u(k+j) },    (2.108)

subject to the equality constraint

u(k+i) = u(k+Nu),   i = Nu, …, N2,    (2.109)

which equals (2.91) except that no control output filtering is present. Inserting the prediction (2.106) into the criterion function, we get a simple linear algebra problem

J(k) = X̂^T I_Q X̂ + U_u^T I_W U_u,    (2.110)

where I_Q is a block diagonal matrix with N2 instances of Q in the diagonal and
I_W likewise with respect to W. Minimizing with respect to U_u yields

U_u = −(G_u^T I_Q G_u + I_W)^{-1} G_u^T I_Q F x(k).    (2.111)
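The control law above can be sketched numerically; the discretized double integrator and the weights are assumed for illustration, and the check exploits that (2.111) is, by construction, the minimizer of the criterion (2.110):

```python
import numpy as np

def control_matrices(A, B, N2, Nu):
    """F and G_u of eqs. (2.106)-(2.107), with u(k+i) = u(k+Nu) for i >= Nu."""
    n, m = A.shape[0], B.shape[1]
    F = np.vstack([np.linalg.matrix_power(A, j) for j in range(1, N2 + 1)])
    Gu = np.zeros((N2 * n, (Nu + 1) * m))
    for j in range(1, N2 + 1):
        for i in range(j):
            col = min(i, Nu)          # inputs beyond the control horizon are frozen
            Gu[(j - 1) * n:j * n, col * m:(col + 1) * m] += \
                np.linalg.matrix_power(A, j - 1 - i) @ B
    return F, Gu

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed discretized double integrator
B = np.array([[0.005], [0.1]])
N2, Nu = 10, 3
F, Gu = control_matrices(A, B, N2, Nu)
IQ = np.eye(N2 * 2)                      # Q = I in every diagonal block
IW = 0.1 * np.eye(Nu + 1)                # W = 0.1

x = np.array([[1.0], [0.0]])
# eq. (2.111): the unconstrained minimizer of (2.110)
Uu = -np.linalg.solve(Gu.T @ IQ @ Gu + IW, Gu.T @ IQ @ F @ x)

def cost(U):
    Xhat = F @ x + Gu @ U
    return (Xhat.T @ IQ @ Xhat + U.T @ IW @ U).item()
```

In a receding horizon implementation only the first element of U_u would be applied before the computation is repeated at the next sample.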

Predictive Control with Control Signal Filtering


At times, control signal filtering can improve performance. We assume that the
filtered version uf (k ) of u(k ) can be written in terms of state model,

xu (k + 1) = Au xu (k) + Bu u(k)
uf (k) = Cu xu (k) + Du u(k):
The j -step prediction of uf (k ) is given by
h

uf (k + j ) = Cu Aju xu (k) + Cu Aj 1 B
and stacking the predictions from j

(2.112)

   CuA B Du U;
0

= 0; : : : ; Nu yields

Uf = Ff xu (k) + Gf Uu ;
with

2
6
6
Ff = 6
6
6
4

Cu
Cu Au

..
.
Cu ANu u

(2.113)

7
7
7
7
7
5

6
6
Gf = 6
6
6
4

Du
Cu A0u B

..
.
N
Cu Au u

(2.114)

0
Du

 0
..

..

..

   Cu

.. 7
. 7
7

7:

07
5
Du

(2.115)

Once more we can write the minimization problem in terms of linear algebra,

J(k) = \hat{X}^T I_Q \hat{X} + U_f^T I_W U_f,   (2.116)

and using our prediction \hat{X} in (2.106) and the just found U_f in (2.114) we can minimize with respect to U_u:

U_u = -\left( G_u^T I_Q G_u + G_f^T I_W G_f \right)^{-1} \left( G_u^T I_Q F x(k) + G_f^T I_W F_f x_u(k) \right).   (2.117)


2.3.5 Constrained Predictive Control


Unlike the unconstrained linear problem, an analytic solution rarely exists for the constrained optimal control problem. Consequently, the constrained minimization problem must be solved in every sample, and efficient, fast algorithms are needed. The most common approach is to apply an iterative Quadratic Programming (QP) method, which minimizes a quadratic objective function subject to linear constraints. Such a problem is described as

\min_u \; \frac{1}{2} u^T H u + c u,   (2.118)

subject to

P u \le q,   (2.119)

where P and q describe the constraints.


The unconstrained predictive control problem was solved by minimizing the criterion in (2.110). Based on this, straightforward calculations lead to the QP formulation. First,

J(k) = \hat{X}^T I_Q \hat{X} + U_u^T I_W U_u
     = U_u^T \left( I_W + G_u^T I_Q G_u \right) U_u + 2 x^T F^T I_Q G_u U_u + x^T F^T I_Q F x.   (2.120)

Constant terms have no influence on the value of U_u for which the criterion is minimized, which leads to a reformulated criterion, minimized for the same value of U_u:

J^*(k) = \frac{1}{2} U_u^T \left( I_W + G_u^T I_Q G_u \right) U_u + x^T F^T I_Q G_u U_u
       = \frac{1}{2} U_u^T H U_u + c U_u,   (2.121)

which is a standard QP criterion function.
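The rewriting into the standard QP data H and c can be illustrated as follows; the scalar plant, horizons, and weights are hypothetical numbers chosen only to keep the matrices small.

```python
import numpy as np

# hand-built prediction matrices for the scalar plant x(k+1) = 0.9 x(k) + 0.1 u(k),
# with N2 = 3 and Nu = 2 (hypothetical values)
a, b = 0.9, 0.1
F = np.array([[a], [a ** 2], [a ** 3]])
Gu = np.array([[b,          0.0,    0.0],
               [a * b,      b,      0.0],
               [a ** 2 * b, a * b,  b  ]])
IQ = np.eye(3)                # Q = 1
IW = 0.01 * np.eye(3)         # W = 0.01
x = 2.0

# QP data of (2.121): J*(k) = 1/2 Uu^T H Uu + c Uu
H = IW + Gu.T @ IQ @ Gu
c = x * F.T @ IQ @ Gu         # the row vector x^T F^T IQ Gu

# the unconstrained minimum satisfies H Uu = -c^T, recovering (2.111)
Uu = -np.linalg.solve(H, c.T)
```

H is symmetric positive definite whenever W > 0, so the QP is convex and the iterative methods discussed below converge to the global minimum.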


Modelling Constraints
Three different kinds of constraints will be considered. Most commonly, constraints acting on a process originate from magnitude limits on the control signal, but rate limits on the control signal and limits on the states also occur. The constraints can be described by

Magnitude:  u_{min} \le u(k) \le u_{max},  \forall k
Rate:       \Delta u_{min} \le u(k) - u(k-1) \le \Delta u_{max},  \forall k
States:     x_{min} \le x(k) \le x_{max},  \forall k.   (2.122)

Instead of using the state vector x directly, we may introduce a new vector z = C_z x which contains the constrained combinations of states. The following shows how to model the constraints such that a QP problem can be solved.
First we introduce the vector 1_m(x) = [x, x, \ldots, x]^T of length m. Then, over a receding horizon N_u, we can describe the magnitude constraints as

P_m U_u \le q_m,   (2.123)

with

P_m = \begin{bmatrix} I \\ -I \end{bmatrix},
\qquad
q_m = \begin{bmatrix} 1_{N_u}(u_{max}) \\ -1_{N_u}(u_{min}) \end{bmatrix},   (2.124)

where I is an identity matrix of dimension N_u \times N_u. For the rate constraints we get


P_r U_u \le q_r,   (2.125)

where

P_r = \begin{bmatrix} P_r' \\ -P_r' \end{bmatrix},   (2.126)

and

P_r' = \begin{bmatrix}
1 & 0 & 0 & \cdots & 0 \\
-1 & 1 & 0 & \cdots & 0 \\
0 & -1 & 1 & \cdots & 0 \\
\vdots & & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & -1 & 1
\end{bmatrix},
\qquad
q_r = \begin{bmatrix} 1_{N_u}(\Delta u_{max}) \\ -1_{N_u}(\Delta u_{min}) \end{bmatrix}
+ \begin{bmatrix} u(k-1) \\ 0 \\ \vdots \\ 0 \\ -u(k-1) \\ 0 \\ \vdots \\ 0 \end{bmatrix}.   (2.127)

2.3 / Predictive Controller Design

47

For the state constraints we use the N_2 state predictions \hat{Z} = I_{C_z} \hat{X} = I_{C_z} (F x(k) + G_u U_u), where I_{C_z} is a block diagonal matrix with instances of C_z in the diagonal. The state constraints can be described as

P_z U_u \le q_z,   (2.128)

with

P_z = \begin{bmatrix} I_{C_z} G_u \\ -I_{C_z} G_u \end{bmatrix},
\qquad
q_z = \begin{bmatrix} 1_{n_z N_2}(z_{max}) - I_{C_z} F x(k) \\ -1_{n_z N_2}(z_{min}) + I_{C_z} F x(k) \end{bmatrix},   (2.129)

where n_z is the number of outputs in z.


In condensed form the constraints can be expressed as

P U_u \le q,   (2.130)

with

P = \begin{bmatrix} P_m \\ P_r \\ P_z \end{bmatrix},
\qquad
q = \begin{bmatrix} q_m \\ q_r \\ q_z \end{bmatrix}.   (2.131)
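A sketch of how the stacked constraint matrices might be assembled in practice; the horizon and limits are illustrative, and only the magnitude and rate constraints of (2.123)-(2.127) are included.

```python
import numpy as np

def constraint_matrices(Nu, umax, umin, dumax, dumin, u_prev):
    """Stack the magnitude (2.123)-(2.124) and rate (2.125)-(2.127)
    constraints as P Uu <= q for Uu = [u(k), ..., u(k+Nu)]^T."""
    N = Nu + 1
    I = np.eye(N)
    ones = np.ones((N, 1))
    # magnitude: umin <= u(k+i) <= umax
    Pm = np.vstack([I, -I])
    qm = np.vstack([umax * ones, -umin * ones])
    # rate: dumin <= u(k+i) - u(k+i-1) <= dumax, with u(k-1) known
    D = np.eye(N) - np.eye(N, k=-1)         # first-difference matrix P_r'
    e1 = np.zeros((N, 1)); e1[0] = 1.0
    Pr = np.vstack([D, -D])
    qr = np.vstack([dumax * ones + u_prev * e1,
                    -dumin * ones - u_prev * e1])
    return np.vstack([Pm, Pr]), np.vstack([qm, qr])

P, q = constraint_matrices(Nu=2, umax=1.0, umin=-1.0,
                           dumax=0.5, dumin=-0.5, u_prev=0.2)
```

Since the rate constraints on u(k) involve the already applied u(k-1), the right-hand side q must be rebuilt at every sample.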

State Constraints
The constraints on the states of the controlled system are normally imposed for two reasons:
Safety concern
Certain operating conditions may prevent unnecessary wear and tear of the installations, and hence, from a safety viewpoint, system states are constrained. For example, a temperature or a velocity may have to lie within certain limits to prevent stressing the equipment, and a space telescope normally has certain exclusion zones towards the sun (regions of the state space) where the optics may take damage.
Control performance
As shown in Kuznetsov and Clarke (1994) and Camacho and Bordons (1999), constraints on the controlled output can be used to force the response of the process to have certain characteristics. By imposing constraints, the output can be specified to produce a response within a band, with maximum (or no) overshoot, without oscillations, without non-minimum phase behavior, or with a terminal state constraint where the output is forced to follow the trajectory for a number of sampling periods after the horizon N_2.

The specification of a maximum overshoot is desirable in many practical systems. For example, when controlling a mobile robot, an overshoot may produce a collision with the surroundings. Furthermore, overshoot is related to damping, which is a well-understood parameter often used in selecting desired closed-loop poles. This way, imposing constraints can help visualize and predict the behavior of the closed-loop predictive controlled system. The actual implementation of overshoot constraints is easy. Let the output of interest be defined as z(k) = C_z x(k). Whenever a change in the reference r(k) is made (assuming z(k) \le r(k)), the following constraint is added to the system:

z(k+j) \le k_1 r(k),  j = 1, \ldots, N_2,  k_1 \ge 1,   (2.132)

where k_1 determines the maximum allowable size of the overshoot and N_2 defines the horizon during which the overshoot may occur. Using the predictions of x we have

I_{C_z} G_u U_u \le 1_{N_2}(k_1 r(k)) - I_{C_z} F x(k),   (2.133)

where I_{C_z} is a block diagonal matrix with instances of C_z in the diagonal.


For negative reference changes we have in a similar way

z(k+j) \ge k_2 r(k),  j = 1, \ldots, N_2,  k_2 \ge 1,   (2.134)

and with the state predictions

-I_{C_z} G_u U_u \le -1_{N_2}(k_2 r(k)) + I_{C_z} F x(k).   (2.135)

Often, constraints on the output signals are soft constraints, meaning the output can physically exceed the imposed constraints, and violations can actually be allowed for short periods of time without jeopardizing safety. Soft constraints could to some extent be handled by tuning the controller, adjusting the weights on the signals of interest in the criterion. But, as seen, imposing constraints may help in fulfilling closed-loop performance objectives.


Constraints and Optimization


We have now stated the constrained quadratic programming problem given by (2.118) and (2.119), and in searching for a minimum, an iterative optimization algorithm is used. A basic algorithm is described in Algorithm 2.3.1.

Algorithm 2.3.1 Sketch of iterative optimization algorithm
OPTIMIZATION(x_0)
1  i <- 0
2  while stop criterion is not satisfied
3     do d_i <- search direction
4        alpha_i <- step in search direction
5        x_{i+1} <- x_i + alpha_i d_i
6        i <- i + 1
7        evaluate stop criterion
8  return optimal solution
The algorithm shows that basically two features identify the optimization algorithm: the search direction, and the step size in the search direction.
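For the special case of pure magnitude constraints, the projection onto the feasible set is a simple clip, and Algorithm 2.3.1 reduces to the projected gradient iteration sketched below; the QP data are arbitrary illustrative numbers, not taken from the thesis.

```python
import numpy as np

def projected_gradient_qp(H, c, umin, umax, iters=500, tol=1e-10):
    """Minimize 1/2 u^T H u + c^T u subject to umin <= u <= umax with a
    projected gradient iteration (a box-constrained special case of
    Algorithm 2.3.1: the search direction is the negative gradient and
    each step is followed by a projection, here a clip)."""
    u = np.clip(np.zeros_like(c), umin, umax)       # feasible start
    step = 1.0 / np.linalg.norm(H, 2)               # safe fixed step size
    for _ in range(iters):
        g = H @ u + c                               # gradient
        u_new = np.clip(u - step * g, umin, umax)   # step, then project
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = u_new
    return u

# illustrative QP data
H = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-3.0, -1.0])
u = projected_gradient_qp(H, c, umin=-1.0, umax=1.0)
```

For general linear constraints such as the rate constraints (2.125), the projection is no longer a clip, which is where the complexity of gradient projection methods grows.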
Various quadratic programming techniques are available that will solve the stated minimization problem (see for example Luenberger (1984)) and have been used in a predictive control context, e.g. by García and Morshedi (1984); Camacho (1993) and Chisci and Zappa (1999). However, QP methods are complex, time-consuming (likely to be a problem in electro-mechanical real-time applications), and may lead to numerical problems, as pointed out in Rossiter et al. (1998). Thus, simpler gradient projection methods such as Rosen's have been explored (see for example Bazaraa et al. (1993)) and applied to predictive control applications, e.g. by Soeterboek (1992) and Hansen (1996). However, in the case of rate constraints, the complexity of this algorithm increases significantly.
Suboptimal Constrained Predictive Control
Suboptimal constrained predictive control (SCPC) has been the issue of Kouvaritakis et al. (1998) and Poulsen et al. (1999). The search for an optimal solution is renounced in favor of a decreased computational burden. One such suboptimal approach is based on a linear interpolation between the unconstrained optimal solution and a solution that is known to be feasible (i.e., a solution that does not activate the constraints). The optimization problem is reduced to finding a single interpolation parameter and can be solved explicitly. According to the authors, the degree of optimality of the method is large, and the solution benefits from the fact that it coincides with the unconstrained optimal solution whenever that is feasible.

2.3.6 Design of a Predictive Controller


Although the handling of constraints for the predictive controller is classified as
embedded, the actual design of the controller may easily be split in two steps where
the first step is the design of a linear predictive controller and the second step
models the constraints and selects a way of solving the constrained optimization
problem.
Soeterboek (1992) has a comprehensive evaluation of how the different design
parameters of the unified predictive controller influence the performance of the
closed-loop control system.

2.4 Nonlinear and Time-Varying Methods


So far, the techniques considered in section 2.2 and 2.3 have been linear and have
generated linear controllers (at least in the linear area of operation). This has some
shortcomings. The anti-windup and bumpless transfer approach does not guarantee control signals within the limits since the compensation is not activated until
a constraint becomes active. The predictive controller, however, does satisfy the
constraints but the computational burden is significant and the actual implementation of the controller is difficult seeking a resource optimal programming of the
algorithms.
Since the nature of constraints itself is nonlinear, one might expect a nonlinear
controller to achieve better performance than a linear one. Few nonlinear control
tools exist for incorporating actuator saturations into the design process as even
simple linear systems subject to actuator saturations generate complex nonlinear
problems. Very few general results exist even for constrained linear systemsas a
matter of fact most results are very specific to a small class of systems. Examples
of such systems are chains of integrators and driftless systems (x_ = f (x)u). Some
significant results on nonlinear bounded control can be found in Teel (1992); Sussmann et al. (1994); Lauvdal et al. (1997); Lauvdal and Murray (1999) and Morin
et al. (1998a). The different approaches have different characteristics, but most of

2.4 / Nonlinear and Time-Varying Methods

51

them rely on a scaling of the controller where the feedback gains converge to zero
as the norm of the states tends to infinity.

2.4.1 Rescaling
One of the more promising ideas explored within the nonlinear approaches is the so-called dynamic rescaling, initially proposed in Morin et al. (1998a). The method is based on recent results on stabilization of homogeneous systems in M'Closkey and Murray (1997) and Praly (1997), where a rescaling method has been developed to transform a smooth feedback with (slow) polynomial stability into a homogeneous feedback yielding (fast) exponential stability.^8
In the dynamic rescaling technique, a linear stabilizing controller (for a linear system) is rescaled as a function of the state, generating a new stabilizing and bounded control law. The concept is depicted in Figure 2.11.

[Figure: block diagram with blocks Dynamic Rescaling, Controller, Actuator, and Plant; r is the reference and \hat{u} the actuator output.]
Figure 2.11. Dynamic rescaling.

The family of systems considered is single-input linear open-loop stable and controllable systems in R^n:

\dot{x} = Ax + Bu,   (2.136)

with A and B given as

^8 For a nonlinear system with a null-controllable linearization, exponential stability is not possible with smooth feedback.



A = \begin{bmatrix}
0 & 1 & & 0 \\
\vdots & & \ddots & \\
0 & 0 & \cdots & 1 \\
-a_1 & -a_2 & \cdots & -a_n
\end{bmatrix},
\qquad
B = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}.   (2.137)

In general, the bounded control law for this system turns out to be rather complex (for instance, the controller for a 4-dimensional system is parameterized by some 12 parameters). Although tedious, the method is interesting, as the following example will illustrate.
For simplicity, consider the double integrator

\dot{x}_1 = x_2
\dot{x}_2 = u,   (2.138)

stabilized with any linear state feedback controller

u(x) = -l_1 x_1 - l_2 x_2,  (l_1, l_2 > 0).   (2.139)

Now, for any \lambda > 0, introduce the scaled controller

u(\lambda, x) = -\frac{l_1}{\lambda^2} x_1 - \frac{l_2}{\lambda} x_2,   (2.140)

and the Lyapunov function

V(\lambda, x) = \frac{l_1}{\lambda^4} x_1^2 + \frac{1}{\lambda^2} x_2^2.   (2.141)
The scaled controller will again be a stabilizing controller for (2.138) since \lambda > 0, and the Lyapunov function can be shown to be non-increasing along the trajectories of (2.138)-(2.140). The concept of the rescaling procedure is now to choose \lambda such that the Lyapunov function (2.141) stays at a chosen contour curve for x tending to infinity. This will bound the control signal.
Let \lambda(x) be a scalar function given by

\lambda(x) = \begin{cases} 1 & V(1, x) \le 1 \\ \text{solution of } V(\lambda, x) = 1 & \text{otherwise.} \end{cases}   (2.142)

Then the rescaled feedback law is given as

u(\lambda, x) = \begin{cases} 0 & x = 0 \\ -\dfrac{l_1}{\lambda^2(x)} x_1 - \dfrac{l_2}{\lambda(x)} x_2 & x \ne 0. \end{cases}   (2.143)

The equation V(\lambda, x) = 1 has, for \lambda > 0, a unique positive solution

\lambda^2 = \frac{x_2^2 + \sqrt{x_2^4 + 4 l_1 x_1^2}}{2}.   (2.144)

This controller ensures global asymptotic stability and boundedness. The latter property is verified from (2.143) and (2.144):

\max \left| \frac{l_1}{\lambda^2(x)} x_1 \right| = \sqrt{l_1}   (2.145)

\max \left| \frac{l_2}{\lambda(x)} x_2 \right| = l_2.   (2.146)

The respective minima have opposite signs. A conservative estimate of the maximum of u is then

\max |u(\lambda, x)| \le \sqrt{l_1} + l_2.   (2.147)
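The bound (2.147) can be checked numerically with a small sketch of the rescaled controller (2.142)-(2.144); the gains l_1, l_2 below are illustrative.

```python
import numpy as np

l1, l2 = 2.0, 1.5       # illustrative feedback gains

def V(lam, x1, x2):
    """Lyapunov function (2.141)."""
    return l1 * x1 ** 2 / lam ** 4 + x2 ** 2 / lam ** 2

def rescaled_u(x1, x2):
    """Bounded feedback (2.142)-(2.143) for the double integrator."""
    if x1 == 0.0 and x2 == 0.0:
        return 0.0
    if V(1.0, x1, x2) <= 1.0:
        lam = 1.0
    else:   # unique positive solution (2.144) of V(lam, x) = 1
        lam = np.sqrt((x2 ** 2 + np.sqrt(x2 ** 4 + 4.0 * l1 * x1 ** 2)) / 2.0)
    return -l1 * x1 / lam ** 2 - l2 * x2 / lam

bound = np.sqrt(l1) + l2    # the conservative bound (2.147)
worst = max(abs(rescaled_u(x1, x2))
            for x1 in np.linspace(-100.0, 100.0, 41)
            for x2 in np.linspace(-100.0, 100.0, 41))
```

Even for states far from the origin, the computed control stays below sqrt(l1) + l2, although the two terms of (2.147) are never maximal simultaneously, which is why the estimate is conservative.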

The Lyapunov function V(\lambda, x) controls the rescaling process and hence affects the overall performance of the constrained system. Many possible Lyapunov functions are available, but how to choose the best one is not clear. Another design difficulty is the fact that the control law is guaranteed bounded but the specific bounds are not given. Thus, the designer must, by simulation or direct calculation, determine the bounds, and if they are too large or too narrow, the Lyapunov function must be modified.
Adaptive Determination of Bounds
The bounds on the rescaled controller (2.142)-(2.143) are governed by the Lyapunov function equality V(\lambda, x) = 1 and are difficult to calculate explicitly, except for a cautious assessment. Furthermore, as for the shown example, the bounds depend on the controller feedback gains l_i. This dependency is unfortunate, as the method conceptually could be an add-on to an existing stabilizing linear controller rather than an integrated part of the design of the controller. As we will show, a small extension to the approach will allow the controller to adaptively adjust the right-hand side of the Lyapunov equality such that independently chosen actuator bounds can be specified.
Consider the equation V(\lambda, x) = \beta, where V(\lambda, x) is defined in (2.141) and \beta > 0. For x \ne 0 this equation has the unique positive solution

\lambda = \sqrt{ \frac{1}{2\beta} x_2^2 + \sqrt{ \frac{1}{4\beta^2} x_2^4 + \frac{l_1}{\beta} x_1^2 } }.   (2.148)

Now choose \lambda(x) in the following way:

\lambda(x) = \begin{cases} 1 & \text{if } V(1, x) \le \beta \\ \text{the solution (2.148)} & \text{otherwise.} \end{cases}   (2.149)

If \beta is increased, larger control signals are allowed before the rescaling is activated, and vice versa if \beta is decreased.
By letting \beta be time-varying, we can estimate \beta such that a value is found that corresponds to the given actuator saturation limits. It is imperative that \beta remains positive. Let

\dot{\beta} = -\gamma |\hat{u} - u|,   (2.150)

where \gamma > 0 is a tuning parameter that governs the convergence rate and \hat{u} = sat(u) is the saturated control signal. The initial condition \beta(0) should be chosen large, since the adaptation law only allows \beta to become smaller over time.
For the double integrator we can make use of the fact that we know the bounds on the maximum value of u(\lambda, x). For the controller with \lambda given by (2.149) we get the upper bound

\max |u(\lambda, \beta, x)| \le \max \left| \frac{l_1 x_1}{\lambda^2} \right| + \max \left| \frac{l_2 x_2}{\lambda} \right|   (2.151)
                            = \sqrt{\beta l_1} + \sqrt{\beta}\, l_2.   (2.152)

Define for an actuator \bar{u} = \min(u_{max}, |u_{min}|), where u_{max} and u_{min} are the actuator saturation limits. A lower bound for \beta is then given from \max |u(\lambda, \beta, x)| = \bar{u}:

\underline{\beta} = \left( \frac{\bar{u}}{\sqrt{l_1} + l_2} \right)^2.   (2.153)


A lower bound for the maximum value of u(\lambda, \beta, x) is given as

\max |u(\lambda, \beta, x)| \ge \min \left( \max \left| \frac{l_1 x_1}{\lambda^2} \right|, \max \left| \frac{l_2 x_2}{\lambda} \right| \right)   (2.154)
                            = \min \left( \sqrt{\beta l_1}, \sqrt{\beta}\, l_2 \right).   (2.155)

This leads to an upper bound for \beta:

\bar{\beta} = \bar{u}^2 \max \left( \frac{1}{l_1}, \frac{1}{l_2^2} \right),   (2.156)

which would be a sensible initial value for \beta, that is, \beta(0) = \bar{\beta}.
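A quick numerical sanity check of the bounds (2.153) and (2.156); the gains and saturation level are those of the simulation study below, and the grid search is only an approximation of the true supremum.

```python
import numpy as np

l1, l2 = 0.5, 1.0        # feedback gains from the simulation study below
u_bar = 1.0              # min(u_max, |u_min|) for the saturation |u| <= 1

# bounds (2.153) and (2.156) on the Lyapunov level beta
beta_lo = (u_bar / (np.sqrt(l1) + l2)) ** 2
beta_hi = u_bar ** 2 * max(1.0 / l1, 1.0 / l2 ** 2)

def u_max_numeric(beta, grid=np.linspace(-50.0, 50.0, 201)):
    """Grid-search estimate of max |u(lam, beta, x)| for the rescaled
    double-integrator controller on the level set V(lam, x) = beta."""
    best = 0.0
    for x1 in grid:
        for x2 in grid:
            lam2 = (x2 ** 2 + np.sqrt(x2 ** 4 + 4.0 * beta * l1 * x1 ** 2)) / (2.0 * beta)
            lam2 = max(lam2, 1.0)      # rescaling inactive where V(1, x) <= beta
            best = max(best, abs(l1 * x1 / lam2 + l2 * x2 / np.sqrt(lam2)))
    return best
```

With these gains the upper bound evaluates to \bar{\beta} = 2, consistent with the initialization \beta(0) = 2 used in the simulation study.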


During convergence of \beta, the control signal will saturate, and hence stability of this extended rescaling approach is no longer strictly guaranteed, but simulation studies promise good performance and stability. The tuning parameter \gamma should be chosen bearing in mind that a large value will speed up convergence but may cause the reached bounds to be conservative, while a small value will cause the controller to saturate over a longer period of time.
Simulation Study
To illustrate the adaptive determination of the saturation bounds, consider the double integrator affected by a nasty (unrealistic but, for this study, interesting) disturbance on the state vector. The system is assumed stabilized by a feedback controller with gains l_1 = 0.5 and l_2 = 1, and the input is magnitude saturated with |u| \le 1. Based on these settings we initialize \beta(0) = 2 and set \gamma = 0.5. Figure 2.12 shows the control signal u(\lambda, \beta, x) and the convergence of \beta. It is seen how u at first saturates, but as \beta tends to a smaller value, the control signal remains within the limits.

2.4.2 Gain Scheduling


The rescaling approach guarantees stability, but performance and even actuator bounds are difficult to specify a priori. The following considers a gain scheduling method similar to rescaling but with a different approach to selecting the scaling. The aim is to have good performance of the constrained closed-loop system in terms of rise time, settling time, and overshoot, which often are deteriorated by constraints. Optimally, these quantities should be as close as possible to those of the unconstrained system.


[Figure: input signal u (top) and scaling factor \beta (bottom) over 50 s.]
Figure 2.12. Time response for the saturation scaling factor \beta.

The Scaling Approach


The problem considered is to stabilize a chain of integrators

\dot{x}_1 = x_2
\dot{x}_2 = x_3
\vdots
\dot{x}_n = \hat{u},   (2.157)

where x_i, i = 1, \ldots, n are the states and \hat{u} is the input subject to saturation, that is, \hat{u} = sat(u). Assume we have designed a linear controller given by

u = -Lx,  L = \left[ l_1 \; l_2 \; \cdots \; l_n \right],   (2.158)

which stabilizes the chain of integrators when no saturation is present. The objective of the gain scheduling is to modify the state feedback matrix L such that the settling time and in particular the overshoot are improved during saturation, compared to the uncompensated constrained system. The overshoot of a constrained dynamic time response is mainly governed by the windup in the controller states, while the settling time is governed by the undamped natural frequency. These observations lead to the following schematic gain scheduling approach:


1. During saturation, schedule the controller such that the closed-loop poles move towards the origin while keeping a constant damping ratio. This will decrease the needed control signal and hence reduce windup without increasing the overshoot. Unfortunately, the rise time is increased.
2. When the controller de-saturates, move the poles back towards the original locations. This will help improve the overall settling time.
A set of complex poles, say p_1 and p_2, are in terms of the damping ratio \zeta and the undamped natural frequency \omega_n given by

p_{1,2} = \omega_n \left( -\zeta \pm j\sqrt{1 - \zeta^2} \right).   (2.159)

This shows that it is feasible to schedule the frequency while maintaining the damping ratio with only one parameter. The gain scheduled poles are given by

p_{1,2}(\lambda) = \frac{\omega_n}{\lambda} \left( -\zeta \pm j\sqrt{1 - \zeta^2} \right),   (2.160)

where \lambda \in [1, \infty) is the scaling factor. For \lambda = 1 no scheduling is applied, while for \lambda \to \infty the poles move towards the origin, reducing the magnitude of the control signal.
Applying this scheduling approach to the chain of integrators, we get the following gain scheduled characteristic polynomial C(s):

C(s) = \left( s - \frac{p_1}{\lambda} \right) \left( s - \frac{p_2}{\lambda} \right) \cdots \left( s - \frac{p_n}{\lambda} \right),   (2.161)

which expanded can be verified to give

C(s) = s^n + \frac{l_n}{\lambda} s^{n-1} + \ldots + \frac{l_1}{\lambda^n},   (2.162)

where l_i, i = 1, \ldots, n are the state feedback gains. This leads to the following definition of the gain scheduled control law

u(\lambda, x) = -L K(\lambda(t)) x,   (2.163)

where

K(\lambda(t)) = \mathrm{diag}\left( \lambda^{-n}(t), \lambda^{-n+1}(t), \ldots, \lambda^{-1}(t) \right),   (2.164)

and \lambda(t) is the scaling factor.
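The pole-scaling property p_i(\lambda) = p_i / \lambda behind (2.161)-(2.164) can be verified numerically; the sketch below uses the fourth-order gains (2.171) from the simulation study later in this section and numpy's polynomial root finder.

```python
import numpy as np

# nominal gains (2.171) for the fourth-order integrator chain
L = np.array([1.0, 3.0 / np.sqrt(2.0), 3.0, 3.0 / np.sqrt(2.0)])
n = len(L)

def scheduled_gains(lam):
    """Apply the diagonal scaling K(lam) of (2.164) to L."""
    K = np.diag([lam ** -(n - i) for i in range(n)])   # lam^-n, ..., lam^-1
    return L @ K

def closed_loop_poles(gains):
    # a chain of integrators under u = -gains x has characteristic
    # polynomial s^n + g_n s^(n-1) + ... + g_1, cf. (2.162)
    return np.roots(np.concatenate(([1.0], gains[::-1])))

p_nom = closed_loop_poles(scheduled_gains(1.0))
lam = 3.0
p_sched = closed_loop_poles(scheduled_gains(lam))
```

Since every pole is divided by the same positive lam, the damping ratios are untouched while the natural frequencies shrink, which is exactly the intended scheduling behavior.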


For a fixed scaling factor (t) =  we observe that if the original poles pi ; i =
1; : : : ; n are in the open left-half part of the complex plane, then so are the poles
pi (); i = 1; : : : ; n of the resulting linear controller when applying a fixed  2
[1; 1[. This follows from (2.160) where we have pi () = 1 pi .
The nonlinear scheduling in (2.164) has several advantageous properties over a
linear scaling K () =  1 I . To illustrate, Figure 2.13 shows a root locus for a
fourth order integrator chain when using linear and nonlinear scaling, respectively.
The original pole placement is the same for the two solutions but it is seen that
linear scaling may lead to unstable poles. Also note that the damping ratio, as
expected, is kept constant during nonlinear scaling.

[Figure: two root-locus plots in the complex plane, (a) nonlinear scaling and (b) linear scaling.]
Figure 2.13. Root locus for a fourth order integrator chain with closed-loop poles as a function of \lambda for nonlinear (left) and linear (right) scaling. \times and \circ indicate nominal (\lambda = 1) and open-loop poles (\lambda \to \infty), respectively.

Notice that for systems with only one or two integrators, linear scaling will also produce stable poles for all feasible values of \lambda. The problem with instability only exists for dimension three or larger.


The Scaling Factor

Having selected the scaling method, we need to specify how to determine the scaling factor \lambda. Based on the two previously stated performance observations, we suggest making the scaling factor partly state and partly time dependent. Define \dot{\lambda}(t, u) as

\dot{\lambda} = \begin{cases}
\gamma |\hat{u} - u| & u > u_{max} + \epsilon \text{ or } u < u_{min} - \epsilon \\
0 & |u_{max} - u| \le \epsilon \text{ or } |u_{min} - u| \le \epsilon \\
-\frac{1}{N}(\lambda - 1) & \text{otherwise,}
\end{cases}   (2.165)

where \gamma > 0, N > 0, \epsilon > 0, and \hat{u} denotes the usual saturated control signal. The first case increases \lambda during saturation, which moves the poles towards the origin and thereby reduces the control signal. Upon de-saturation the third case will force \lambda back to the nominal value \lambda = 1. \epsilon creates a dead-band around the saturation limits which is introduced to smooth the changes in \lambda. Figure 2.14 shows \dot{\lambda} as a function of u.
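A direct transcription of (2.165) as a small function; the parameter values in the signature are illustrative defaults, not recommendations from the thesis.

```python
def lambda_dot(u, lam, umax=1.0, umin=-1.0, gamma=1.0, N=10.0, eps=0.1):
    """Scaling-factor dynamics (2.165): increase lam while saturating,
    hold it in the dead-band around the limits, otherwise decay to 1."""
    u_hat = min(max(u, umin), umax)           # saturated control signal
    if u > umax + eps or u < umin - eps:
        return gamma * abs(u_hat - u)         # deep saturation: grow lam
    if abs(umax - u) <= eps or abs(umin - u) <= eps:
        return 0.0                            # dead-band: hold lam
    return -(lam - 1.0) / N                   # recover towards lam = 1
```

Note that lam = 1 is an equilibrium of the recovery branch, so the nominal design is restored once the controller stays away from the limits.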
[Figure: the sign of \dot{\lambda} along the u axis, with dead-bands of width 2\epsilon around u_{min} and u_{max}.]
Figure 2.14. The choice of \dot{\lambda}.

In Figure 2.15 an example of a time response for u and \lambda is shown. During saturation \lambda increases until u is within the specified bounds. In terms of performance it is desirable to bring \lambda back to unity. This is done at a sufficiently slow rate since, if done too fast, u will hit the other saturation level hard.
The parameters are easy to choose. The following observations are made:

- \gamma depends on system and controller dynamics and on the saturation limits. Choose \gamma large for fast closed-loop systems with little actuator capacity.

- N governs the rate at which the rescaled controller resets to the original design. For large control errors, choose a large value for N, since too fast a recovery can cause instability. Note that a large N may cost in performance. A good value seems to be in the range 10-50.

[Figure: input signal u (top) and scaling factor \lambda (bottom) over 40 s.]
Figure 2.15. Sketch illustrating the scaling factor when the input is magnitude saturated.

- Typically, one can choose \epsilon to be 0-20% of the maximum actuator output.

Extension to Model Class

The chain of integrators considered in the previous section is a somewhat limiting model class. This section will extend the considered class of systems to include single-input linear systems in lower companion form

\dot{x} = Ax + Bu,
\qquad
A = \begin{bmatrix}
0 & 1 & & 0 \\
\vdots & & \ddots & \\
0 & 0 & \cdots & 1 \\
-a_1 & -a_2 & \cdots & -a_n
\end{bmatrix},
\qquad
B = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}.   (2.166)

The goal is to find a scaled feedback control law u = -L(\lambda)x with L(\lambda) = (l_1(\lambda), \ldots, l_n(\lambda)) such that the characteristic polynomial of this system yields

C(s) = s^n + \frac{a_n + l_n}{\lambda} s^{n-1} + \ldots + \frac{a_1 + l_1}{\lambda^n},   (2.167)

where l_i, i = 1, \ldots, n are the nominal non-scaled linear state feedback gains. Solving for the scaled feedback gains l_i(\lambda) we get:

a_i + l_i(\lambda) = \frac{a_i + l_i}{\lambda^{n-i+1}}   (2.168)

\Rightarrow \quad l_i(\lambda) = \frac{1}{\lambda^{n-i+1}} \left( l_i - (\lambda^{n-i+1} - 1) a_i \right).   (2.169)

For \lambda \to \infty we have l_i(\lambda) \to -a_i and C(s) \to s^n which, as expected, places the poles in zero. \lambda = 1 restores the nominal control as l_i(\lambda) = l_i.
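The gain formula (2.169) and its limits are easy to check numerically; the plant coefficients a_i and gains l_i below are hypothetical.

```python
def scaled_gains(a, l, lam):
    """Gains (2.169): l_i(lam) = (l_i - (lam^(n-i+1) - 1) a_i) / lam^(n-i+1),
    so that a_i + l_i(lam) = (a_i + l_i) / lam^(n-i+1), cf. (2.167)-(2.168)."""
    n = len(l)
    out = []
    for i in range(1, n + 1):
        p = lam ** (n - i + 1)
        out.append((l[i - 1] - (p - 1.0) * a[i - 1]) / p)
    return out

a = [2.0, -1.0, 0.5]     # hypothetical plant coefficients a_1..a_n
l = [1.0, 3.0, 2.0]      # hypothetical nominal feedback gains
```

For large lam the scaled gains cancel the plant coefficients, l_i(lam) -> -a_i, consistent with C(s) -> s^n; for lam = 1 the nominal gains are recovered.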

Simulation Study
To illustrate the developed gain scheduling approach, we shall investigate the properties through a simulation study. Consider an integrator chain of length 4, i.e.

\dot{x}_1 = x_2
\dot{x}_2 = x_3
\dot{x}_3 = x_4
\dot{x}_4 = u
y = x_1,   (2.170)

where y is the output. We assume the existence of input unity magnitude limitations |u| \le 1 and a linear stabilizing controller u = -Lx. Initially, we place the four poles of the linear system such that the undamped natural frequencies are \omega_n = 1 and the damping ratios are \zeta = \frac{1}{\sqrt{2}} and \zeta = \frac{1}{2\sqrt{2}}, respectively, which yields the gains

L = \left( 1, \; \frac{3}{\sqrt{2}}, \; 3, \; \frac{3}{\sqrt{2}} \right).   (2.171)

The following simulations assume x(0) = (x_1(0), 2, 0, 0)^T, where x_1(0) may vary. The saturation element causes the linear controller with gains (2.171) to go unstable for x_1(0) > 1.25. Throughout the simulations, \epsilon is set to 0.1. Generally, its influence is negligible, but it may help to remove possible chattering in \lambda. Note that the uncompensated constrained system with no gain scheduling is unstable for all the following examples.
Figure 2.16 shows a simulation result with initial condition x_1(0) = 10. For comparison, the unconstrained linear response is also shown. It is seen that the scheduled response is slower, but the overshoot (undershoot in this case) is similar.
To illustrate the robustness in the choice of the parameters \gamma and N, Figures 2.17 and 2.18 show the effects of varying \gamma and N, respectively.

[Figure: output (top) and input (bottom) over 25 s.]
Figure 2.16. Time response with saturation and gain scheduling (solid) using \gamma = 1 and N = 10. For comparison the unconstrained linear response is shown (dashed). The uncompensated constrained response is unstable.

[Figure: output over 25 s.]
Figure 2.17. Time response for N = 10, \gamma = 0.2 (solid), \gamma = 1 (dashed), and \gamma = 3 (dash-dotted).

[Figure: output over 25 s.]
Figure 2.18. Time response for \gamma = 1, N = 4 (solid), N = 10 (dashed), and N = 100 (dash-dotted).


For all parameter choices the responses are good, but for smaller values of \gamma the overshoot increases, and for large N the response becomes slower.
Finally, Figure 2.19 shows robustness for different initial states. The inputs are scaled so that saturation only occurs during the first seconds. It is seen that particularly for x_1(0) = 100 some oscillations show up. Choosing N larger would reduce these.

[Figure: output (top) and input (bottom) over 50 s.]
Figure 2.19. Time response for \gamma = 1, N = 10, and x_1(0) = 1 (solid), x_1(0) = 50 (dashed), and x_1(0) = 100 (dash-dotted).

2.4.3 Conclusions
The presented rescaling and gain scheduling methods have much in common, such as the way the scaling is incorporated into the controller, but the selection of the scaling/scheduling is different.
The rescaling approach guarantees controller output within the saturation limits, whereas the gain scheduling is more like the AWBT compensation, since the scheduler is activated only when constraints become active.

2.5 Comparison of Constrained Control Strategies


The previous three sections have presented four different methods of handling constraints in the control design process

64

Chapter 2 / Controller Design for Constrained Systems


1. Anti-Windup and Bumpless Transfer (Section 2.2).
2. Constrained Predictive Control (Section 2.3).
3. Dynamic Rescaling (Section 2.4.1).
4. Gain scheduling (Section 2.4.2).

The last two approaches are listed separately, although they have some similarities, such as how the scaling is introduced into the linear controller.
Table 2.5 aims to compare the strategies on a variety of properties. Some properties, such as the computational burden of an implementation and the number of design parameters, depend to some degree on the order of the system and the controller and on which design options are applied. For example, a predictive controller has a number of optional design parameters (N_1, N_2, N_u, weights, filters) but is fairly independent of the order of the system. On the other hand, an AWBT compensation is directly linked to the order of the controller. Therefore, some of the entries in the table are debatable and, in general, the table mainly applies to low-order systems.
Property | Anti-Windup Bumpless Transfer | Constrained Predictive Control | Dynamic Rescaling | Gain scheduling
System | Any | Linear (possibly nonlinear) | Chain of integrators or companion form | Chain of integrators or companion form
Controller | Any | Predictive controller | State feedback | State feedback
Model-based | No | Yes | Yes | Yes
Constraints | Input magnitude | Input, output rate, magnitude | Input magnitude | Input rate, magnitude
Signals guaranteed within limits | No | Yes | Yes | No
Specification of constraints | Easy | Easy | Trial and error, adaptive | Easy
Computational burden | Low | High | Low | Low
Number of design parameters | Few | Few-Many | Many | Few
Parameter understanding | Good | Good | Low | Fair
Stability results | Few | Yes | Yes | Few
Constrained performance | Depends on design/Good | Good/Optimal | Fair | Depends on design/Good
Unconstrained when feasible | Yes | Yes | Yes | Transient
Industrial implementations | Numerous | Numerous | Few | Few
Documentation | Extensive | Extensive | Rudimentary, few papers | Rudimentary, few papers

Table 2.5: Properties of discussed constrained control strategies.

The comparison is far from complete, since constrained control approaches other than those discussed are available in the literature. Examples are sliding mode control, mode-switching systems, specific nonlinear controllers, bang-bang control, and supplementary gain scheduling approaches. However, they will not be discussed further.

2.6 Case Studies


Three cases are studied in the following, the last two being of greatest importance. The first study briefly investigates a constrained single integrator. The last two studies use a cascaded double tank system to examine the observer-based approach (Åström and Rundqwist, 1989) and a predictive controller, and aim to verify the usefulness of the procedures and comment on the selection of parameters.


2.6.1 PI Control of Single Integrator


The objectives of this example are twofold. Firstly, we aim to show how actuator constraints can be handled explicitly in the design of a PI controller if the applied references and disturbances are known at the design level. Secondly, the example should demonstrate that this is not a practicable design approach.
For simplicity, the system G(s) under consideration is a single integrator and the controller C(s) is a PI controller. The transfer functions are given as

G(s) = \frac{1}{s},
\qquad
C(s) = k \, \frac{1 + \tau_i s}{\tau_i s},   (2.172)

where k and i are the controller parameters. The transfer function from reference
to controller output is found to be

s(ks + ki )
s(2!n s + !n2 )
u(s)
= H (s) = 2
=
;
r(s)
s + ks + ki s2 + 2!n s + !n2

(2.173)

where
r

k
i
pp
k
=
= 0:5 k i ;
2!n

!n =

(2.174)
(2.175)

and denote the undamped natural frequency and the damping ratio, respectively.
Closed-loop stability is guaranteed for k; i > 0.
Assuming a unity step input r(s) = 1/s as worst-case we want to find the maximum
value u(t) can take. Applying the inverse Laplace transform to H(s)/s yields the
following time response for u(t)

    u(t) = e^(−ζω_n t) [ 2ζω_n cos(ω_d t) + ω_n (1 − 2ζ²)/√(1 − ζ²) sin(ω_d t) ],  t ≥ 0,   (2.176)

where

    ω_d = ω_n √(1 − ζ²),                                                      (2.177)

is the damped natural frequency. We have assumed complex poles (ζ < 1). Solving
du/dt = 0 for t = t_max, we get


    t_max = (1/ω_d) arctan[ √(1 − ζ²)(1 − 4ζ²) / (ζ(3 − 4ζ²)) ].             (2.178)

For 0.5 ≤ ζ ≤ 1 we have t_max ≤ 0 which means that the maximum of the time
response for u(t), t ≥ 0, occurs at t = 0. For ζ < 0.5 we have t_max > 0 and the
maximum is u(t_max).
For t = 0 we get u(0) = 2ζω_n = k. In order to avoid saturation, k should be
selected smaller than the smallest saturation level, that is

    k ≤ min(|u_min|, u_max).                                                  (2.179)

Having selected k we need to select τ_i based on the wanted damping ratio, which
governs the overshoot, and the natural frequency, which governs the rise time. A
small rise time implies a small damping ratio, so it is a compromise. The damping
ratio, though, also governs the closed-loop zero (−ω_n/(2ζ) = −1/τ_i) which moves to
the right towards the poles for increasing ζ. This will add overshoot to the step
response.
In case t_max > 0 we can, for a selected ζ, solve u(t_max) = min(|u_min|, u_max) for ω_n
and then determine k and τ_i.
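These relations can be wrapped into a small design routine. The following Python sketch (function names are ours) evaluates Eqs. (2.176) and (2.178) and, for ζ < 0.5, exploits that u(t_max) scales linearly with ω_n for fixed ζ, so ω_n can be scaled to hit a given saturation level exactly:

```python
import math

def u_step(t, zeta, wn):
    """Controller output for a unit reference step, Eq. 2.176 (zeta < 1)."""
    wd = wn * math.sqrt(1 - zeta**2)
    return math.exp(-zeta * wn * t) * (
        2 * zeta * wn * math.cos(wd * t)
        + wn * (1 - 2 * zeta**2) / math.sqrt(1 - zeta**2) * math.sin(wd * t))

def t_peak(zeta, wn):
    """Time of the maximum of u(t), Eq. 2.178; zero for zeta >= 0.5."""
    if zeta >= 0.5:
        return 0.0
    wd = wn * math.sqrt(1 - zeta**2)
    return (1 / wd) * math.atan(
        math.sqrt(1 - zeta**2) * (1 - 4 * zeta**2) / (zeta * (3 - 4 * zeta**2)))

def design(zeta, u_sat):
    """Return (k, tau_i) such that the unit-step peak of u equals u_sat.
    Uses the fact that u(t_peak) scales linearly with wn for fixed zeta."""
    peak_per_wn = u_step(t_peak(zeta, 1.0), zeta, 1.0)
    wn = u_sat / peak_per_wn
    k = 2 * zeta * wn            # from Eq. 2.175
    tau_i = k / wn**2            # from Eq. 2.174
    return k, tau_i

k, tau_i = design(zeta=0.4, u_sat=1.0)
print(k, tau_i)
```

For ζ ≥ 0.5 the peak is u(0) = k, so the routine simply returns k = u_sat, consistent with Eq. (2.179) with equality.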
This design approach might be extended to higher order systems but some obvious
disadvantages remain:

- The designer needs a priori knowledge of worst-case references and disturbances,
  which may or may never occur in practice. The risk is a conservative
  controller.

- The degree of freedom in the controller design is reduced as the constraints
  are considered.

- Complexity increases significantly for higher order controllers and systems.

The most compromising drawback is of course the requirement of a priori knowledge,
which undermines the concept behind feedback control of dynamic systems.

2.6.2 Anti-Windup Control of Double Tank System with Saturation


This case study aims to demonstrate how the choice of the AWBT compensation
matrix M in an observer-based anti-windup scheme is of great importance for the


closed-loop constrained response. For the double tank system considered, we show
a way of selecting the AWBT parameters such that superior performance is obtained.
Both Åström and Rundqwist (1989) and Kapoor et al. (1998) have illustrated
anti-windup on the same process, namely a system consisting of two identical cascaded
tanks. The input to the system is the pump speed which determines the flow rate to
the upper tank. The process output considered is the lower tank's fluid level. The
process is shown in Figure 2.20 where a pump transports the fluid from a reservoir
to the upper tank.

Figure 2.20. Double tank system (pump feeding the upper tank, which drains into the lower tank).

Linearized around an operating point, the double tank can be described by the state
space model

    ẋ = [ −α   0 ] x + [ β ] û = Ax + Bû                                      (2.180)
        [  α  −α ]     [ 0 ]

    y = [ 0  1 ] x = Cx,                                                      (2.181)

or in the Laplace domain as

    y(s) = αβ/(s + α)² û(s),                                                  (2.182)

with α = 0.015 and β = 0.05 for the particular system. The states x_1 and x_2 are the
upper and lower tank level perturbations around the operating point, respectively,
and û is proportional to the flow generated by the pump. The manipulated input û
is restricted to the interval [0, 1].
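As a quick sanity check of the model, the following Python sketch (the forward-Euler discretisation, step size, and horizon are our own illustrative choices, not from the text) simulates Eq. (2.180) with the pump command clipped to [0, 1]:

```python
import numpy as np

alpha, beta = 0.015, 0.05
A = np.array([[-alpha, 0.0], [alpha, -alpha]])
B = np.array([beta, 0.0])

def simulate(u_cmd, x0=(0.0, 0.0), dt=1.0, steps=600):
    """Forward-Euler simulation of Eq. 2.180 with actuator saturation."""
    x, ys = np.array(x0, float), []
    for k in range(steps):
        u_hat = min(1.0, max(0.0, u_cmd(k)))   # u_hat = sat(u), limits [0, 1]
        x = x + dt * (A @ x + B * u_hat)
        ys.append(x[1])
    return np.array(ys)

# A command above the upper limit saturates at 1; the lower tank level then
# settles towards the stationary gain beta/alpha = 10/3.
y = simulate(lambda k: 2.0)
print(y[-1])
```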


A continuous-time 2 DoF PID controller

    u = K ( br − y + 1/(τ_i s) (r − y) + τ_d s/(1 + τ_d s/N) (r − y) ),       (2.183)

is fitted to the system. A state space realization of the PID controller with
observer-based anti-windup as described in Section 2.2.3 is given as

    ẋ_c = [ 0    0     ] x_c + [ 1      ] r − [ 1      ] y + [ m_1 ] (û − u)  (2.184)
          [ 0  −N/τ_d  ]       [ KN/τ_d ]     [ KN/τ_d ]     [ m_2 ]
        = F x_c + G_r r − G_y y + M(û − u)                                    (2.185)

    u = [ K/τ_i  −N ] x_c + K(N + b) r − K(N + 1) y = H x_c + J_r r − J_y y   (2.186)

    û = sat(u),                                                               (2.187)

where M is the anti-windup compensation. The controller parameters are set to
K = 5, τ_i = 40, τ_d = 15, N = 5, and b = 0.3 such that the unsaturated closed-loop
system has poles at

    −0.0257 ± i0.0386,  −0.2549,  −0.0569,                                    (2.188)

and exhibits satisfactory performance to a step on the reference.
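The pole locations in Eq. (2.188) can be verified numerically. The sketch below forms the characteristic polynomial of the unsaturated loop by putting the PID terms acting on y over a common denominator; the reduction is our own algebra, used here only as a check:

```python
import numpy as np

alpha, beta = 0.015, 0.05
K, tau_i, tau_d, N = 5.0, 40.0, 15.0, 5.0

# Characteristic polynomial of the unsaturated loop:
#   (s+alpha)^2 * tau_i*s*(tau_d*s + N) + alpha*beta*K*C_num(s),
# where C_num(s) collects K*(1 + 1/(tau_i s) + tau_d s/(1 + tau_d s/N))
# over the common denominator tau_i*s*(tau_d*s + N).
plant_den = np.polymul([1, alpha], [1, alpha])
ctrl_den = np.polymul([tau_i, 0.0], [tau_d, N])
ctrl_num = [tau_i * tau_d * (1 + N), tau_i * N + tau_d, N]
char = np.polyadd(np.polymul(plant_den, ctrl_den),
                  alpha * beta * K * np.array(ctrl_num))
print(np.sort(np.roots(char)))
```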


The following will make a fair comparison of the performance for different choices
of the AWBT gain matrix M and to extract general design rules based on the
studys results.
Selection of Anti-Windup Gain Matrix
The objective of this experiment is to evaluate AWBT settings and determine optimal AWBT setting for various references and disturbances. The experiment is
conducted as follows.
1. t = 0: Reference step r = 1, process and controller start from stationarity
   with x(0) = 0 and x_c(0) = 0.

2. t = 300: Impulse disturbance: state x_2 is changed to 1.5.

3. t = 600: Load disturbance on u with magnitude 0.65.


Figure 2.21. Time response for the unconstrained system (solid) and the constrained system
without anti-windup (dashed).

For the experiment Figure 2.21 shows the output of the closed-loop system without
saturation and for the closed-loop system with saturation but no anti-windup
compensation. The response with saturation is obviously poor with unsatisfactory
overshoots and oscillations. A necessary condition for stability of the constrained
closed-loop system is stable eigenvalues of the matrix

    F_M = F − MH = [ −m_1 K/τ_i    m_1 N          ]
                   [ −m_2 K/τ_i    m_2 N − N/τ_d  ],                          (2.189)

which describes the closed-loop dynamics of the controller during saturation. It is,
however, not a sufficient condition. As an example of that, choose M such that the
eigenvalues of F_M are (−0.0020 ± i0.0198) and expose the saturated tank system
to a unity reference step. Figure 2.22 shows the marginally stable result.

Figure 2.22. The marginally stable response for the constrained system with a saturation-wise
stable controller.


For the double tank system with the described experiment, Åström and Rundqwist
(1989) have considered three different choices of M which place the eigenvalues
of the matrix F_M at (−0.05, −0.05), (−0.10, −0.10), and (−0.15, −0.15). As
pointed out in Åström and Rundqwist (1989) only the first choice of M gives
satisfactory results for the impulse disturbance and hence only the first choice will
be considered here. Kapoor et al. (1998) advocate choosing the eigenvalues of
F_M at (−0.2549, −0.0569), which guarantees stability for feasible references.
Using Hanus' conditioning technique described in section 2.2.4 we get M = G_r J_r⁻¹
which gives the eigenvalues (−0.0118 ± i0.0379) of F_M. Finally, we suggest
choosing the eigenvalues of F_M equal to the two slowest poles of the closed-loop
system, namely (−0.0257 ± i0.0386). We will refer to these specific anti-windup
matrices as M_a, M_k, M_h and M_s for the Åström, Kapoor, Hanus, and the slowest
poles design, respectively.
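Since F_M = F − MH in Eq. (2.189) is only 2-by-2, the gain M that places its eigenvalues at a desired pair of locations follows in closed form from the trace and determinant of (2.189). The Python sketch below (the helper name awbt_gain is ours) reproduces the slowest-poles design M_s:

```python
import numpy as np

# Controller parameters from the text
K, tau_i, tau_d, N = 5.0, 40.0, 15.0, 5.0

F = np.array([[0.0, 0.0], [0.0, -N / tau_d]])
H = np.array([[K / tau_i, -N]])

def awbt_gain(poles):
    """Solve for M = [m1, m2]^T such that eig(F - M H) equals `poles`.
    From Eq. 2.189: det(F_M) = m1*K*N/(tau_i*tau_d) and
    trace(F_M) = -m1*K/tau_i + m2*N - N/tau_d."""
    p1 = -(poles[0] + poles[1]).real        # desired s^1 coefficient
    p2 = (poles[0] * poles[1]).real         # desired s^0 coefficient
    m1 = p2 * tau_i * tau_d / (K * N)
    m2 = (m1 * K / tau_i + N / tau_d - p1) / N
    return np.array([[m1], [m2]])

# Slowest-poles design M_s: eig(F_M) at the two slowest closed-loop poles
Ms = awbt_gain([-0.0257 + 0.0386j, -0.0257 - 0.0386j])
print(np.linalg.eigvals(F - Ms @ H))
```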
Figure 2.23 shows input and output profiles for the four design choices. All four
choices improve the response, although Hanus' conditioning technique tends to
oscillate since the control signal after de-saturation immediately saturates at the
other level. This is the inherent short-sightedness mentioned in section 2.2.4. The
performance of the system exposed to the load disturbance is almost identical for
all gain selections and will not be investigated any further.
Figure 2.23. Time response for the constrained system for different choices of the anti-windup
gain matrix. The unconstrained response (dashed), the constrained response
(solid), and the input (dotted) are shown for anti-windup gains (a) M_a, (b) M_k, (c) M_h,
and (d) M_s.


Since the purpose of AWBT is to reduce the effects from saturation (maintaining
stability and good performance of the closed-loop system under saturation) we will
consider the error between the constrained and the unconstrained time response.
Define the output error ỹ = y − y_u where y and y_u are the constrained and
unconstrained outputs, respectively. The following performance index is considered

    I = 1/(N_2 − N_1 + 1) ∑_{i=N_1}^{N_2} |ỹ(i)|,                             (2.190)

where ỹ(i) is the sampled error. To ease the comparison of the different designs, the
generated performance indices from an experiment are normed with the smallest
index from the experiment. This implies that the best design choice for a specific
experiment will have a normed error equal to 1. Note, the normed errors from one
experiment to another cannot be compared.
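The index of Eq. (2.190) and the norming step are simple to implement. In the Python sketch below (function names are ours) the raw indices are back-computed from the full-experiment column of Table 2.6 (normed value times min(I) = 0.0478), purely for illustration:

```python
import numpy as np

def awbt_index(y, y_u, n1=0, n2=None):
    """Mean absolute deviation between constrained (y) and unconstrained
    (y_u) sampled outputs over samples N1..N2 (Eq. 2.190)."""
    y, y_u = np.asarray(y, float), np.asarray(y_u, float)
    n2 = len(y) - 1 if n2 is None else n2
    return np.mean(np.abs(y[n1:n2 + 1] - y_u[n1:n2 + 1]))

def normalise(indices):
    """Divide each index by the smallest one from the same experiment,
    so the best design scores exactly 1."""
    best = min(indices.values())
    return {name: value / best for name, value in indices.items()}

# Illustrative raw indices implied by the full-experiment column of Table 2.6
raw = {'Ma': 1.11 * 0.0478, 'Mk': 1.34 * 0.0478,
       'Mh': 1.17 * 0.0478, 'Ms': 1.00 * 0.0478}
print(normalise(raw))
```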
The four designs are compared in Table 2.6 for the full experiment as well as for the
different sub-experiments. Notice that the slowest poles selection yields superior
performance under all tried conditions while Kapoor's selection performs the worst
except with regards to the load disturbance where Hanus' selection takes the credit.

             Experiments
  Design     Full exp.   Reference   Impulse   Load
  Ma         1.11        1.11        1.11      1.07
  Mk         1.34        1.50        1.24      1.04
  Mh         1.17        1.12        1.21      1.23
  Ms         1.00        1.00        1.00      1.00
  no AWBT    2.19        2.44        1.96      1.93
  min(I)     0.0478      0.0696      0.0582    0.0155

Table 2.6. Normed errors between constrained and unconstrained response exposed to
reference step, impulse disturbance, and load disturbance.

The conditioning technique with cautiousness mentioned in 2.2.4 may help to
improve Hanus' conditioning technique. Figure 2.24 shows a root locus for the poles
of F_M using cautiousness. We see that cautiousness does not give the designer full
control of the pole locations and cautiousness cannot move the poles to either the
locations of the Åström design or the slowest poles design.

Figure 2.24. Eigenvalues of F_M = F − MH using the conditioning technique with
cautiousness, with the cautiousness parameter varying from 0 (o) to ∞. Eigenvalues for
other designs are also shown: M_a, M_k, and M_s (+).

Robustness to Reference Steps

The objective of this part of the study is to investigate the robustness to different
sized reference steps.
The input-output gain of the process is β/α = 10/3 and û ∈ [0, 1]. Hence, the
feasible references for which the constraints are inactive in stationarity are restricted
to the interval [0, 10/3], assuming no load disturbance on u is present.
The setting for the experiment is:

1. t = 0: Reference step r ∈ [0.5, 1, 1.5, 2, 2.5, 3], process and controller start
   from stationarity with x(0) = 0 and x_c(0) = 0.

Table 2.7 shows the normed errors for the experiment. For all reference steps M_s
yields the minimum error between the constrained and unconstrained system.
Especially for small step sizes we note a significant proportional improvement from
a well-selected anti-windup matrix. This suggests that anti-windup design should
not only be considered for large saturations but anytime saturation occurs in the
actuators. For large saturations, the greatest concern is to maintain stability, whereas
for small saturations it is more a matter of performance.


             Experiments
  Design     r = 0.5   r = 1    r = 1.5   r = 2    r = 2.5   r = 3
  Ma         1.33      1.11     1.09      1.04     1.03      1.01
  Mk         2.21      1.50     1.34      1.22     1.13      1.08
  Mh         1.76      1.12     1.26      1.12     1.02      1.01
  Ms         1.00      1.00     1.00      1.00     1.00      1.00
  no AWBT    1.99      2.46     2.56      2.20     2.02      1.36
  min(I)     0.0098    0.0348   0.0728    0.1434   0.2500    0.4009

Table 2.7. Normed errors between constrained and unconstrained response to reference
steps.

Robustness to Impulse Disturbances

The objective of this part of the study is to investigate the robustness to different
sized impulse disturbances on x_2. The settings for the experiment are:

1. t = 0: Reference step r = 1, process and controller start from stationarity
   with x(0) = 0 and x_c(0) = 0.

2. t = 300: Impulse disturbance: x_2(t) = x_2(t⁻) + Δx_2, where Δx_2 ∈
   [0.1, 0.5, 1, 1.5, 2].

Table 2.8 shows the normed errors for the experiment, where M_s again provides
the system with the best performance in all cases.

Pole Placement Controller

So far a 2 DoF PID controller has been used which places three of the four closed-loop
poles at almost the same frequency. We have seen that placing the anti-windup
compensated poles of the controller during saturation at the same location as the
two slowest of the closed-loop poles gives superior performance for various
reference steps and disturbances.


             Experiments
  Design     Δx2 = 0.1   Δx2 = 0.5   Δx2 = 1   Δx2 = 1.5   Δx2 = 2
  Ma         1.24        1.11        1.11      1.10        1.08
  Mk         2.21        1.24        1.09      1.06        1.05
  Mh         1.05        1.21        1.35      1.38        1.38
  Ms         1.00        1.00        1.00      1.00        1.00
  no AWBT    2.26        1.96        1.97      1.98        2.00
  min(I)     0.0024      0.0291      0.0698    0.1127      0.1568

Table 2.8. Normed errors between constrained and unconstrained response to impulse
disturbances on x_2.

The following seeks to investigate how a change in the location of the closed-loop
poles influences the selection of the anti-windup matrix. In order to get better
flexibility in placing the closed-loop poles we exchange the PID controller with a
pole placement strategy parameterized by the controller

    R(s)u(s) = T(s)r(s) − S(s)y(s) + M(s)(û(s) − u(s)),                       (2.191)

with a desired closed-loop characteristic polynomial A_cl(s) = A_o(s)A_c(s) where
A_o(s) = s² + a_1 s + a_2 and A_c(s) = s² + a_3 s + a_4. We choose T(s) =
(a_4/(αβ)) A_o(s) which gives the simple reference to output transfer function

    y(s) = a_4/(s² + a_3 s + a_4) r(s).                                       (2.192)

For the double tank system we need to solve the so-called Diophantine equation

    A_cl(s) = (s + α)² R(s) + αβ S(s).                                        (2.193)

For a second order controller with integral action we have R(s) = s(s + r_1) and
S(s) = s_0 s² + s_1 s + s_2 and by identifying coefficients of powers of equal degree
we find the solution

    r_1 = a_1 + a_3 − 2α                                                      (2.194)
    s_2 = a_2 a_4 / (αβ)                                                      (2.195)
    s_1 = ( −α²(a_1 + a_3) + 2α³ + a_2 a_3 + a_1 a_4 ) / (αβ)                 (2.196)
    s_0 = ( a_1 a_3 − 2α(a_1 + a_3) + 3α² + a_2 + a_4 ) / (αβ).               (2.197)
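The coefficient matching behind Eqs. (2.194)-(2.197) can be checked numerically. The Python sketch below computes the controller coefficients for the pole locations used in Eq. (2.198) and verifies the Diophantine identity (2.193):

```python
import numpy as np

alpha, beta = 0.015, 0.05

def pole_placement(a1, a2, a3, a4):
    """Closed-form solution (Eqs. 2.194-2.197) of the Diophantine equation
    Acl(s) = (s+alpha)^2 R(s) + alpha*beta*S(s), with R = s(s + r1) and
    S = s0 s^2 + s1 s + s2."""
    r1 = a1 + a3 - 2 * alpha
    s2 = a2 * a4 / (alpha * beta)
    s1 = (-alpha**2 * (a1 + a3) + 2 * alpha**3 + a2 * a3 + a1 * a4) / (alpha * beta)
    s0 = (a1 * a3 - 2 * alpha * (a1 + a3) + 3 * alpha**2 + a2 + a4) / (alpha * beta)
    return r1, s0, s1, s2

# Observer poly Ao and control poly Ac giving the pole set of Eq. 2.198
Ao = np.polymul([1, 0.2549], [1, 0.2549])        # both observer poles at -0.2549
Ac = [1, 2 * 0.0257, 0.0257**2 + 0.0386**2]      # poles at -0.0257 +/- i0.0386
a1, a2 = Ao[1], Ao[2]
a3, a4 = Ac[1], Ac[2]

r1, s0, s1, s2 = pole_placement(a1, a2, a3, a4)

# Check: (s+alpha)^2 * s(s+r1) + alpha*beta*S(s) equals Ao(s)*Ac(s)
lhs = np.polyadd(np.polymul(np.polymul([1, alpha], [1, alpha]), [1, r1, 0]),
                 alpha * beta * np.array([0.0, 0.0, s0, s1, s2]))
print(np.allclose(lhs, np.polymul(Ao, Ac)))
```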

The previous full experiment is repeated with a unity reference step and an impulse
and load disturbance. The closed-loop poles of the unconstrained system are
placed at

    −0.0257 ± i0.0386,  −0.2549,  −0.2549,                                    (2.198)

which are the same locations as with the PID controller except that both real
observer poles are now placed at −0.2549. The response of the unconstrained system
exposed to a reference step yields a 12% overshoot due to the relatively undamped
poles of the original PID controller design but the responses to the impulse and load
disturbances are improved. Figure 2.25 compares the unconstrained responses of
the PID controlled and the pole placement controlled double tank exposed to the
full experiment.

Figure 2.25. Time response for the unconstrained system with a pole placement controller
(solid) and with a PID controller (dashed).

Four different designs are considered:

1. Kapoor/Hanus, M_k: eig(F_M) = (−0.2549, −0.2549). Both designs place
   the poles of F_M at the location of the two observer poles.

2. Slowest poles, M_s: eig(F_M) = (−0.0257 ± i0.0386).

3. Compensated slowest poles, M_c: eig(F_M) = (−0.0744 ± i0.0294). The
   relocation of one of the observer poles to −0.2549 with the resulting faster
   dynamics suggests that instead of placing the anti-windup poles exactly at
   the slow controller poles, we compensate for the faster dynamics by moving the
   poles further to the left towards that new observer pole. Here we have almost
   doubled the frequency and damping compared to that of the slow controller
   poles.

4. No AWBT: eig(F_M) = (0, −0.5313).

In Figure 2.26 the time responses for the different designs are shown while
Table 2.9 lists the resulting normed errors for the experiment. Overall, the
compensated design performs slightly better than the slowest poles design.

Figure 2.26. Time response for the constrained system with pole placement controller for
different choices of the anti-windup gain matrix. The unconstrained response (dashed), the
constrained response (solid), and the input (dotted) are shown for anti-windup gains (a)
M_k, (b) M_s, (c) M_c, and (d) no AWBT.

             Experiments
  Design     Full exp.   Reference   Impulse   Load
  Mk         1.26        1.01        1.89      1.31
  Ms         1.12        1.41        1.00      1.00
  Mc         1.00        1.00        1.22      1.22
  no AWBT    2.04        3.06        1.15      2.08
  min(I)     0.0551      0.0841      0.0617    0.0048

Table 2.9. Normed errors between constrained and unconstrained response with pole
placement controller exposed to reference step, impulse disturbance, and load disturbance.

Conclusions

A second order double tank system with saturating input has been considered. A
second order PID controller and a second order pole placement controller with

integral action have been fitted to the system. The following conclusions regarding
anti-windup design are drawn:

- For a controller dynamically described by the matrices F, G, H, J, the matrix
  F_M = F − MH governs the states of the controller during saturation, and the
  selection of the anti-windup matrix M is of great importance to the closed-loop
  constrained response.

- Placing the poles of F_M at the location of the two slowest closed-loop poles
  seems to give superior performance if at least one of the two remaining
  observer poles is not too far away.

- When both of the two observer poles are significantly faster than the
  controller poles, the anti-windup poles should reflect this and should be moved
  to the left towards the faster poles.

- The performance improvement from a well-designed anti-windup matrix is
  significant. Notably, for small saturations, the improvement is proportionally
  largest.

2.6.3 Constrained Predictive Control of Double Tank with Magnitude and Rate Saturation

The following case study aims to illustrate how predictive control can handle
constraints in a natural and systematic way. The double tank system from section 2.6.2
is used to compare AWBT with constrained predictive control.


It is paramount that the resulting controller has integral action so that the controller
can compensate for unknown disturbances. Since the system has no integrators,
the criterion should only penalize changes in the control signal which leads to the
criterion function

    J(k) = ∑_{j=N_1}^{N_2} ( y(k+j) − r(k+j) )² + λ ∑_{j=0}^{N_u} Δu(k+j)².   (2.199)

This, however, does not guarantee a zero steady-state error if for example the
control signal is disturbed by a constant load. Hence, the system has to be augmented
with an integral state (z(k+1) = z(k) + r(k) − y(k)) in order to accomplish real
integral action in the controller.
The following controller parameters have been chosen: N_2 = 50, N_1 = 1,
N_u = 49, and λ = 2.25, which gives an unconstrained time response with similar
performance to the pole placement controller used in section 2.6.2 when exposed
to the full experiment in the said section.
Magnitude and Rate Constraints

The virtue of the predictive controller is its ability not only to handle magnitude
constraints but also rate constraints on the actuator output. If only magnitude
constraints are present, the performance of the constrained predictive controller is
similar to that of the best of the AWBT compensated controllers in section 2.6.2.
However, since AWBT feedback does not compensate for rate saturation, the predictive
controller is superior. This will be illustrated using the experiment with a reference
step, an impulse disturbance on the lower tank, and a load disturbance on the upper
tank. The input is in the interval [0, 1] and it is assumed that a full speed actuator
change from 0 to 1 or vice versa takes a minimum of 3 seconds.
Figure 2.27 shows the response for the predictive controller and the AWBT
compensated pole placement controller. As expected, the predictive controller
compensates for the rate saturation and avoids the excessive overshoots that
characterize the AWBT compensated controller. Note that the predictive controller's
ability to react on reference changes ahead of time is not used in this simulation.
The quadratic minimization problem subject to constraints was solved using qp()
from MATLAB.
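A Python equivalent of that constrained minimization can be sketched with a general constrained solver. In the code below the Euler discretisation, sampling time, shortened horizon, and per-sample rate bound are our own illustrative assumptions, not the settings of the text; the point is only the structure of Eq. (2.199) with magnitude and rate constraints on the input:

```python
import numpy as np
from scipy.optimize import minimize

# Euler-discretised double tank; Ts, N and du_max are illustrative choices
alpha, beta, Ts = 0.015, 0.05, 5.0
Ad = np.eye(2) + Ts * np.array([[-alpha, 0.0], [alpha, -alpha]])
Bd = Ts * np.array([beta, 0.0])

N, lam, du_max = 15, 2.25, 0.1

def predict(x0, u_prev, du):
    """Simulate the lower-tank level over the horizon for moves du."""
    x, u, ys = np.asarray(x0, float).copy(), u_prev, []
    for j in range(N):
        u = u + du[j]
        x = Ad @ x + Bd * u
        ys.append(x[1])
    return np.array(ys)

def gpc_step(x0, u_prev, r):
    """One receding-horizon step: minimise Eq. 2.199 subject to
    0 <= u <= 1 (magnitude) and |du| <= du_max (rate); return the
    first applied input."""
    cost = lambda du: (np.sum((predict(x0, u_prev, du) - r) ** 2)
                       + lam * np.sum(np.asarray(du) ** 2))
    u_traj = lambda du: u_prev + np.cumsum(du)
    cons = [{'type': 'ineq', 'fun': lambda du: u_traj(du)},
            {'type': 'ineq', 'fun': lambda du: 1.0 - u_traj(du)}]
    res = minimize(cost, np.zeros(N), method='SLSQP',
                   bounds=[(-du_max, du_max)] * N, constraints=cons)
    return u_prev + res.x[0]

u0 = gpc_step(np.zeros(2), 0.0, r=1.0)
print(u0)
```

In a full simulation this step would be repeated every sample with the measured (or observed, and integrator-augmented) state, which is what gives the receding-horizon controller its constraint-aware behaviour.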


Figure 2.27. Time response for constrained predictive controller (solid) and AWBT compensated
pole placement controller (dashed), both with magnitude and rate saturation in
the actuator.

Overshoot

It was described in section 2.3.5 how constraints on the output variables of a system
could be used to shape the closed-loop response. Figure 2.28 shows the response
of the constrained predictive controller exposed to step reference changes.
The actuator is magnitude and rate saturated and the system output has been
constrained such that no overshoot exists. The controller here has knowledge of
reference changes ahead of time.

2.7 Summary

This chapter addressed the design of controllers in the presence of constraints.
Three different principal strategies were explored. Section 2.2 sketched several ways
of applying AWBT compensation to an existing linear controller as a retro-fitted
remedy to input constraints.
In section 2.3 the predictive controller was outlined and its methodical approach to
constraints described. At present, the constrained predictive controller appears to


Figure 2.28. Time response for output constrained predictive controller (solid) exposed to
reference changes (dashed). The output is constrained such that no overshoot exists.

give the most flexible and versatile management of constraints including input and
output magnitude as well as rate constraints.
Section 2.4 presented nonlinear approaches characterized by being specific to input
constraints and certain classes of linear systems. It was pointed out that the
dynamic rescaling method by Morin et al. (1998a) makes the determination of the
controller bounds difficult and hence an adaptive estimation of the bounds was
presented. A nonlinear gain scheduling approach was presented. The method ensures
a constant damping ratio and stable poles during the scheduling. A simulation
study revealed good performance of the method.
In Section 2.5 a comparison of the presented strategies was made.
Section 2.6 contains case studies and investigated in particular the observer-based
AWBT approach and the predictive controller applied to a double tank system. It
was argued that the parameter selection for the AWBT compensation is of great
importance for the overall performance. Specifically, the poles of the constrained
controller should be chosen close to the slowest poles of the unconstrained closed-loop
system. The predictive controller's management of constraints is functional.
Imposing constraints on the output of the system is a straightforward and useful
way of forming the closed-loop response.

Chapter 3

Robot Controller Architecture

The subject of this chapter is robotic control systems and how to incorporate
closed-loop constraint-handling into a real-time trajectory generation and execution system.
The term robotic system is widely used to describe a diversity of automatic control
systems such as industrial robots, mobile robots (for land, air, or sea use), and
mechanical manipulators. Robotics covers a large area of applications ranging from
manufacturing tasks such as welding, batch assembly, inspection, and order picking, to
laboratory automation, agricultural applications, toxic waste cleaning, and
space or underwater automation.
The word robot originates from Czech and translates to slave or forced labour.
Originally, the robot was introduced as a human-like machine carrying out mechanical
work but robotics has since earned a more versatile interpretation. Two
modern definitions of a robotic system are given in McKerrow (1991).
The Robot Institute of America:
A reprogrammable multi-function manipulator designed to move material, parts, or specialized devices through variable programmed motions for the performance of a variety of tasks.


The Japanese Industrial Robot Association:


Any device which replaces human labour.
The first definition seems to be mainly concerned with production robots such as
mechanical manipulators whereas the second definition has a false ring of worker
displacement being the overall objective of automation. Definitions are always
debatable but in general a robot is considered a general-purpose, programmable
machine with certain human-like characteristics such as intelligence (judgment,
reasoning) and sensing capabilities of its surroundings but not necessarily human-like
appearance.
Three functionalities are sought integrated in a robotic system such that it may
move and respond to sensory input at an increasing level of competence: i) perception,
which helps the robot gather information about the work environment and itself,
ii) reasoning, which makes the robot capable of processing information and making
knowledge-based decisions, and iii) control, with which the robot interacts with
objects and the surroundings. Hence, the complexity of a robotic control system is
often large and will typically exhibit some degree of autonomy, i.e. the capability
to adapt to and act on changes or uncertainties in the environment (Lildballe, 1999)
in terms of task planning, execution, and exception handling.
The focus of this chapter is to identify control relevant constraints in the robotic
control system and discuss how and where to handle these. From a control point of
view, some generic elements of a robot control system such as motion controllers,
trajectory planning and execution, task planning, sensors, and data processing are
described. The problems with off-line trajectory planning and non-sensor-based
execution are discussed and suggestions for on-line trajectory control are made. The
emphasis is put on the layout of such a system while the actual control algorithms
to some degree are investigated in chapter 4.

3.1 Elements of a Robot Control System


Figure 3.1 shows a typical structure of a robotic control system consisting of the
three functionalities perception, reasoning, and control as mentioned before. As
indicated in the figure with the dashed and overlapping line, reasoning in the robot
may take a more distributed form than control and perception. A great part of this
chapter is concerned with integrating path planning and trajectory control which


[Figure: block diagram with a Reasoning layer (management, supervisoring, learning,
planning, obstacle avoidance), a Perception layer (modelling, localization, sensor
fusion, sensors), and a Control layer (trajectory generation and execution, motion
control, actuators), closed through the environment by the robot motion.]

Figure 3.1. General robotic control system.

clearly advance some of the intelligence to the controller. Likewise, sensors may include some intelligence in terms of for instance self-calibration and fault-detection.
Each functionality can be decomposed into a set of functional subsystems where
some are mechanical/electrical such as sensors and actuators, some are control
algorithms such as motion control, trajectory execution, and sensor fusion, and
others are integration of knowledge and high level task generation and decision
making such as planning and supervisoring. A short description is given:
Actuators
Robot motion is accomplished by electrical, hydraulic, or pneumatic actuators:
devices such as motors, valves, piston cylinders, and chain
drives.
Sensors
Sensors provide measurements of variables within the robot (internal sensors) and from the environment (external sensors) used in various tasks
such as controlling, guidance, and planning. The sensing devices include
encoders, potentiometers, tachometers, accelerometers, strain gauges, laser



range scanners, ultrasonic sensors, sonars, cameras, and GPS (Global Positioning System) receivers.

Sensor fusion, data processing, and modelling


An important task in robotics is the analysis of sensed data. Through data
processing and sensor fusion the robot builds an interpretation of the world
and its own state. Some sensor systems such as vision require processing
for extraction of relevant information. The purpose of sensor fusion is to
increase the consistency and robustness of data, to reduce noise and incompleteness of sensory output, and in particular, to allow different sensors to
supply information about the same situation. The tool is the combination of signals from
several sensors, possibly with different time bases, with a priori knowledge. The goal is to model relevant features of the environment and to locate
the robot within it (localization).
Motion controllers
The motion controllers command the robot's actuators such that the intended motion is achieved. Control is concentrated within three different approaches: 1) joint space control where references are joint coordinates, 2)
task space control where references are Cartesian coordinates, and 3) force
control.
Trajectory generation and execution
Based on a planned motion the trajectory generator converts the path into
a time sequence of robot configurations. This involves specifying velocity
and acceleration profiles. The configurations are translated into motion controller references and are consequently fed to the controllers. In sensor-based
trajectory control (e.g. visual servoing) the references are generated online.
Reasoning
Technically, this is where the main part of a robotic system's intelligence is
located. The objective is to enable the robot to plan and supervise various actions and in particular to react to changing and uncertain events in the robot's
work environment and to some degree in the robot itself (fault-tolerance).
constraints acting on the system.


3.1.1 Constraints

Constraints in a complex system such as a robot are numerous, ranging from computer
power and actuators to payload capacity and safety concerns. Most relevant
for the control system are constraints associated with dynamic conditions during
operation. These include:

- Saturation in actuators.

- State and system output constraints such as minimum/maximum velocities
  and accelerations, mechanical stops, obstacles, and room dividers.

- Constraints on reference velocities and accelerations. Basically, these originate
  from the actuators but they may also be imposed by the operator. Normally,
  they are specified and considered off-line during the trajectory planning.
  However, on-line trajectory controllers must take these constraints into
  account.

The input, state, and output constraints are in general difficult to handle during the
design of the control system and even during the planning phase of a new motion
task. Thus on-line compensators should be present.
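A minimal on-line compensator of this kind is a limiter that clips both the magnitude and the rate of a commanded reference before it reaches the motion controllers. The sketch below (all limit values are illustrative) turns an infeasible step into a feasible ramp:

```python
def limit(prev, target, u_min, u_max, rate_max, dt):
    """Clamp a commanded reference to magnitude and rate constraints."""
    step = max(-rate_max * dt, min(rate_max * dt, target - prev))
    return max(u_min, min(u_max, prev + step))

# A step command of 1.0 becomes a ramp respecting a 0.2/s rate limit
r, out = 0.0, []
for _ in range(10):
    r = limit(r, 1.0, u_min=0.0, u_max=1.0, rate_max=0.2, dt=0.5)
    out.append(round(r, 2))
print(out)
```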

3.2 Trajectory Generation and Execution


In robotics a common task is motion along a predefined path. A robot solves tasks
and missions by means of executing a sequence of motions in a workspace. Due to
limited actuator capabilities and mechanical wear from rough and jerky motion it
is disadvantageous and unrealistic to reach an arbitrary target position within one
sample interval, equivalent to applying a step reference. Often the motion from one
position to another must be controlled in terms of intermediate positions, velocities,
and accelerations in space. Thus, the motion of the robot should be planned as a
function of time known as trajectory generation. A trajectory represents the time
history of positions, velocities and accelerations desired to achieve a particular
motion.
A number of challenges exists in specifying trajectories and applying these to a
robot control system. Typically, trajectories are generated in an off-line fashion.
This calls for a structured deterministic and static environment and a perfect modelling of the robot itself such that the motion can be planned in a collision free


manner. Having calculated an off-line trajectory, it is executed by feeding the references to the robot's joint or wheel controllers at a certain constant rate. The robot is assumed capable of following the references at the necessary rate. This implies that the actuators must not saturate during execution. Usually, this is fulfilled by making the references conservative with regard to the actuators' limits, i.e. the motion is unnecessarily slow and the path is smooth.
In many applications, though, speed is a cost parameter and it is natural to look
for time optimal solutions that employ the full range of the actuators. This points
to appropriate handling of saturating actuators. Further, environments are rarely
static and a priori known but rather dynamic and only gradually becoming known
as the robot explores or carries out instructions. Based on these considerations, this
section looks at real-time trajectory generation and execution schemes allowing
for more intelligent robots able to react to changing work conditions or uncertain
events (e.g. new obstacles) by controlling the trajectory generation and execution
in a closed-loop fashion either by modifying the trajectory on-line or aborting the
execution and commence replanning.
Since some events in their nature are unpredictable and may appear at any given
time, an integration of generation and execution is necessary such that the robot
in a closed-loop fashion may react to sensed uncertain events without aborting the
motion and embarking on replanning or switching to some contingency plan.
Given saturating actuators, path following and the desired velocity cannot both be obtained: either the position is maintained at lower speed, the velocity is maintained while following a different path, or some mix of the two results. In such a situation it is important to know exactly whether the position or the velocity is maintained.
In case of obstacles staying on the collision-free path has first priority while the
speed at which the path is followed is less important. In other applications time
is money and speed has first priority. Therefore, we need a way to specify/control
how saturating actuators will influence the actual trajectory of the robot.
The following discusses some considerations regarding trajectory generation. Some
existing schemes are presented.

3.2.1 Design Considerations


In trajectory generation and motion planning, the robot (or the system designer/operator) has to consider a number of design criteria. Some of these are:

- Path optimization with respect to
  - minimum time
  - minimum energy
  - sufficient actuator torque to handle modelling errors and disturbances.
  The solution should take into account actuator constraints as well as system and controller dynamics.
- Task-induced path constraints. For example, a welding robot must maintain the torch in contact with the material and therefore the robot is constrained to track the path.
- Collision free motion/obstacle avoidance. The obstacles can be static, i.e. all obstacles are initially known and are fixed in location, or they can be dynamic, i.e. known obstacles may move over time and new obstacles may become known at run-time as new sensor information is available.
- Smooth motion with no discontinuities in position and velocity. This is important in cases where the trajectory is recalculated without stopping the motion. See Craig (1989) for details on smooth trajectories.
- Unknown or moving target position. Real-time generation is necessary.
- Maximum velocity or acceleration in order to minimize hazardous motion and mechanical wear.

The relevance of these criteria depends on the robot's application and the structure of the work space. For instance, for an AGV (Autonomous Guided Vehicle) used in a warehouse facility, collision free motion is of major concern and temporary deviations from the nominal path are acceptable, whereas for a welding robot the path tracking has first priority.
Some of these objectives can be conflicting. For example, in case of saturating actuators the robot must choose whether to follow the path as well as possible or whether to keep the velocity and get to the end target in minimum time. This, of course, depends on the application, but through the closed-loop trajectory control system it should be possible to specify the conditions and the choice.


3.2.2 Specification of the Trajectory


Usually, the trajectory is either specified in the joint space or in the Cartesian space, also known as the configuration space. In the joint space the references are functions of the joint angles for a robot arm. For example, for a mobile robot the references are either functions of the left and right wheel velocities (vl, vr) or the angular and forward velocities (v, ω). The relation is given as (see Section 4.3 for more details):

ω = (vr - vl) / b,    v = (vr + vl) / 2,
where b is the wheel base. In the Cartesian space the references are functions of
the two or three dimensional path which the end effector (or mobile robot) must
follow.
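The wheel/body velocity relation above is straightforward to implement. The following is a minimal sketch (the wheel base value and velocities are assumed for illustration, not taken from the thesis):

```python
# Conversion between wheel velocities and body velocities for a
# differential-drive (unicycle) robot: omega = (v_r - v_l)/b, v = (v_r + v_l)/2.

def wheels_to_body(v_r: float, v_l: float, b: float) -> tuple:
    """Convert right/left wheel velocities to forward and angular velocity."""
    v = (v_r + v_l) / 2.0
    omega = (v_r - v_l) / b
    return v, omega

def body_to_wheels(v: float, omega: float, b: float) -> tuple:
    """Inverse mapping: (v, omega) back to (v_r, v_l)."""
    return v + omega * b / 2.0, v - omega * b / 2.0

if __name__ == "__main__":
    b = 0.5                         # wheel base [m], assumed value
    v, omega = wheels_to_body(0.6, 0.4, b)
    print(v, omega)                 # forward speed and turn rate
```

The two mappings are exact inverses of each other, which is convenient when motion controllers take wheel velocities while trajectory generators produce (v, ω) references.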
Both approaches have advantages. Joint space schemes are usually the easiest to
compute as the motion controllers take joint space variables as references but it
can be difficult to generate smooth straight motions through space. Most tasks are
defined in terms of a path in the configuration space. Furthermore, the environment
including obstacles is described with respect to this frame. Cartesian space generated references must be translated to joint space references for the controllers. This
involves inverse kinematics. The main problem here is redundancy which means
that one point can be reached with different joint configurations, see Craig (1989)
or McKerrow (1991) for more details. For mobile robots with nonholonomic constraints, stabilization of the robot to a given posture in the configuration space in particular makes generating the necessary joint references difficult.

3.2.3 Existing Schemes for Trajectory Control


Traditionally, trajectories are generated off-line. Based on initial and desired target positions and velocities and the time constraints on the motion, the history of
references can be calculated and fed to the robot controllers at a given rate. This approach assumes a static structured environment with non-saturating actuators, such
that the robot has no problem in following the reference. In case of unexpected
events the motion is stopped and replanned. Hence, from the time of planning, the execution is open-loop, merely the playback of a predefined plan.


A number of things suggest sensor-based on-line trajectory generation. In case of an unknown or moving target position the trajectory should be generated at run-time as the robot closes in on the target and the robot's position relative to the target
becomes known. In Corke (1996) this position problem is considered as a control task: perceived errors generate a velocity command (steering) which moves the robot toward the target. This is similar to steering a car. It is not necessary to know the exact position of the target provided a sensor can observe relative position information (as vision can). An example of this approach is Morin et al. (1998b) or
Tsakiris et al. (1996) where the stabilization of a mobile robot to a given posture
is based on relative positional information. The control system generates steering
commands like forward velocity and turning rate. Such an approach is also the
subject of section 4.3.
Fast motion requires utilization of the maximum allowable actuator torque. At the limit of the actuators' capacities there is no margin left to handle modelling errors or disturbances and the robot may as a result deviate from the nominal path. Dahl (1992) suggests a method for modifying the velocity profile when saturation occurs. Off-line, a trajectory generator calculates a history of references minimizing the time for the motion given actuator constraints and system dynamics. This results in a sort of bang-bang control: full acceleration followed by full deceleration for at least one of the torques. At run-time, saturation due to modelling errors or disturbances will cause a Path Velocity Controller (PVC) to lower the velocity along the
trajectory. Basically, an algorithm scales the trajectory time such that the robot can
maintain the path. The scheme is depicted in Figure 3.2.


Figure 3.2. The path velocity controller (PVC). The feedback from the controller to the
PVC module modifies the reference update rate upon saturating actuators.
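The core of the PVC idea, slowing down the trajectory timebase when saturation is reported, can be sketched as follows. This is a simplified illustration, not Dahl's actual algorithm; the function names, the slow-down factor alpha, and the reference table layout are assumptions:

```python
# Sketch of the path-velocity-controller idea: the trajectory is indexed by an
# internal trajectory time s that normally advances at real time, but is
# scaled down whenever the motion controller reports actuator saturation, so
# the reference "waits" for the robot instead of running ahead.

def pvc_step(s: float, dt: float, saturated: bool, alpha: float = 0.5) -> float:
    """Advance the trajectory time s by one sample of length dt.

    alpha < 1 reduces the advance rate while any actuator saturates.
    """
    rate = alpha if saturated else 1.0
    return s + rate * dt

def reference(trajectory, s: float, dt: float):
    """Look up the reference for trajectory time s in a sampled table."""
    i = min(int(s / dt), len(trajectory) - 1)
    return trajectory[i]

if __name__ == "__main__":
    s, dt = 0.0, 0.25
    for k in range(4):
        s = pvc_step(s, dt, saturated=(k >= 2))  # saturation in the last samples
    print(s)   # trajectory time lags real time: 0.75
```

Because the reference table is consumed by trajectory time rather than wall-clock time, the planned path is preserved and only the speed along it is reduced.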


Tarn et al. (1994) and Tarn et al. (1996) suggest a path-based approach where the time-dependency in the trajectory is replaced with a path-dependency, making the trajectory event-based. The execution of the trajectory follows the actual position of the robot along the path. This way a robot is capable of avoiding an obstacle without replanning the path and without having to catch up with a reference far ahead of it. The trajectory waits for the robot, so to speak.

3.3 General Architecture


The general architecture presented in this section is based on the discussion in the
previous section.
Figure 3.3 shows a traditional planning and control scheme. It consists of a standard feedback loop around the controlled system. The reference to the controller is provided by the execution module which simply, at some predefined rate, picks the next reference in the trajectory table. The planning module calculates the time history of references. Occasionally, there can be feedback from some sensor system reporting events like mission accomplished or replanning needed.
This scheme has a number of drawbacks. First of all, since controller saturation
cannot be detected, the scheme relies on the planned motion being sufficiently
nice. Otherwise, we could experience windup in the reference, so to speak. That is, the trajectory executor continues to supply new references ahead of the robot's

Figure 3.3. Traditional off-line trajectory generation and non-sensorbased execution.


actual position, which could keep the actuator in saturation and cause the tracking error to grow. Therefore, we need an extra feedback loop from the controller to the execution module in order to be able to detect saturation and to act on it, for example by slowing down the execution rate.
Furthermore, we should add a feedback loop from some sensor system to the execution module to recalculate/replan/adjust the reference depending on how close the robot is to the target position or path. This can be seen as on-line trajectory control. Finally, the planning module must have status information from the system and environment regarding the mission, obstacles, etc.
These suggestions lead to the scheme in Figure 3.4. We could further introduce
a feedback from the execution module directly to the planning module. However,
this is already accomplished through the sensor system.
The following describes the two main modules (generation and execution) in the
general trajectory planning and executing architecture (Figure 3.4).


Figure 3.4. Overview of general generation/execution scheme for sensor- and event-based
trajectory control.


Trajectory Generation
This module is traditionally an off-line trajectory planner. In case of new information regarding target or obstacles the module is re-executed.
Initialization: task criteria
Input: observations of the environment (obstacles), initial and target position, velocities, mission status
Action: calculates a smooth history of references or simply defines the target of the motion along with constraints on the trajectory
Output: history of references, target
Sampling time: event-based
Sampling: occurs when mission accomplished or replanning needed

Execution
For each controller sampling instant this module provides a reference to the controllers based on the state of the system and the controllers.
Initialization: history of references
Input: saturation in controllers, motion observations
Action: changes the reference itself or the update rate based on the system state and saturation in actuators. May switch to semi-autonomous control while for instance passing an obstacle
Output: reference to motion controllers
Sampling time: similar to the sampling time in the controllers. May be interrupted by the planning module
Sampling: constant


3.3.1 Considerations
Some bumpless transfer problems exist when using the trajectory control scheme described above. Upon replanning a new trajectory in the trajectory generation module it is desirable to have a smooth transfer from the old trajectory to the new one. This is easily done by stopping the robot before switching, but this approach could be too slow, and some trajectory planning schemes may not allow zero velocities (for example a path following unit for a mobile robot). A change in the control task (for example from path following to posture stabilization for a mobile robot) may likewise cause unwanted transients, and finally, the sampling of the execution module can change from one rate to another due to, for example, switching from encoder readings to camera measurements.
The extra feedback loops in the trajectory control scheme prompt some theoretical questions (Tarn et al., 1996):
1. How do the loops affect the stability of the system?
2. How do the loops affect the performance of the system?
3. How does one design such a system to achieve a desired performance?
The answer to or treatment of these questions depends to a large extent on the
actual system and its application. Chapter 4 considers how to handle constraints in
a path following mobile robot.

3.4 Architecture for Mobile Robot


This section deals with the layout of a trajectory control system for mobile robots. First, we identify some classic motion control tasks in mobile robotics. Table 3.3 shows four different ways of controlling a robot.
In general, the task of tracking has time constraints while the task of following is unconcerned with the time base. For both of these the velocities must be nonzero, while in stabilization the velocities tend to zero. Depending on the motion task, different strategies for dealing with constraints and unexpected events must be applied.


Point tracking: A given fixed point P on the mobile robot tracks a moving reference position (xr(t), yr(t)). The orientation error is unimportant.

Posture tracking: The robot tracks a virtual robot reference position (xr(t), yr(t)) and orientation θr(t) with nonzero velocities.

Path following: Given a path in the xy-plane and the mobile robot's forward velocity v, assumed bounded, the robot follows the path using only the angular velocity of the robot ω as control signal.

Posture stabilization: The posture (position and orientation) of the robot is stabilized at a given reference; final velocities are zero.

Table 3.3. Motion control tasks in mobile robotics.

Figure 3.5 shows the layout of a trajectory control scheme for motion control of a mobile robot. The robot consists of two motors for driving the wheels¹ and one or two castor wheels. The motors are controlled by the motor controllers. The robot is equipped with a variety of sensors such as encoders, camera, ultrasound, and laser range scanner. A sensor fusion and data processing module extracts information from the sensors. The intelligence of the robot is placed in the task and trajectory planning module and the execution module. The following lists details on the different modules but of course this is very dependent on the robot, the application, and the nature of the environment.
Trajectory generation (task and path planning)
This module generates a specific trajectory needed to carry out a motion based on criteria such as time optimization, initial and target positions, limits on velocities, etc. External events such as mission accomplished may call for a new motion or a replan. Special care should be taken to make the transfer between two trajectories smooth and bumpless. The output from the module depends on the application. In case of a path following motion it would be

¹Some mobile robot configurations use more than two motors but here we assume a so-called unicycle configuration with two motors and one or two castor wheels. See de Wit et al. (1996) for a detailed discussion of different configurations.



Figure 3.5. General overview of sensor- and event-based trajectory control on mobile
robots.

the path and the velocity profile. In case of a posture stabilization the output would just be the target position or, in case of an inaccurately known target position, an initial value of the target to search for.
Trajectory execution
This module is part of the real-time closed-loop control system and supplies references to the motor controllers. The reference update rate or the references themselves may be altered in case the actuators saturate or in case of updated information on the target position and obstacles (which may change the path to avoid collisions). In some cases exceptions will have to go through the trajectory generation module or even further up the control system hierarchy (task planning module, teleoperation, etc.) for a complete replanning, but the idea is to have the execution module handle basic exceptions itself. An example is obstacle avoidance. During the actual avoidance, the robot may be semi-autonomously teleoperated or running an autonomous recovery plan but there is no need to abort the original plan since the executor will track the progress of the avoidance and will be ready to continue the motion task. The



module is sampled with the same sampling frequency as the motor controllers but may be interrupted by the trajectory generator.

Motor controllers
Here, the low-level control signals to the robot's motors are calculated. This may be with or without some anti-windup strategy. Typically, the references are angular and linear velocities (or equivalently: left and right wheel velocities). The controllers take readings from the encoders as input.
Sensor fusion, data processing
Sensor outputs are processed in this module. This may include image processing, data association and sensor fusion. Outputs may be absolute position
estimates, relative positions to targets, tracking errors (distance and orientation error), environment information such as obstacle positions, etc.
The benefits from sensor-based trajectory control are obvious but the realization of such an architecture is difficult. This treatment of trajectory control in mobile robots continues in the case study in chapter 4, which will in particular look at some specific solutions to path following and posture stabilization.

3.5 Summary
This chapter has discussed sensor-based closed-loop trajectory planning and execution and illustrated an architecture for such a system. In section 3.1 the basis was laid in terms of a description of the elements of a robot control system. From this, trajectory generation and execution were considered in section 3.2. Pros and cons of off-line versus on-line trajectory control were pointed out. Section 3.3 summed up the discussion in a general architecture for trajectory control. Finally, section 3.4 focused on a mobile robot and special considerations regarding such a vehicle.
The chapter has, however, not provided guidelines for the actual design of the algorithms used in the new trajectory feedback loops. The following chapter looks to some degree at that problem.

Chapter 4

Case Study: a Mobile Robot

Mobile robotics has in the past decades been a popular platform for testing advanced control solutions covering areas like motion control, sensor fusion and management, navigation, and map building, to name a few. This interest is triggered by the countless challenges encountered by researchers when implementing mobile robotic applications. The desired autonomous operation of the vehicle necessitates multiple sensors, fault-tolerant motion control systems, and reasoning capabilities. The mobility itself induces new annoyances (or stimuli, depending on how you look at it) since the environment is most likely to be dynamic and partially unstructured. This calls for collision avoidance and extended perception skills, and makes task planning and execution difficult.
Where the discussion in chapter three was a fairly general description of a robotic system and in particular trajectory control, this chapter goes into details with a number of practical problems and solutions in mobile robotics. This includes design, implementation, and experimental verification. The chapter considers the following aspects:
Auto-calibration of physical parameters
Section 4.2 investigates how on-line calibration of parameters such as the encoder gains and the wheel base can reduce the drifting of the posture (position and orientation)¹ estimate and extend the period of time where the robot for example can perform blind exploration, that is, manoeuvering without external position information. The calibration is based on the existing sensor package and appropriate filters.

Path following with velocity constraints
Section 4.3 applies a receding horizon approach to the path following problem. The controller presented is capable of anticipating non-differentiable points such as path intersections and taking the necessary precautions, along with fulfilling velocity constraints on the wheels.
Posture Stabilization with Noise on Measurements
The nonholonomic constraint on a unicycle mobile robot necessitates particular motion control algorithms, and for posture stabilization these algorithms turn out to be especially sensitive to noisy measurements. Section 4.4 investigates the noise's influence on the stabilization and suggests a nonlinear time-varying filter that helps reduce the noise impact.
But first, a description of the mobile robot.

4.1 The Mobile Robot

The mobile robot of interest is a so-called unicycle robot which has two differential-drive fixed wheels on the same axle and one castor wheel, see Figure 4.1. A robot with this kind of wheel configuration has an underlying nonholonomic property that constrains the mobility of the robot in the sideways direction. This adds significantly to the complexity of the solutions to various motion control problems for the robot.
A mobile robot of this type is currently in use at the Department of Automation and is equipped with a sensor package consisting of a camera, among other things capable of detecting artificial guide marks for positional measurements, optical encoders on the driving wheels measuring wheel rotations, and a laser range scanner providing distance measurements with a field of view of 180°. A thorough

¹The position and orientation of a vehicle is in the following referred to as the posture. In the literature, the term pose is also used.


(a) Posture and velocity definitions. (b) The department's self-contained vehicle.

Figure 4.1. The mobile robot.

overview of sensors and sensor systems for mobile robots is found in Borenstein
et al. (1996).
The posture of the mobile robot is given by the kinematic equations

ẋ = v cos θ
ẏ = v sin θ
θ̇ = ω,        (4.1)

where (x, y) indicates the position of the robot center in the Cartesian space and θ is the orientation or heading angle of the robot (the angle between the x-axis and the forward velocity axis of the robot). The inputs are the heading or forward velocity v (defined as v = ẋ cos θ + ẏ sin θ) and the angular velocity ω. Combined, the triplet (x, y, θ)^T defines the posture of the robot. See Figure 4.1 for details.
The kinematic model (4.1) is easily sampled with the assumption of constant inputs v and ω during the sampling periods,


x(k+1) = x(k) + T v(k) [sin(T ω(k)/2) / (T ω(k)/2)] cos(θ(k) + (T/2) ω(k))
y(k+1) = y(k) + T v(k) [sin(T ω(k)/2) / (T ω(k)/2)] sin(θ(k) + (T/2) ω(k))
θ(k+1) = θ(k) + T ω(k),        (4.2)

where T denotes the sampling time. The correction factor sin(T ω(k)/2) / (T ω(k)/2) is often set to 1, which is justified by the sampling rate being high compared to the angular
velocity. The model (4.2) is referred to as the odometry model since an odometer
is known as a device that records the accumulated distance traveled by a vehicle.
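A minimal implementation of the sampled model (4.2), including the correction factor, might look as follows; the sampling time and the input values are assumed for illustration only:

```python
# One-step update of the sampled kinematic (odometry) model (4.2).
from math import cos, sin

def odometry_step(x, y, theta, v, omega, T):
    """Propagate the posture one sample, with constant v and omega over T."""
    h = T * omega / 2.0
    # sin(h)/h correction factor; close to 1 when the sampling rate is high
    # compared to the angular velocity, which is why it is often set to 1.
    corr = sin(h) / h if abs(h) > 1e-9 else 1.0
    x += T * v * corr * cos(theta + h)
    y += T * v * corr * sin(theta + h)
    theta += T * omega
    return x, y, theta

if __name__ == "__main__":
    x = y = theta = 0.0
    for _ in range(100):                  # 1 s of motion at 100 Hz (assumed)
        x, y, theta = odometry_step(x, y, theta, v=0.5, omega=0.0, T=0.01)
    print(round(x, 3), round(y, 3))       # straight-line motion of about 0.5 m
```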
The control algorithms discussed in the following are velocity reference generators that generate references for underlying velocity motor controllers. The dynamics of the motors and the controllers are presumed adequately fast compared to the dynamics of the generators such that, from the generator's point of view, it can be assumed that the vehicle follows the references instantaneously. In designing the velocity generators the dynamics of the motors and the motor controllers are hence neglected.
This, of course, is not the situation in real life and it may in fact cause unexpected limit-cycles in the resulting trajectories or lead to instability. If this is the case, the so-called backstepping control methodology might be worth considering where, in short, the kinematics are backstepped into the dynamics of the robot. The first step is to design the velocity generator neglecting the faster dynamics of the motors and the robot. In the second step the unrealistic assumption of instantaneous reference tracking is reconsidered and low-level motor controllers are designed. See for example Fierro and Lewis (1997) for such an approach.

4.2 Localization with Calibration of Odometry


A more thorough discussion of the following has been published in Bak et al.
(1999).
In mobile robotics knowledge of a robot's posture is essential to navigation and path planning. This information is most often obtained by combining data from two measurement systems: (1) the well-known odometry model (Wang, 1988) based on the wheel encoder readings and (2) an absolute positioning method based on sensors such as a camera (Chenavier and Crowley, 1992; Murata and Hirose,


1993), a laser range finder (Cox, 1989) or ultrasonic beacons (Kleeman, 1992). The
tool for fusing data is usually a Kalman filter (Larsen, 1998; Bak et al., 1998).
The need to combine two measurement systems, a relative and an absolute one, comes from the use of the efficient, indispensable odometry model which has the unfortunate property of unbounded accumulation of errors. Typically, these errors are caused by factors such as irregular surfaces, inaccurate vehicle-specific physical parameters (wheel radii, wheel base, gear ratios), and limited encoder sampling and resolution. See Borenstein and Feng (1995, 1996) for a thorough discussion of the subject. Obviously, it is beneficial to make either of the measurement systems as accurate as possible. Given a reliable, precise odometry system the robot can increase operation time in environments where absolute measurements for some reason are lacking, and fewer, often time-consuming, absolute measurements are needed, allowing for enhanced data processing or reduced costs. Likewise, precise absolute measurements provide a better and faster correction of the posture and again fewer measurements are needed.
Borenstein and Feng (1996) describe a procedure to calibrate the physical parameters in the odometry model. The method uses a set of test runs where the mobile robot travels along a predefined trajectory. Given measurements of initial and final postures of the robot, a set of correction factors is determined. The procedure has proven to be precise and straightforward to carry out in practice but it is also found to be rather time-consuming as the suggested trajectory for the experiment is 160 m long (10 runs of 16 m). Furthermore, it relies on precise measurements as the experiment only gives 10 initial and 10 final measurements from which the calibration information is extracted.
In Larsen et al. (1998) an augmented extended Kalman filter estimates the three
physical parameters in the odometry model along with the posture estimation. This
allows for a more automated calibration procedure which can track time-varying
parameters caused by for example payload changes. Unfortunately, the observability of the parameters is poor and the calibration quality relies strongly on the chosen
trajectory.
The following presents a two step calibration procedure based on the filter in Larsen
et al. (1998). Step one determines the average encoder gain while step two determines the wheel base and the left (or right) encoder gain during which the average
value is maintained. This gives far better observability of the parameters and still
allows for automation of the procedure.
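As a back-of-the-envelope illustration of what step one determines (not the filter-based procedure itself), the average encoder gain can be recovered from a straight run of known length, since the translational displacement is then approximately the average gain times the mean of the accumulated encoder readings. All numbers below are made up:

```python
# Illustrative computation behind the "average encoder gain" idea: on a
# straight run of known length d, d ≈ k_avg * (phi_r + phi_l) / 2, so the
# average gain follows from the accumulated encoder readings.

def average_encoder_gain(distance: float, phi_r: float, phi_l: float) -> float:
    """Average encoder gain [m/pulse] from a straight run of known length [m]."""
    return 2.0 * distance / (phi_r + phi_l)

if __name__ == "__main__":
    # Hypothetical run: 5 m travelled, ~50 000 pulses accumulated per wheel.
    k_avg = average_encoder_gain(distance=5.0, phi_r=49_900.0, phi_l=50_100.0)
    print(k_avg)   # on the order of 1e-4 m/pulse
```

In the thesis's procedure this quantity is estimated by a Kalman filter rather than from a single measured run, but the underlying relation is the same.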


4.2.1 Odometry and Systematic Errors


Let the posture of a mobile robot's center with respect to a given global reference system be described by the state vector z = (x, y, θ)^T. Given a two-wheeled mobile robot with encoders mounted on each motor shaft of the two drive wheels, the odometry model is useful to describe the propagation of the state. Based on (4.2) we have

x(k + 1) = x(k) + u1 (k) cos ((k) + u2 (k))


y(k + 1) = y(k) + u1 (k) sin ((k) + u2 (k))
(k + 1) = (k) + 2u2 (k);

(4.3)

where input $u_1$ equals the translational displacement while input $u_2$ equals half the rotational displacement, both since the previous sample. The inputs are functions of the encoder readings and the physical parameters of the robot. The definitions are
$$
u_1(k) = \frac{k_r \Delta\phi_r(k) + k_l \Delta\phi_l(k)}{2}, \qquad
u_2(k) = \frac{k_r \Delta\phi_r(k) - k_l \Delta\phi_l(k)}{2b},
\qquad (4.4)
$$
where $\Delta\phi_r(k)$ and $\Delta\phi_l(k)$ denote the encoder readings, $k_r$ and $k_l$ the gains from the encoder readings to the linear wheel displacements for the right and left wheel, respectively, and $b$ the wheel base of the vehicle. The encoder gains $k_r$ and $k_l$ are defined as
$$
k_r = \frac{r_r}{h_r N_r}, \qquad k_l = \frac{r_l}{h_l N_l} \quad \text{[m/pulse]}
\qquad (4.5)
$$
with $h$ the encoder resolution [pulses/rad], $N$ the gear ratio from motor shaft to wheel, and $r$ the wheel radius.

From the odometry model the accumulation of errors is easily seen: any uncertainty in $u_1$ or $u_2$ is added to the previous posture estimate and will over time cause drift.
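The update (4.3)-(4.4) is easily sketched in code. The numerical values below are illustrative only, not the actual parameters of the robot.

```python
import math

def odometry_step(x, y, theta, dphi_r, dphi_l, k_r, k_l, b):
    """One odometry update following (4.3)-(4.4).

    dphi_r, dphi_l: encoder readings since the previous sample [pulses];
    k_r, k_l: encoder gains [m/pulse]; b: wheel base [m].
    """
    u1 = (k_r * dphi_r + k_l * dphi_l) / 2.0        # translational displacement
    u2 = (k_r * dphi_r - k_l * dphi_l) / (2.0 * b)  # half the rotational displacement
    return (x + u1 * math.cos(theta + u2),
            y + u1 * math.sin(theta + u2),
            theta + 2.0 * u2)

# Equal wheel displacements give straight-line motion; a small mismatch between
# the two gains makes the estimated posture drift sideways over time.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = odometry_step(*pose, dphi_r=50, dphi_l=50,
                         k_r=21.7e-6, k_l=21.7e-6, b=0.54)
```

Running the same loop with `k_l` one percent smaller makes the heading, and hence the lateral position, drift without bound, which is exactly the systematic error discussed next.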
A number of potential error sources can be identified from the model. The encoder readings may not correspond to the actual displacement of the robot for reasons such as uneven floors, wheel slippage, limited sampling rate, and limited resolution. These errors are stochastic in nature. The calculation of the inputs $u_1$ and $u_2$ may also be erroneous if, for example, the physical parameters (the wheel base and the left/right encoder gain) are incorrectly determined. The
4.2 / Localization with Calibration of Odometry


influence from such error sources is systematic (correlated over time and not zero-mean) and therefore very difficult to deal with in a Kalman filter, which is often used for posture estimation in mobile robotics. The good news about systematic errors is that they originate from physical parameters, so their effects can be reduced by proper calibration.
Now, let a bar denote the true physical value as opposed to the nominal value. The following models the uncertainties in the three physical parameters in the odometry model. The wheel base $b$ is modelled as
$$
\bar{b} = \delta_b b, \qquad (4.6)
$$
and the left and right encoder gains as
$$
\bar{k}_r = \delta_r k_r, \qquad \bar{k}_l = \delta_l k_l, \qquad (4.7)
$$
where $\delta_b$, $\delta_r$, and $\delta_l$ are correction factors.


Now, define the average value of the true encoder gains as
$$
\bar{k}_a = \frac{\bar{k}_r + \bar{k}_l}{2}. \qquad (4.8)
$$
As will be seen later, this value is easily calibrated by a simple experiment and we can therefore impose the constraint of a constant average gain on the estimation of the encoder gains. From (4.7) and (4.8) we get
$$
\delta_l = \frac{2\bar{k}_a - \delta_r k_r}{k_l}. \qquad (4.9)
$$
This implies that initial knowledge of $\bar{k}_a$ allows us to estimate only two correction factors, namely $\delta_b$ and $\delta_r$.
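A quick numerical check of the constraint; the nominal gains and the right-wheel correction factor below are made-up values for illustration.

```python
def delta_l_from_constraint(k_a_bar, delta_r, k_r, k_l):
    """Left correction factor implied by (4.9), given the calibrated average
    gain k_a_bar and the right correction factor delta_r."""
    return (2.0 * k_a_bar - delta_r * k_r) / k_l

# Illustrative nominal gains [m/pulse] and a hypothetical calibration result:
k_r, k_l = 21.7e-6, 21.6e-6
k_a_bar = 1.005 * (k_r + k_l) / 2.0   # calibrated average (here: 0.5% scaling error)
delta_r = 1.002
delta_l = delta_l_from_constraint(k_a_bar, delta_r, k_r, k_l)

# The average-gain constraint (4.8) then holds by construction:
assert abs((delta_r * k_r + delta_l * k_l) / 2.0 - k_a_bar) < 1e-15
```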

4.2.2 Determination of Scaling Error


In this section we discuss ways of measuring the average encoder gain $\bar{k}_a$ by means of a simple extended Kalman filter, allowing the calibration to be done on-board the robot. Borenstein and Feng (1996) model $\bar{k}_a$ by means of a scaling error factor that relates the true value to the nominal one. We adopt this approach in the following way:

$$
\bar{k}_a = \frac{\bar{k}_r + \bar{k}_l}{2} = \delta_a \frac{k_r + k_l}{2}, \qquad (4.10)
$$
where $\delta_a$ is the correction factor to be determined.


The scaling error $\delta_a$ is often measured by means of a straight-line motion experiment (von der Hardt et al., 1996; Borenstein and Feng, 1995). Let the robot travel in a straight line over a given distance measured by means of the odometry model; call this distance $s_{odo}$. Now, measure the actually driven distance $s_{actual}$ with a tape measure or similar and calculate the scaling error as $\delta_a = s_{actual}/s_{odo}$. Of course, this experiment relies on the vehicle's trajectory being close to the presumed straight line. One should pay attention to two things:
1. The wheel controllers' ability to perform straight-line motion, as $s_{odo}$ otherwise will be too large.
2. The uncertainties on the encoder gains should not be too great. Iterating the calibration procedure can reduce this error source.
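The core of the manual experiment is just this ratio; a minimal sketch with illustrative distances and gains:

```python
def scaling_error(s_actual, s_odo):
    """Scaling correction factor delta_a = s_actual / s_odo from a
    straight-line run (tape-measured vs. odometry distance)."""
    return s_actual / s_odo

def apply_scaling(k_r, k_l, delta_a):
    """Correct both encoder gains so their average matches the true one."""
    return delta_a * k_r, delta_a * k_l

# Illustrative run: odometry reports 3.000 m, the tape measure says 3.018 m.
delta_a = scaling_error(3.018, 3.000)
k_r, k_l = apply_scaling(21.7e-6, 21.6e-6, delta_a)
```

Repeating the run with the corrected gains (point 2 above) shrinks the residual error further, since the robot then drives closer to a true straight line.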
This experiment can be implemented on the robot in order to automate the calibration process, as will be shown in the following. Let $s(k)$ be the accumulated displacement of the robot's center. Now, model the propagation of $s(k)$ and $\delta_a(k)$ as follows
$$
\begin{aligned}
s(k+1) &= s(k) + \delta_a(k) \, \frac{k_r \Delta\phi_r(k) + k_l \Delta\phi_l(k)}{2} \\
\delta_a(k+1) &= \delta_a(k),
\end{aligned}
\qquad (4.11)
$$
with $s(0) = 0$ and $\delta_a(0) = 1$. It is advantageous to use $s(k)$ as opposed to the three-state odometry model because here we have no dependency on the initial orientation or a given reference system.
Now, let a process noise $q \in \mathbb{R}^3$ with covariance $Q$ be added to the encoder readings and $\delta_a(k)$. The linearized system matrix $A_s(k)$ and process noise distribution matrix $G_s(k)$ then follow as
$$
A_s(k) = \begin{bmatrix} 1 & \dfrac{k_r \Delta\phi_r(k) + k_l \Delta\phi_l(k)}{2} \\ 0 & 1 \end{bmatrix},
\qquad (4.12)
$$
$$
G_s(k) = \begin{bmatrix} \dfrac{\delta_a(k) k_r}{2} & \dfrac{\delta_a(k) k_l}{2} & 0 \\ 0 & 0 & 1 \end{bmatrix}.
\qquad (4.13)
$$


Given a measurement $y(k) \in \mathbb{R}^2$ of the absolute position (the absolute orientation is not needed), define a new measurement $z(k) \in \mathbb{R}$ as
$$
z(k) = \| y(k) - y_0 \|_2 = s(k) + e(k), \qquad (4.14)
$$
where $\|\cdot\|_2$ denotes the Euclidean norm and $e(k)$ is the uncertainty on the measurement. The quantity $y_0$ is the robot's absolute initial position, either taken as $y(0)$ or as a mean value of several measurements, possibly from more than one sensor. Assume $e(k)$ is an uncorrelated white noise process, $e(k) \in N(0, r)$, where the scalar variance $r$ follows from the covariance $\mathrm{diag}(r_1, r_2)$ of the absolute position measurement.

This filter will estimate $\delta_a(k)$ as the robot moves along a straight line and receives correcting measurements of the displacement. It is vital that the robot travels as straight as possible.
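A minimal sketch of the resulting two-state filter is given below. The 2x2 covariance algebra is written out by hand, the measurement is the driven distance as in (4.14), and all numerical values (gains, noise levels, a 2% scaling error) are illustrative, not the settings used in the thesis.

```python
def predict(x, P, dphi_r, dphi_l, k_r, k_l, q_enc, q_delta):
    """Time update of state x = [s, delta_a] with A_s and G_s as in (4.12)-(4.13)."""
    d = (k_r * dphi_r + k_l * dphi_l) / 2.0      # nominal distance increment
    s, da = x
    x = [s + da * d, da]
    # P <- A P A^T + G Q G^T with A = [[1, d], [0, 1]]
    p00 = P[0][0] + d * (P[1][0] + P[0][1]) + d * d * P[1][1]
    p01 = P[0][1] + d * P[1][1]
    p10 = P[1][0] + d * P[1][1]
    p11 = P[1][1]
    p00 += ((da * k_r / 2.0) ** 2 + (da * k_l / 2.0) ** 2) * q_enc
    p11 += q_delta
    return x, [[p00, p01], [p10, p11]]

def update(x, P, z, r):
    """Measurement update for z(k) = s(k) + e(k), i.e. H = [1, 0]."""
    S = P[0][0] + r
    K0, K1 = P[0][0] / S, P[1][0] / S
    innov = z - x[0]
    x = [x[0] + K0 * innov, x[1] + K1 * innov]
    P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
         [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
    return x, P

# Simulated straight-line run: the true gains are 2% larger than the nominal
# ones (true delta_a = 1.02); a noiseless distance measurement arrives every
# 25 samples.
k_r = k_l = 21.7e-6                              # nominal gains [m/pulse]
true_delta_a = 1.02
x, P = [0.0, 1.0], [[1e-6, 0.0], [0.0, 1e-2]]
s_true = 0.0
for k in range(1, 401):
    dphi = 50.0                                  # equal encoder increments
    s_true += true_delta_a * (k_r * dphi + k_l * dphi) / 2.0
    x, P = predict(x, P, dphi, dphi, k_r, k_l, q_enc=1e-4, q_delta=1e-8)
    if k % 25 == 0:
        x, P = update(x, P, s_true, r=1e-6)
```

Note that for given encoder increments the model (4.11) is linear in $(s, \delta_a)$, so this particular sketch is an ordinary Kalman filter; the extended form only becomes necessary once the full posture model enters.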

4.2.3 Determination of Systematic Odometry Errors


In this section we assume that the scaling error $\delta_a$ has been determined so that we can rely on the average value of $k_r$ and $k_l$ being correct. To simplify the notation we assume $\delta_a = 1$ or, equivalently, that the values of $k_r$ and $k_l$ have been corrected with $\delta_a$: $k_r = \delta_a k_{r,old}$, $k_l = \delta_a k_{l,old}$.

Now, the idea is to estimate the remaining uncertainty parameters simultaneously with an existing posture estimation system on the robot; see Bak et al. (1998) for a description of such a system. By augmenting the system state vector $z$ with the two uncertainty parameters $\delta_r$ and $\delta_b$ we get
$$
z_{aug} = \begin{bmatrix} x & y & \theta & \delta_r & \delta_b \end{bmatrix}^T \qquad (4.15)
$$
with the following equations replacing (4.4):
$$
\begin{aligned}
u_1(k) &= \frac{\delta_r k_r \Delta\phi_r(k) + (2 k_a - \delta_r k_r) \Delta\phi_l(k)}{2} \\
u_2(k) &= \frac{\delta_r k_r \Delta\phi_r(k) - (2 k_a - \delta_r k_r) \Delta\phi_l(k)}{2 \delta_b b}.
\end{aligned}
\qquad (4.16)
$$
The dynamics of the two new states are modelled as
$$
\begin{aligned}
\delta_r(k+1) &= \delta_r(k) \\
\delta_b(k+1) &= \delta_b(k).
\end{aligned}
\qquad (4.17)
$$


The process noise $q \in \mathbb{R}^4$ has covariance $Q$ and is added to the encoder readings ($\Delta\phi_r$ and $\Delta\phi_l$) and to the two extra states ($\delta_r$ and $\delta_b$).

The linearized augmented system consists of the system matrix $A_{aug}$ and the process noise distribution matrix $G_{aug}$:
$$
A_{aug}(k) = \begin{bmatrix} A & F \\ 0 & I \end{bmatrix}, \qquad (4.18)
$$
$$
G_{aug}(k) = \begin{bmatrix} G & 0 \\ 0 & I \end{bmatrix}, \qquad (4.19)
$$
with details given in Appendix A. The matrices $A$, $F$, and $G$ are the non-augmented linearized system, input, and noise distribution matrices, respectively, while $I$ denotes an identity matrix of appropriate dimensions.
In order to have convergence of the filter we use an absolute measurement $y(k) \in \mathbb{R}^3$ of the true state vector $z$. Model this in the following way
$$
y(k) = C z_{aug}(k) + e(k), \qquad (4.20)
$$
where $e(k) \in N(0, R)$ and
$$
C = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \end{bmatrix}. \qquad (4.21)
$$
The determination of $\delta_r$ and $\delta_b$ relies only on the knowledge of $k_a$ and the absolute measurements. The trajectory is free to choose.
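With $\delta_r = \delta_b = 1$ and $k_l = 2k_a - k_r$, the inputs (4.16) must reduce to (4.4); a minimal sketch with illustrative values:

```python
def augmented_inputs(dphi_r, dphi_l, delta_r, delta_b, k_r, k_a, b):
    """Inputs (4.16) with the average-gain constraint built in: the
    effective left gain is 2*k_a - delta_r*k_r."""
    right = delta_r * k_r * dphi_r
    left = (2.0 * k_a - delta_r * k_r) * dphi_l
    u1 = (right + left) / 2.0
    u2 = (right - left) / (2.0 * delta_b * b)
    return u1, u2

# Illustrative call with hypothetical correction factors:
k_r, k_l = 21.7e-6, 21.6e-6
k_a = (k_r + k_l) / 2.0
u1, u2 = augmented_inputs(60.0, 40.0, 1.02, 0.99, k_r, k_a, 0.54)
```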

4.2.4 Conceptual Justification of Approach


This section validates the described calibration procedure through simulations using an advanced Simulink model. Based on the simulations, suggestions for a useful test trajectory are made.

The employed Simulink model includes motor dynamics, encoder quantization, viscous and Coulomb friction, stiction, as well as image quantization. See Nørgaard et al. (1998) for further details.


As a measure of convergence we use the relative estimation error, denoted $\varepsilon(k)$ and defined as
$$
\varepsilon_a(k) = \frac{\delta_a(k) k_a - \bar{k}_a}{\bar{k}_a}, \quad
\varepsilon_r(k) = \frac{\delta_r(k) k_r - \bar{k}_r}{\bar{k}_r}, \quad
\varepsilon_b(k) = \frac{\delta_b(k) b - \bar{b}}{\bar{b}},
\qquad (4.22)
$$
for the scaling, the right wheel, and the wheel base correction factors, respectively.
For Monte Carlo experiments with a number of simulations with stochastically determined parameters we use the average of the absolute value of $\varepsilon(k)$, denoted $S(k)$ and defined as
$$
S(k) = E\{ |\varepsilon(k)| \}, \qquad (4.23)
$$
where $E\{\cdot\}$ is the expectation. To show convergence we use the average and standard deviation of $\varepsilon(k)$ over a number of samples. These quantities we denote $m(i)$ and $\sigma(i)$, respectively, and define them as
$$
m(i) = \frac{1}{i+1} \sum_{k=M-i}^{M} \varepsilon(k), \qquad
\sigma^2(i) = \frac{1}{i} \sum_{k=M-i}^{M} \bigl( \varepsilon(k) - m(i) \bigr)^2,
\qquad (4.24)
$$
where $M$ is the total number of samples in a simulation.

Consequently, $\varepsilon(k)$ and $m(i)$ should tend to zero and $S(k)$ should become small as $k$ becomes large².
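The convergence measures (4.23)-(4.24) amount to a few lines of code; a sketch (the variable names are mine):

```python
def tail_stats(eps, i):
    """Mean m(i) and standard deviation sigma(i) over the last i+1 samples
    of a relative-error sequence eps[0..M], following (4.24); requires i >= 1."""
    M = len(eps) - 1
    tail = eps[M - i:]                     # samples k = M-i, ..., M
    m = sum(tail) / (i + 1)
    var = sum((e - m) ** 2 for e in tail) / i
    return m, var ** 0.5

def S_of_k(runs, k):
    """Monte Carlo average S(k) = E{|eps(k)|} over a list of runs, as in (4.23)."""
    return sum(abs(run[k]) for run in runs) / len(runs)

# Hypothetical error trajectory [%] and its tail statistics:
eps = [0.5, 0.2, 0.05, 0.01, 0.005]
m_tail, sigma_tail = tail_stats(eps, 2)
```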

Scaling error

This experiment is straightforward. The trajectory is straight-line driving for about 3 m at a speed of 0.25 m/s with a measurement every second. The controllers are sampled with a sampling time of 40 ms. In cases where measurements are available more frequently, this of course speeds up the convergence rate. To justify the approach and the convergence we first show one realization of the experiment; subsequently we show 100 simulations made with constant filter settings but stochastic physical parameters: up to a two percent initial error on $b$, $k_r$, and $k_l$ is allowed.

Figure 4.2 shows one realization of the experiment with a two percent initial error. The scaling error is reduced by a factor of more than 450.
² Note, $S(k)$ should not tend to zero, for the very reason that if a random variable $x$ has expectation zero, then $|x|$ has an expectation different from zero. Say, $x \in N(0, \sigma^2) \Rightarrow E\{|x|\} = \sqrt{2/\pi}\,\sigma$, where $N$ is the Gaussian distribution.

Figure 4.2. One realization of the scaling correction factor estimation, showing the distance estimation error [mm] and the relative scaling estimation error $\varepsilon_a(k)$ [%] over time. Initially, a 2% relative estimation error exists. After the experiment, the error is 0.004%.

Figure 4.3 shows results from the 100 Monte Carlo experiments, where (a) gives the average trajectory of the estimation error and (b) shows initial and final values of $\varepsilon_a(k)$. The drift on the estimate of the distance is due to the robot not traveling in a straight line, caused by inaccurate encoder gains (we only estimate the average) and the controller dynamics. The second step of the procedure will correct this mismatch. For all simulations we have good convergence, as the average and standard deviation over the last 50 samples are
$$
m_a(50) = 0.0073\%, \qquad \sigma_a(50) = 0.0002\%. \qquad (4.25)
$$

Observability and trajectory recommendation


Before turning to the systematic odometry errors we look at the observability of the two remaining parameters. The observability of the augmented filter depends on the chosen trajectory. Determination of observability for time-varying systems is somewhat involved and will not be pursued further here. However, given certain conditions, such as a slow constant speed and turning rate, the linearized system matrix is
slowly time-varying and can be assumed constant for observability purposes. This way the observability matrix can be calculated. For most trajectories and parameter configurations, the observability matrix will numerically have the same rank as the system matrix (which is the criterion for observability) even though the observability may be poor. Consequently, the ratio between the largest and smallest singular value will be used to indicate the degree of observability for different trajectories.

Figure 4.3. Scaling correction factor estimation with 100 Monte Carlo simulations using a Simulink model and up to a 2 percent initial error on the physical parameters of the odometry model. (a) The top plot shows the average distance estimation error while the bottom plot shows the average relative scaling estimation error $S_a(k)$. (b) Initial (cross) and final (square) relative scaling estimation errors for the 100 simulations. On average the scaling correction factor is improved by a factor of 54.
Here three fundamental test runs are examined: (1) straight-line driving, (2) turning on the spot, and (3) driving with a constant turning and heading speed. The
averaged condition numbers for the simulations are given in Table 4.1.
This shows that turns are good for both correction factors, while straight-line driving practically only provides information about the encoder-gain correction factor. Some mixture of the two seems rational.

Trajectory                   | Cond. no     | Observability
-----------------------------|--------------|---------------------------------------------
Straight-line                | ~ 10^6       | The more correctly the encoder gains are known, the more perfect the straight-line motion is and the less observable $\delta_b$ becomes. Observability for $\delta_r$ is good.
Turning on the spot          | ~ 10         | Observability is best for $\delta_b$ but also good for $\delta_r$.
Constant turning and heading | ~ 10 to 10^6 | The relation between turning and heading velocity determines the condition number: fast turning and slow heading speed gives a small condition number.

Table 4.1. Condition numbers for the observability matrix for different trajectories.
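The analytic Jacobians of Appendix A are not reproduced here, so the sketch below forms the linearized system matrix numerically, by central differences of the augmented transition map built from (4.3) and (4.16), stacks the observability matrix $[C; CA; \ldots; CA^4]$, and computes its rank by Gaussian elimination. All parameter values are illustrative. In this idealized noise-free setting a perfectly straight line leaves $\delta_b$ unobservable (rank 4), while turning on the spot makes all five states observable (rank 5), in line with Table 4.1.

```python
import math

def f_aug(z, dphi_r, dphi_l, k_r, k_a, b):
    """Augmented transition map: (4.3) with inputs (4.16)."""
    x, y, th, dr, db = z
    right = dr * k_r * dphi_r
    left = (2.0 * k_a - dr * k_r) * dphi_l
    u1 = (right + left) / 2.0
    u2 = (right - left) / (2.0 * db * b)
    return [x + u1 * math.cos(th + u2),
            y + u1 * math.sin(th + u2),
            th + 2.0 * u2, dr, db]

def jacobian(f, z, h=1e-7):
    """Central-difference Jacobian J[r][c] = d f_r / d z_c."""
    n = len(z)
    cols = []
    for i in range(n):
        zp, zm = list(z), list(z)
        zp[i] += h
        zm[i] -= h
        fp, fm = f(zp), f(zm)
        cols.append([(fp[r] - fm[r]) / (2.0 * h) for r in range(n)])
    return [[cols[c][r] for c in range(n)] for r in range(n)]

def rank(rows, tol=1e-6):
    """Numerical rank by Gaussian elimination with partial pivoting."""
    M = [list(r) for r in rows]
    nrows, ncols, r = len(M), len(M[0]), 0
    for c in range(ncols):
        if r == nrows:
            break
        pivot = max(range(r, nrows), key=lambda i: abs(M[i][c]))
        if abs(M[pivot][c]) < tol:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, nrows):
            fac = M[i][c] / M[r][c]
            for j in range(c, ncols):
                M[i][j] -= fac * M[r][j]
        r += 1
    return r

def observability_rank(dphi_r, dphi_l):
    k_r, k_a, b = 21.7e-6, 21.7e-6, 0.54          # illustrative values
    z = [0.0, 0.0, 0.0, 1.0, 1.0]
    C = [[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0]]
    A = jacobian(lambda s: f_aug(s, dphi_r, dphi_l, k_r, k_a, b), z)
    rows, block = [], C
    for _ in range(5):                            # stack C, CA, ..., CA^4
        rows.extend(block)
        block = [[sum(row[m] * A[m][c] for m in range(5)) for c in range(5)]
                 for row in block]
    return rank(rows)
```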

Systematic odometry errors


The trajectory chosen for the following experiments is shown in Figure 4.4 and is traveled in both directions. For real experiments this might not be a suitable trajectory, as sensors may require a certain orientation; for example, a camera should be pointed in the direction of a guide mark in order to obtain measurements from images of the mark. Other trajectories will also work.

Figure 4.4. Trajectory (approximately 1 m by 1 m) for the mobile robot when calibrating systematic errors. The forward velocity is kept under 0.2 m/s and the total time to complete the course in both directions is about 50 seconds. An absolute measurement is provided every second.


In Figure 4.5, a simulation with the filter for estimating the systematic errors is shown. In order to verify convergence, the absolute measurements are not corrupted with noise, although the measurement covariance matrix in the filter is non-zero. Due to these unrealistically perfect measurements the convergence is fast and exact. Also, as the systematic odometry errors are estimated exactly, the posture estimation error is zero, even between measurements.

Figure 4.5. One realization of the systematic odometry factor estimation with noiseless measurements. The relative odometry correction estimation errors $\varepsilon_r(k)$ and $\varepsilon_b(k)$ [%] are shown over time.

Next, a more realistic experiment is conducted. Realistic noise is added to the absolute measurements, and the results are based on 100 Monte Carlo simulations with stochastically determined initial estimates of the posture and the odometry correction factors $\delta_r$ and $\delta_b$. Up to a two percent error is allowed. Figure 4.6 shows the average of the relative estimation errors for the odometry correction factors. The convergence is good, as the average and standard deviation over the last 100 samples for $\delta_r$ and $\delta_b$, respectively, are:
$$
m_r(100) = 0.0003\%, \qquad \sigma_r(100) = 0.0007\% \qquad (4.26)
$$
$$
m_b(100) = 0.0011\%, \qquad \sigma_b(100) = 0.0005\%. \qquad (4.27)
$$
On average the correction factors have improved by a factor of 75 and 81 for $\delta_r$ and $\delta_b$, respectively.

Figure 4.6. Odometry correction factor estimation with 100 Monte Carlo simulations using a Simulink model and up to a 2 percent initial error on the physical parameters of the odometry model. The plot shows the average relative estimation errors $S_r(k)$ and $S_b(k)$.

4.2.5 Experimental Results


In this section real-world experiments are carried out on the department's mobile robot, described and illustrated in Section 4.1.

The mobile robot's drive system consists of two DC motors mounted on the front wheels through a gearing. They are equipped with encoders with a resolution of 800 pulses/turn. The control and coordination of the robot are handled by an on-board MVME-162A main board with a MC 68040 CPU running the OS-9 operating system. Two ways of acquiring absolute measurements exist:
1. On-board guide mark vision system. A standard commercial 512 × 512 pixel grey-level CCD camera is mounted on top of the robot with an on-board CPU for image processing. Based on images of artificial guide marks, absolute postures of the robot can be calculated.
2. External tracking vision system. The robot is equipped with two light diodes, one on each side, which can be tracked with a surveillance camera placed in the ceiling of the laboratory. The posture of the robot can be monitored

with suitable image processing and knowledge of the camera's translation and rotation relative to some global reference coordinate system. This can be achieved using calibration tiles with known geometry. The recording of a surveillance image is synchronized with the robot, which allows for conformity between the absolute measurements and the local odometry data. The uncertainties of these measurements are estimated to be under 10 mm in position and 0.15° in orientation. Due to the limited field of view of the camera, this tracking system only allows for tracking of the robot over an area of 1.75 m × 1.75 m.
See for instance Andersen and Ravn (1990) for a discussion of real-time vision
systems.
Only the external vision system is used in the following, as it is perfectly suited for tracking the robot through rotations. Lacking knowledge of the exact parameters, we can only verify convergence, not whether the converged parameters actually are correct, although we shall make a comparison to values determined from a calibration experiment using Borenstein and Feng's method.

Figure 4.7 shows the convergence and its robustness: three experiments with different initial settings have been carried out and all converge to equal values. The convergence rate for $k_r$ is fast (about 10-15 measurements) while $b$ converges 8-10 times more slowly.

Figure 4.7. Calibration of $k_r$ [µm/pulse] (top) and $b$ [mm] (bottom) with three different initializations. Note the different time scales for the top and bottom plots.


Comparison to Existing Calibration Methods


As mentioned in the introduction, at least two systematic calibration methods exist. Table 4.2 compares the estimated values obtained from the experiment shown in Figure 4.7 with a previous calibration where Borenstein and Feng's method was used. The differences are less than 0.3% for all three parameters. Some time (about 6 months) passed between the two calibrations, which may explain the small differences, as the wheel configuration and the surface may be slightly at variance. No attempt has been made to quantitatively decide which parameter set achieves the best performance on the mobile robot, as the purpose here is to propose the filters and establish confidence in the convergence and robustness of the calibration.

Method                          | $k_r$ [µm/pulse] | $k_l$ [µm/pulse] | $b$ [mm]
--------------------------------|------------------|------------------|---------
Borenstein and Feng's method    | 21.66            | 21.61            | 538.32
Estimated with on-board filters | 21.69            | 21.58            | 539.67

Table 4.2. Comparison of two different calibrations.

In Larsen et al. (1998) a method was proposed that in one step tries to estimate the wheel base and the two encoder gains. Real-world experiments equivalent to those described above revealed difficulties in estimating all three parameters at the same time, as the wheel base was almost 10% off after more than 200 vision measurements. By splitting the procedure into two steps, we only need 50-100 measurements, and the robustness of the convergence is improved significantly.

4.2.6 Conclusions
A two-step procedure for calibrating systematic odometry errors on mobile robots using a Kalman filter has been described. The first step estimates the average encoder gain for the two driving shafts, while the second step estimates the wheel base and the right wheel's encoder gain. Since the average gain is maintained, the left wheel's encoder gain is determined as well.

By splitting the procedure into two steps, observability and hence convergence rate are much improved (see Larsen et al. (1998) for a comparable study). Monte Carlo simulations have justified the approach and experiments have indicated good robustness of convergence for the proposed filters. Furthermore, the accuracy


of the estimated values is of the same order as for calibrations using existing manual procedures.

The two filters are easy to implement and the method is well suited for automating the calibration task, allowing for more autonomous vehicles. It is especially applicable to vehicles with frequent load or wheel configuration changes.

4.3 Receding Horizon Approach to Path Following


In manoeuvering mobile robots, path following is a useful motion control approach for getting the robot from one point to another along a specified path. This section considers a path consisting of straight lines intersecting at given angles. In an indoor environment, where rooms often are rectangular, such paths are frequently encountered, possibly with perpendicular intersections. Figure 4.8 shows an example of a building's floor plan and a possible path layout. Furthermore, the specification or actual implementation (by means of, for instance, tape lines or floor wiring) of such a straight path is easy. An appropriate path following algorithm should therefore master the ability to control the robot through such intersections.

Figure 4.8. Floor plan for an illustrative building.

Chapter 4 / Case Study: a Mobile Robot

Sharp turns raise a problem. The vehicle velocities, heading and angular, must be constrained such that the turn is appropriately restrained and smooth. A large heading velocity together with a large angular velocity will jeopardize the stability and safety of the robot, or cause saturation in the motors, which again will cause overshooting and long settling times. The velocity constraints can either be self-imposed, due to desired vehicle behavior and safety concerns, or physical, due to actual limitations caused by, for instance, currents and voltages in the motors.
To avoid excessive overshooting and to have time to decelerate when turning, the presented controller is based on a strategy that forecasts the intersection using a receding horizon approach: the controller predicts the posture of the robot and, together with knowledge of an upcoming intersection, adjusts the control signals accordingly. Predictive path planning was discussed in Normey-Rico et al. (1999) and Ferruz and Ollero (1998), where smooth paths were considered.
The general path following problem is characterized by the forward velocity not being part of the control problem, as opposed to the path tracking problem where typically a virtual reference cart is tracked (de Wit et al., 1996; Koh and Cho, 1999; Samson and Ait-Abderrahim, 1991) and both the forward and angular velocities are controlled. Hence, path following has an extra degree of freedom (but only controls two degrees) which allows handling of constraints by scaling the forward velocity. This has been exploited in Bemporad et al. (1997) for a wall-following mobile robot. In Koh and Cho (1999) the constrained path tracking problem was discussed.
At first, the problem of following a straight line without turns is considered. A linear controller is presented that handles constraints by means of a simple velocity scaling. Next, a nonlinear receding horizon approach to the general path following problem is considered. The resulting controller cannot be solved explicitly and has to rely on an on-line minimization of a criterion function. This is time-consuming, even without constraints. As a consequence, a simplified, faster, linear but approximate approach is presented. By using the velocity scaling, the constraints are handled in a simple way. The section concludes with experimental results.

A more condensed version of the research in this section is to appear in Bak et al. (2001).

4.3.1 Path Following Problem Formulation


Given a path P in the xy-plane and the mobile robot's forward velocity v(t), the path following problem consists in finding a feedback control law ω(t) such that the distance to the path and the orientation error tend to zero. The traveled distance along the path is thus not itself of interest to this problem.

4.3 / Receding Horizon Approach to Path Following


The path following problem is illustrated in Figure 4.9 where P is the orthogonal projection of the robot point R onto the path. The signed distance between P and R is denoted d. An intersection is placed at C and the signed distance from C to P along the path is denoted s. The orientation error is defined as θ̃ = θ − θ_r, where θ_r is the orientation reference. The two path sections have orientations θ1 and θ2, respectively, with |θ2 − θ1| < π assumed. Also shown are two bisection lines defined by the angles α = (θ1 + θ2)/2 and α + π/2. These lines will later be used to determine at what point the reference should change.

Figure 4.9. The path following problem with a path consisting of two straight intersected lines.
4.3.2 Constraints
Constraints exist at different levels of a control system for a mobile robot. At the motor level, voltages and currents are magnitude limited, and at the trajectory level, the same goes for velocities and accelerations. Since the path following algorithm is a velocity trajectory generator that provides references to the underlying motor controllers, only constraints on the velocities of the robot are considered. This is partly justified by the fact that, typically, the motors of a mobile robot are capable of delivering higher velocities than desirable during normal operation. Hence, velocity constraints are often imposed, and magnitude saturation in the actual actuators (the motors) is generally not of concern, except when fast high-performance trajectory generators are designed. Furthermore, only magnitude saturations are considered.
Let u = (v, ω)^T and u_w = (v_r, v_l)^T, where v_r and v_l denote the right and left wheel velocities, respectively. Discarding wheel slippage and uneven floors, the heading and angular velocities relate to the right and left wheel velocities in the following way

    u = [ v ; ω ] = [ 1/2  1/2 ; 1/b  −1/b ] [ v_r ; v_l ] = F_w u_w,    (4.28)
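As a small sketch of the map (4.28) and its inverse (function names and the sign convention — positive ω turning left — are assumptions of this reconstruction, not taken from the thesis):

```python
def wheels_to_body(vr, vl, b):
    """Forward map u = Fw u_w of (4.28): v = (vr + vl)/2, w = (vr - vl)/b."""
    return 0.5 * (vr + vl), (vr - vl) / b

def body_to_wheels(v, w, b):
    """Inverse map: vr = v + (b/2) w, vl = v - (b/2) w, where b is the
    wheel base. Assumes positive w corresponds to a left turn."""
    return v + 0.5 * b * w, v - 0.5 * b * w
```

Composing the two maps recovers the body velocities, which is a quick consistency check of the matrix in (4.28).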

where b is the length of the wheel base of the robot. Let v_max^w and v_min^w be the maximum and minimum allowable velocities for the left and right wheels (we assume equal constraints on the two wheels), i.e.

    v_min^w ≤ v_r ≤ v_max^w,    v_min^w ≤ v_l ≤ v_max^w.    (4.29)

Besides the wheel constraints it is convenient to be able to put a limit directly on the forward and angular velocities. For example, for v = 0, the constraints on the wheels alone may allow an undesirable maximum angular velocity. The constraints are

    v_min ≤ v ≤ v_max,    ω_min ≤ ω ≤ ω_max.    (4.30)

Figure 4.10 illustrates how the constraints relate. It is assumed that zero belongs to the set of valid velocities. In combined compact form the velocity constraints are

    [ F_w^{−1} ; −F_w^{−1} ; 0 1 ; 0 −1 ; 1 0 ; −1 0 ] u ≤ [ 1_2(v_max^w) ; −1_2(v_min^w) ; ω_max ; −ω_min ; v_max ; −v_min ],    (4.31)

or


Figure 4.10. The velocity constraints.

    P u ≤ q,    (4.32)

where 1_m(x) = (x, x, ..., x)^T is a vector of length m.
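A minimal sketch of the stacked constraint set (4.31)-(4.32), assuming numpy and the wheel-base/sign conventions reconstructed in (4.28) (the function name and argument order are illustrative):

```python
import numpy as np

def constraint_polytope(b, vw_max, vw_min, v_max, v_min, w_max, w_min):
    """Stacked velocity constraints P u <= q of (4.31)-(4.32) for u = (v, w).
    Rows 1-4 are the wheel limits mapped through Fw^{-1}, rows 5-8 the direct
    forward/angular limits."""
    Fw_inv = np.array([[1.0,  b / 2.0],     # v_r = v + (b/2) w
                       [1.0, -b / 2.0]])    # v_l = v - (b/2) w
    P = np.vstack([Fw_inv, -Fw_inv,
                   [[0.0, 1.0]], [[0.0, -1.0]],
                   [[1.0, 0.0]], [[-1.0, 0.0]]])
    q = np.array([vw_max, vw_max, -vw_min, -vw_min,
                  w_max, -w_min, v_max, -v_min])
    return P, q
```

Checking `P @ u <= q` elementwise then tests all eight constraints of (4.31) at once.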

Some sensors may require a certain orientation of the robot towards the medium. If, for example, an ultrasonic sensor is used in a wall-following robot, the difference in orientation between the wall and the surface of the receiver should be kept small for the sensor to provide reliable measurements. Hence, a restriction on the orientation error of the robot may, depending on the sensor package, be needed:

    |θ̃| ≤ θ̃_max.    (4.33)

However, this will not be pursued any further at this point.

4.3.3 Linear Controller with Velocity Scaling


In this section the path is assumed perfectly straight and infinitely long, that is,
the turns are not under consideration. We consider a velocity scaling approach to
a standard linear controller such that the constraints are satisfied. In Section 4.3.5
the method will be applied to a receding horizon controller.
The velocity scaling was implicitly introduced in Dickmanns and Zapp (1987) and
further explored in the control context in Sampei et al. (1991) and Samson (1992).


Linear Controller

When the path is straight, a nonlinear parameterization of the path following problem is

    \dot{d} = v sin(θ̃)
    \dot{θ̃} = ω,    (4.34)

where d and θ̃ are the distance and orientation errors, respectively. In the neighborhood of the origin (d = 0, θ̃ = 0), a linearization of (4.34) gives

    \dot{d} = v θ̃
    \dot{θ̃} = ω.    (4.35)

Assuming that v is different from zero (but not necessarily constant), the system (4.35) is controllable and stabilizable using a linear state feedback controller of the form

    ω = −l1 v d − l2 |v| θ̃,    (4.36)

with l1 > 0 and l2 > 0. For a constant v, this controller reverts to a classical linear time-invariant state feedback. The velocity v is included in the controller gains such that the closed-loop xy-trajectory response is independent of the velocity of the vehicle. As will be demonstrated, the gains l1 and l2 are chosen with respect to the distance response instead of the corresponding time response. For a given v, consider the closed-loop equation for the output d:

    \ddot{d} + l2 |v| \dot{d} + l1 v² d = 0,    (4.37)

where we identify the undamped natural frequency ω_n and the damping ratio ζ as

    ω_n = |v| √l1,    ζ = l2 / (2√l1).    (4.38)

For a second-order linear system, the transient peak time (time from reference change to the maximum value) t_peak is a function of the natural frequency ω_n and the damping ratio ζ:

    t_peak = (1/ω_n) exp( ζ cos⁻¹(ζ) / √(1 − ζ²) ),    0 ≤ ζ < 1.    (4.39)

Define the peak distance as d_peak = |v| t_peak. Thus, (4.38) and (4.39) suggest selecting the gains l1 and l2 as

    l1 = ( exp( ζ cos⁻¹(ζ) / √(1 − ζ²) ) / d_peak )²
    l2 = 2 ζ √l1.    (4.40)
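The gain selection (4.40) can be sketched as follows, assuming the reconstruction of (4.39)-(4.40) above (the function name is illustrative):

```python
import math

def distance_gains(d_peak, zeta):
    """Controller gains l1, l2 of (4.40) from the desired peak distance
    d_peak and damping ratio zeta, 0 <= zeta < 1."""
    sqrt_l1 = math.exp(zeta * math.acos(zeta) / math.sqrt(1.0 - zeta**2)) / d_peak
    return sqrt_l1**2, 2.0 * zeta * sqrt_l1
```

With these gains, (4.38) gives ω_n = |v|√l1 and ζ = l2/(2√l1), so the peak distance |v| t_peak equals d_peak independently of the velocity v.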
Nonlinear Extension

de Wit et al. (1996) suggest the following extension to the linear controller (4.36) that globally stabilizes the nonlinear model (4.34):

    ω = { −l1 v d,                                  θ̃ = 0
        { −l1 v (sin(2θ̃)/(2θ̃)) d − l2 |v| θ̃,      otherwise.    (4.41)

Note that the linear controller (4.36) and the nonlinear controller (4.41) behave similarly around (d = 0, θ̃ = 0).
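A sketch of the two control laws (the sin(2θ̃)/(2θ̃) factor follows the reconstruction of (4.41) above and is an assumption; function names are illustrative):

```python
import math

def omega_linear(d, theta_err, v, l1, l2):
    """Linear state feedback (4.36): omega = -l1*v*d - l2*|v|*theta_err."""
    return -l1 * v * d - l2 * abs(v) * theta_err

def omega_global(d, theta_err, v, l1, l2):
    """Nonlinear extension (4.41) as reconstructed here; coincides with the
    linear law to first order around (d, theta_err) = (0, 0)."""
    if theta_err == 0.0:
        return -l1 * v * d
    return (-l1 * v * (math.sin(2.0 * theta_err) / (2.0 * theta_err)) * d
            - l2 * abs(v) * theta_err)
```

For small θ̃ the scheduling factor tends to 1, so the two laws agree near the origin, as the text states.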
Velocity Scaling

The controller (4.36) only determines the angular velocity ω, while the forward velocity v is left to the operator to specify. This extra degree of freedom, and the fact that for the controller (4.36) ω → 0 for v → 0, allow us to handle the velocity constraints by scaling the forward velocity such that v = γ v_des, where v_des is the desired velocity of the vehicle and γ ∈ [0, 1] is a scaling factor. This way the constrained xy-trajectory will remain the same as the unconstrained one; only the traverse will be slower.

For a given distance error d and orientation error θ̃, the scaled control law (4.36) has the form

    ω = k(d, θ̃) γ v_des,    (4.42)

with

    k(d, θ̃) = −( l1 d + l2 θ̃ sign(v_des) ).    (4.43)

We need to determine the scaling factor γ such that the following inequality is satisfied,

    γ P [ 1 ; k(d, θ̃) ] v_des = γ P′ ≤ q.    (4.44)

Since P′ is a vector and γ ∈ [0, 1], the inequality is satisfied by setting

    γ = min{ 1, min_i (q)_i / (P′)_i },    i ∈ { j | (P′)_j > 0, j = 1, ..., 8 }.    (4.45)

The selection of γ can be interpreted as a time-varying velocity scaling, and by on-line determination it is guaranteed that the constraints on the velocities are not violated.
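The selection (4.45) can be sketched as follows, assuming numpy (the function name is illustrative; rows with (P′)_j ≤ 0 cannot be violated by shrinking γ, since zero is assumed to be a valid velocity):

```python
import numpy as np

def scaling_factor(P_prime, q):
    """Largest scaling factor gamma in [0, 1] with gamma * P' <= q, cf. (4.45).
    Rows with (P')_j <= 0 are ignored because shrinking gamma cannot
    violate them."""
    P_prime = np.asarray(P_prime, dtype=float)
    q = np.asarray(q, dtype=float)
    pos = P_prime > 0
    if not pos.any():
        return 1.0
    return float(min(1.0, np.min(q[pos] / P_prime[pos])))
```

For the scaled controller (4.42), P′ = P (1, k(d, θ̃))^T v_des; the resulting v = γ v_des and ω = k γ v_des then satisfy P u ≤ q.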

4.3.4 Nonlinear Receding Horizon Approach


This section considers the problem of manoeuvring a nonholonomic mobile robot through a turn consisting of two straight-line sections by means of a sensor-based nonlinear closed-loop receding horizon control strategy.

The linear feedback control solution to the path following problem shown in the previous Section 4.3.3 is not suited for path intersections. As a matter of fact, most path following solutions are based on the assumption of a bounded and differentiable path curvature, which is not the case for an intersection.

A good solution to the stated path following problem should have the following properties:

- It seems natural to anticipate the corner and to embark on the turn before reaching the actual turning point. This will smooth the turn.
- The forward velocity should be decreased (possibly to zero) as the vehicle rotates around the corner such that the vehicle may follow the path with arbitrary accuracy.

The anticipation of the corner suggests a receding horizon approach where the control signals are based on predictions of the robot's posture, while the decrease in velocity suggests the use of scaling.


Figure 4.11. Definition of path following problem for nonlinear approach.

Figure 4.11 shows a local coordinate system (x, y) for the considered intersection. With respect to this coordinate system the following model (based on the simplified odometry model in (4.2)) describes the motion of the vehicle,

    x(k+1) = x(k) + T v(k) cos(θ(k) + 0.5 T ω(k))
    y(k+1) = y(k) + T v(k) sin(θ(k) + 0.5 T ω(k))    (4.46)
    θ(k+1) = θ(k) + T ω(k),

where the vehicle orientation θ is measured with respect to the x-axis. The intersection is placed at (x, y) = (0, 0) and without loss of generality a left turn is considered. The intersection angle is set to 90°. Also shown in the figure are the bisection lines defined by the angle α = θ_r/2, which defines the switch in reference.
The predictive receding horizon controller is based on a minimization of the criterion

    J(k) = Σ_{n=1}^{N2} [ d̂(k+n)² + μ \hat{θ̃}(k+n)² + λ ω(k+n−1)² ],    (4.47)

where d̂ and \hat{θ̃} are the predicted distance and orientation errors, μ and λ are weights, and N2 is the prediction horizon. We are looking for a solution to the path following problem where the distance traveled along the path itself is not of interest. Due to the intersection, two different error sets are needed depending on where the vehicle is. The first error set is used along the incoming path section while the second error set is used when the vehicle travels along the outgoing section. The switch from one error set to another is defined in Figure 4.11 by the two bisection lines; two symbols in the figure indicate in which area in the neighborhood of the intersection each error set is used. The error sets are defined as follows:
    ( d̂(x̂, ŷ), \hat{θ̃}(x̂, ŷ) ) = { (ŷ, θ̂),           ŷ < x̂ or ŷ < −x̂
                                   { (−x̂, θ̂ − π/2),    otherwise.    (4.48)

The prediction of the posture is easily calculated by extrapolating the nonlinear model in (4.46). For example

    θ̂(k+2) = θ̂(k+1) + T ω(k+1) = θ(k) + T ω(k) + T ω(k+1)    (4.49)

    x̂(k+2) = x̂(k+1) + T v(k+1) cos(θ̂(k+1) + 0.5 T ω(k+1))
           = x(k) + T v(k) cos(θ(k) + 0.5 T ω(k)) + T v(k+1) cos(θ(k) + T ω(k) + 0.5 T ω(k+1)).    (4.50)

Thus, the n-step predictions are given by

    θ̂(k+n) = θ(k) + T Σ_{i=k}^{k+n−1} ω(i)
    x̂(k+n) = x(k) + T Σ_{i=k}^{k+n−1} v(i) cos( θ(k) + 0.5 T ω(i) + T Σ_{j=k}^{i−1} ω(j) )    (4.51)
    ŷ(k+n) = y(k) + T Σ_{i=k}^{k+n−1} v(i) sin( θ(k) + 0.5 T ω(i) + T Σ_{j=k}^{i−1} ω(j) ).
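The n-step posture prediction can be sketched by iterating the one-step model (4.46), which is equivalent to the summed form (4.51) (function name and argument layout are illustrative):

```python
import math

def predict_posture(x, y, theta, v_seq, w_seq, T):
    """n-step posture prediction: iterate the discrete model (4.46).
    v_seq and w_seq hold the planned forward and angular velocities
    over the prediction horizon."""
    for v, w in zip(v_seq, w_seq):
        x += T * v * math.cos(theta + 0.5 * T * w)
        y += T * v * math.sin(theta + 0.5 * T * w)
        theta += T * w
    return x, y, theta
```

Iterating the model avoids forming the nested sums of (4.51) explicitly while producing the same predictions.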

Since the predictions are nonlinear and the errors are position-dependent, no explicit minimization of the criterion (4.47) exists and the controller has to rely on on-line minimization. This nonlinear approach is not of practical interest because the mandatory minimization at run-time, depending on the size of the prediction horizon, is so time-consuming that a real-time implementation is out of the question.


4.3.5 Linear Receding Horizon Approach

The nonlinear solution to the intersection problem presented in the previous Section 4.3.4 has some run-time problems due to the on-line minimization of the criterion (4.47). The following attempts to solve the same problem by means of a simple and fast linear predictive strategy. The presentation uses the definitions in Figure 4.9.

The following approach is approximate and based on the linearized model (4.35)

    \dot{d} = v θ̃
    \dot{θ̃} = ω,    (4.52)

which was used for the linear state feedback controller in Section 4.3.3. A discretized version of (4.52) with sampling period T is found by integration:

    d(k+1) = d(k) + T v(k) ( θ(k) − θ_r(k) + (T/2) ω(k) )
    θ(k+1) = θ(k) + T ω(k),    (4.53)

where we have assumed \dot{θ}_r = 0.

Since we eventually want to apply the velocity scaling to the receding horizon approach, we introduce a new control signal φ defined by v φ = ω, where for a constant φ we have ω → 0 for v → 0.
Define the state vector z(k) = (d(k), θ(k))^T and the reference vector r(k) = (0, θ_r(k))^T, and rewrite (4.53) to

    z(k+1) = [ 1  T v ; 0  1 ] z(k) + [ T² v²/2 ; T v ] φ(k) + [ 0  −T v ; 0  0 ] r(k),    (4.54)

or

    z(k+1) = A z(k) + B_φ φ(k) + B_r r(k).    (4.55)

The predictive receding horizon controller is based on a minimization of the criterion

    J(k) = Σ_{n=0}^{N2} (ẑ(k+n) − r(k+n))^T Q (ẑ(k+n) − r(k+n)) + ρ φ(k+n)²,    (4.56)

subject to the inequality constraint

    P [ v(n) ; v(n) φ(n) ] ≤ q,    n = 0, ..., N2,    (4.57)

where ẑ is the predicted output, Q is a weight matrix, ρ is a scalar weight, and N2 is the prediction horizon. P and q are defined in (4.32).

An n-step predictor ẑ(k+n|k) is easily found by iterating (4.54). Stacking the predictions ẑ(k+n|k), n = 0, ..., N2, in the vector Ẑ yields

    Ẑ(k) = [ ẑ(k|k) ; ... ; ẑ(k+N2|k) ] = F z(k) + G_φ Φ(k) + G_r R(k),    (4.58)

with

    Φ(k) = [ φ(k), ..., φ(k+N2) ]^T,    R(k) = [ r(k), ..., r(k+N2) ]^T,

and

    F = [ I  A  ...  A^{N2} ]^T,

    G_i = [ 0            0            ...  0    0 ;
            B_i          0            ...  0    0 ;
            A B_i        B_i          ...  0    0 ;
            ...          ...          ...  ...  ... ;
            A^{N2−1} B_i A^{N2−2} B_i ...  B_i  0 ],

where the index i should be substituted with either φ or r.


The N2-step predictor (4.58) simplifies the criterion (4.56) to

    J(k) = ( Ẑ(k) − R(k) )^T I_Q ( Ẑ(k) − R(k) ) + ρ Φ(k)^T Φ(k),    (4.59)

where I_Q is a block diagonal matrix of appropriate dimension with instances of Q in the diagonal. The unconstrained controller is found by minimizing (4.59) with respect to Φ:

    Φ(k) = −L_z z(k) − L_r R(k),    (4.60)

with

    L_z = ( ρ I + G_φ^T I_Q G_φ )^{−1} G_φ^T I_Q F
    L_r = ( ρ I + G_φ^T I_Q G_φ )^{−1} G_φ^T I_Q ( G_r − I ).    (4.61)
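The predictor stacking (4.58) and the unconstrained gains (4.60)-(4.61) can be sketched with numpy as follows (function names are illustrative; the first block row of each G_i is zero because ẑ(k|k) = z(k) does not depend on the inputs):

```python
import numpy as np

def prediction_matrices(A, Bphi, Br, N2):
    """Stacked N2-step predictor matrices F, G_phi, G_r of (4.58)."""
    nz = A.shape[0]
    F = np.vstack([np.linalg.matrix_power(A, n) for n in range(N2 + 1)])
    def build_G(B):
        m = B.shape[1]
        M = np.zeros(((N2 + 1) * nz, (N2 + 1) * m))
        for n in range(1, N2 + 1):          # block row n predicts z(k+n|k)
            for j in range(n):              # effect of the input at time k+j
                M[n*nz:(n+1)*nz, j*m:(j+1)*m] = \
                    np.linalg.matrix_power(A, n - 1 - j) @ B
        return M
    return F, build_G(Bphi), build_G(Br)

def rh_gains(F, Gphi, Gr, Q, rho, N2):
    """Unconstrained receding horizon gains L_z, L_r of (4.60)-(4.61)."""
    IQ = np.kron(np.eye(N2 + 1), Q)         # block diagonal with Q instances
    H = rho * np.eye(Gphi.shape[1]) + Gphi.T @ IQ @ Gphi
    Lz = np.linalg.solve(H, Gphi.T @ IQ @ F)
    Lr = np.linalg.solve(H, Gphi.T @ IQ @ (Gr - np.eye(Gr.shape[0])))
    return Lz, Lr
```

Since A, B_φ, and B_r in (4.54) depend on the current v, these matrices would have to be recomputed (or tabulated) when the velocity changes.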

Directly using z(k) in (4.60) has a drawback. For large distance errors, the constant gains in the controller cause unintended large orientation errors. Instead, consider the nonlinear scheduled controller

    Φ(k) = −L_z [ sin(2θ̃(k))/(2θ̃(k))  0 ; 0  1 ] z(k) − L_r R(k).    (4.62)

This will reduce the control gain on d when θ̃ becomes large and hence reduce the orientation error for large distance errors. This is illustrated in Figure 4.12 for a path following controller with and without the nonlinear scheduling in (4.62).

Figure 4.12. Straight path following with (solid) and without (dashed) nonlinear scheduling for an initial distance error of 1 m.

The scaling approach from Section 4.3.3 can be applied straightforwardly. The scaling vector γ = (γ(0), ..., γ(N2))^T is selected such that

    γ(n) P [ 1 ; φ(k+n) ] v_des(n) = γ(n) P′(n) ≤ q,    n = 0, ..., N2,    (4.63)

is satisfied by using (4.45).


Reference Estimation

The predictive controller needs a vector, θ_r(k) = (θ_r(k), ..., θ_r(k+N2))^T, with N2+1 future orientation references, one for each prediction, such that the reference vector R(k) = (r(k), ..., r(k+N2))^T can be completed with elements r_i = (0, θ_r(i))^T.

For a straight path with no intersections we simply have

    θ_r(k) = 1_{N2+1}(θ1),    (4.64)

where θ1 is the orientation of the path.


Now, consider the turn illustrated in Figure 4.9. A difficulty exists because the reference vector is related to time but the turn is given by a position and orientation. Thus, we need to determine at what sampling instant the vehicle reaches the corner in order to relate the positional information of the corner to the reference vector. This is done by using information such as the momentary distance to the corner and an estimate of the vehicle's velocity.

The reference change is defined as the time k_step where the distances from the robot to the two path sections are equal, that is

    d = tan(α) s    or    d = tan(α + π/2) s.    (4.65)

Geometrically, (4.65) defines the two bisection lines shown in Figure 4.9. At the time k_step, the distance error d must change from being measured with respect to the incoming path section to the outgoing path section, and s should then be directed towards the next corner, if any. At time k, define τ̂(k) by k_step = k + τ̂(k). The orientation reference vector is thus given as

    θ_r(k) = [ 1_{τ̂(k)}(θ1)  1_{N2−τ̂(k)+1}(θ2) ]^T,    (4.66)

where θ1 and θ2 are given by the orientation and direction of the corner. If for example the path is oriented along the x-axis with a left turn along the y-axis, then θ1 = 0 and θ2 = π/2.
Since the velocity changes due to the velocity scaling, the arrival at the corner, and thus the sampling instant where the reference should change, must be based on an estimation of the robot's posture. Based on the odometry model (4.2) we have


    s(k+1) = s(k) + T v(k) cos( θ(k) − θ_r(k) + T ω(k)/2 )
    d(k+1) = d(k) + T v(k) sin( θ(k) − θ_r(k) + T ω(k)/2 )    (4.67)
    θ(k+1) = θ(k) + T ω(k).

This model's n-step predictor is easily found by iterating the equations (4.67) as in (4.51):

    θ̂(k+n|k) = θ(k) + T Σ_{i=k}^{k+n−1} ω(i)
    d̂(k+n|k) = d(k) + T Σ_{i=k}^{k+n−1} v(i) sin( θ(k) − θ_r(k) + (T/2) ω(i) + T Σ_{j=k}^{i−1} ω(j) )    (4.68)
    ŝ(k+n|k) = s(k) + T Σ_{i=k}^{k+n−1} v(i) cos( θ(k) − θ_r(k) + (T/2) ω(i) + T Σ_{j=k}^{i−1} ω(j) ).

From (4.65) and (4.68) we can estimate τ̂(k). For θ2 − θ1 > 0 we get

    τ̂(k) = min_n { n | d̂(k+n|k) ≥ tan(α + π/2) ŝ(k+n|k)  ∨  d̂(k+n|k) ≤ tan(α) ŝ(k+n|k) },    (4.69)

while for θ2 − θ1 < 0 the inequality signs in (4.69) are opposite.

The Algorithm

This concludes the linear predictive receding horizon controller, defined by the control law (4.62), the scaling (4.63), the estimation of τ̂ in (4.69), and the reference vector (4.66). Algorithm 4.3.1 illustrates a sampled controller using this approach.

Algorithm 4.3.1 Sketch of linear receding horizon controller

LinearRecedingHorizon(k, R(k))
 1  z(k) ← measurement of d, θ
 2  Φ(k) ← −L_z [ sin(2θ̃(k))/(2θ̃(k))  0 ; 0  1 ] z(k) − L_r R(k)
 3  γ ← minimum values satisfying all constraints
 4  γ(k) ← (γ)_1
 5  φ(k) ← (Φ)_1
 6  v(k) ← γ(k) v_des
 7  ω(k) ← v(k) φ(k)
 8  τ̂(k+1) ← estimate reference crossover
 9  R(k+1) ← new reference
10  k ← k+1
11  return ω(k), v(k)

Clearly, the number of calculations for each sampling is reduced significantly compared to the nonlinear approach since the controller can be solved explicitly.

4.3.6 Conceptual Justification of Approach

This section verifies the usefulness of the linear receding horizon approach by means of a simulation study. The parameters are chosen as T = 0.04 s, N2 = 100, ρ = 0.0001, and Q = diag(10, β) with β = 0.02, if not stated otherwise. The desired velocity is 0.2 m/s, and the prediction horizon is then equivalent to detecting a corner 0.8 m before reaching it, desired speed assumed. The velocity constraints on the robot are chosen as

    wheels:   −0.25 ≤ v_r, v_l ≤ 0.25 [m/s]
    forward:  −0.05 ≤ v ≤ 0.20 [m/s]
    angular:  −2π/10 ≤ ω ≤ 2π/10 [rad/s]
Straight Path

Consider the task of following a wall with θ1 = 0 and initial starting point (0, 1 m, 0)^T. Figure 4.13 shows the trajectory and the scaled controller outputs along with the constraints. The resulting xy-trajectories for the unconstrained and constrained closed-loop systems are equal; only the time responses differ. For the unconstrained controller, x = 1 is reached after 7.6 seconds, while for the constrained controller the time is 9.56 seconds due to the scaling.

Figure 4.13. Straight path with velocity constraints and scaling. (a) xy-trajectory; (b) velocity scaling.

Intersection

At first, a 90° turn is considered. Figure 4.14 shows the xy-trajectories for a number of different initial starting positions and indicates a good robustness to initial conditions. The trajectory with x0 = (−0.5, 0.5, 0) breaks off after a short time due to an early change in the orientation reference, caused by the predicted position of the vehicle being closer to the second path section than to the first. The trajectory is therefore in full compliance with the intended behavior.

To illustrate the scaling of the velocities and the fulfillment of the constraints, Figure 4.15 displays time histories of the scaled velocities (both left/right wheel and forward/angular) for the initial position x0 = (−1, 0, 0)^T.
It is seen how the forward velocity gives way to an increase in the angular velocity, which secures a safe smooth turn while fulfilling the imposed constraints. The on-line estimation of τ̂ induces a small fluctuation in the signals. This, however, could be reduced by low-pass filtering the estimate.

Now, consider different turns with θ1 = 0 and θ2 = 30°, 60°, 90°, 120°, and 150°, respectively. Figure 4.16 shows the xy-trajectories for the different turns, where the exact same parameter setting has been used for all the turns. This demonstrates


Figure 4.14. Path following for different initial positions.

Figure 4.15. Scaled velocities. Top plot: forward (solid) and angular (dashed) velocity. Bottom plot: wheel velocities v_l (solid) and v_r (dashed). Constraints are shown with dotted lines.

Figure 4.16. Path following for different turns.

a good robustness. Even for very sharp turns (θ2 ≥ 120°) there is hardly any overshooting.

Parameter Robustness

Next, we consider parameter robustness and tuning capabilities. Figure 4.17 shows a 90° turn with different values of β and ρ (small, medium, and large values). Variations of the two parameters β and ρ have similar effects on the closed-loop xy-trajectory. In particular, for large values of ρ or small values of β the response tends to overlap the path sections at all times. This is possible in spite of the velocity constraints because the forward velocity is allowed to tend to zero at that point. The drawback is that the deceleration of the vehicle becomes very abrupt.

4.3.7 Experimental Results

The mobile robot test bed described in Sections 4.1 and 4.2.5 is used in the following to experimentally verify the linear receding horizon approach.


Figure 4.17. Different values of β and ρ. Small (solid), medium (dashed), and large (dash-dotted) values. (a) Varying β, fixed ρ; (b) varying ρ, fixed β.

To illustrate the ability of predictive turning, the vehicle is commanded to follow a triangular path layout with turns of 90, 120, and 150 degrees, respectively. The baseline of the triangle is 2 meters long and the vehicle is initiated from position (x, y, θ) = (−1, 0, 0). Figure 4.18 shows the reference trajectory along with the measured outcome of the experiment. Clearly, the vehicle is able to follow the path even through rather sharp turns. However, some overshooting is present, and it increases with the degree of turning. This overshooting was not significant in the simulation study and has to do with the dynamics of the vehicle and the motor controllers. So far (including the simulation study), it has been assumed that the calculated velocity references (v, ω) are instantaneously followed by the vehicle. This is a simplification which is only fulfilled to some degree in a real physical system due to dynamics in the motor controller loops. The actual vehicle velocities can be measured using the wheel encoders, and in Figure 4.19 these are depicted along with the references calculated by the receding horizon approach, from just before the 150° turn until just after the turn. As expected, the tracking is not perfect. The following lists some plausible reasons.


Figure 4.18. Results from a path following experiment with the test vehicle. Reference path (dotted), measured path (solid).

Figure 4.19. The actual measured velocities (solid) compared to the desired velocity references (dashed).







- The motor controllers are badly tuned or system parameters are incorrectly determined.
- Friction (stiction and Coulomb) introduces a delay in the tracking due to the required integral action in the motor controllers.
- Rate constraints (current limitations) may be active in the motor amplifiers when rapidly changing the velocity references.
- Uneven surfaces introduce disturbances.
No significant effort has been put into trying to reduce the velocity tracking problem because, despite the overshooting, the usability of the receding horizon approach is justified.

Figure 4.19 also shows how the forward velocity v is reduced (scaled) when large angular velocities ω are required.

4.3.8 Conclusions

This section has presented a receding horizon controller for the path following problem for a mobile robot. The key results are the following.

- A receding horizon approach provides a useful way of closed-loop controlling a mobile robot through sharp turns.
- On-line velocity scaling is an easy way of respecting velocity constraints without a significant increase in the computational burden.
- The presented linear algorithm is simple and fast and is easily implemented in a robot control system.
- The simulation study indicates good robustness to the degree of turn and the initial starting position.
- Parameter tuning allows one to specify the smoothness of the xy-trajectory near the intersection point.
- Experimental results have shown real-time usability of the approach but also indicated, for this specific implementation, problems regarding the velocity tracking capabilities.


A drawback of the presented approach is the lack of optimality in the constrained


control problem. However, results reveal good performance and the benefits from
a fast real-time algorithm are significant.

4.4 Posture Stabilization with Noise on Measurements


A challenging control task for a unicycle mobile robot is posture stabilization, where the goal is to stabilize the position and orientation of the robot to a given arbitrary constant reference r. Without loss of generality we assume the equilibrium is 0. The task arises in practice in connection with, for example, docking or parking of a vehicle. Due to the nonholonomic constraints on the robot, which restrict it from traveling sideways as the forward velocity tends to zero, the solution to this kind of problem is non-trivial.

The following considers a control strategy which is capable of posture stabilizing the robot. It turns out that the algorithm is particularly sensitive to measurement noise due to the mobility constraints, and it calls for special attention when applied. The following will quantify the influence of the noise on the control performance and will design filters reducing the effects and improving the stabilization.

A more elaborate analysis of the discussion in this section can be found in Bak (2000).

4.4.1 Periodically Updated Open-loop Controls

The kinematic model of the unicycle robot in (4.1) can be changed into a simpler form by the following local change of coordinates and control inputs

    (x1, x2, x3) = (x, y, tan θ)
    (u1, u2) = ( v cos θ,  ω / cos²θ ),

which transforms the system in (4.1) into

    ẋ1 = u1
    ẋ2 = x3 u1    (4.70)
    ẋ3 = u2.

This system belongs to the class of chained systems or driftless systems. Note that the model is only valid for θ ∈ ]−π/2, π/2[. The original control signals are easily reconstructed in the following way

    v = u1 / cos θ
    ω = cos²θ u2.    (4.71)

The controllability of the system is guaranteed by the full-rankedness of the Control Lie Algebra³, but since the linearization of (4.70) is not stabilizable, it is necessary to rely on nonlinear techniques for stabilizing control design. Brockett's condition (Brockett, 1983) furthermore implies that no pure-state feedback law u(x) can asymptotically stabilize the system.
The Algorithm

Morin and Samson (1999) suggest an algorithm that achieves exponential stabilization of (4.70) along with robustness with respect to imperfect knowledge of the system's control vector fields. The control strategy consists in applying periodically updated open-loop controls that are continuous with respect to state initial conditions. Let the state z = (x1, x2, x3)^T be observed by a measurement y(k) = (y1, y2, y3)^T = z(k) at time t = kT, where k ∈ Z and T is a constant sampling time. On the time interval [kT, (k+1)T[ the control signals are then defined by:

    u1(y, t) = (1/T) [ −y1(k) + 2π a |y2(k)|^{1/2} sin(Ω t) ]
    u2(y, t) = (1/T) [ −y3(k) + 2(k2 − 1) ( y2(k) / (a |y2(k)|^{1/2}) ) cos(Ω t) ],    (4.72)

with the constraints

    T = 2π/Ω  (Ω ≠ 0),    |k2| < 1,    a > 0.    (4.73)

³ The dimension of the vector space spanned at zero by all the Lie brackets of the vector fields f_i must equal the system order n (Isidori, 1995).


The control parameters are thus a and k_2 and to some degree T (or ω) where k_2
governs the convergence rate of x_2 while a governs the size of the oscillations in
the x_1 and x_3 directions.
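In code, the control law (4.72) amounts to holding the measurement for one period and replaying a sinusoid. The following sketch assumes the notation above with ω = 2π/T; the function name and the handling of the degenerate case y_2(k) = 0 are illustrative, not from the thesis:

```python
import math

def open_loop_controls(y, t, T, a, k2):
    """Periodically updated open-loop controls (4.72) on [kT, (k+1)T[,
    computed from the measurement y = (y1, y2, y3) sampled at t = kT."""
    y1, y2, y3 = y
    w = 2.0 * math.pi / T                  # omega = 2*pi/T, see (4.73)
    s = math.sqrt(abs(y2))                 # |y2(k)|^(1/2)
    u1 = (-y1 + 2.0 * math.pi * a * s * math.sin(w * t)) / T
    # the u2 oscillation is undefined for y2(k) = 0; use the limit 0 there
    osc = y2 / (a * s) if s > 0.0 else 0.0
    u2 = (-y3 + 2.0 * (k2 - 1.0) * osc * math.cos(w * t)) / T
    return u1, u2
```

Averaged over one period, the oscillatory terms vanish and the constant parts drive y_1 and y_3 to zero, which is exactly the discrete-time behavior derived below.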
Figure 4.20 illustrates the mode of operation for the controller with k_2 = 0.1,
a = 0.25, T = 10 seconds, and initial posture z_0 = (0.1 m, 0.5 m, 10°)^T.

Figure 4.20. Posture stabilization with exponential robust controller. (a) Input. (b) xy-trajectory.

Stability
Discrete-time closed-loop stability is easily verified for this system. By applying
the controller (4.72) to the system (4.70), the solution to ż on the time interval
[kT, (k+1)T[ can be found by first integrating ẋ_1 and ẋ_3 and then ẋ_2. For example:

    x_1(t) = x_1(k) + ∫_{kT}^{t} u_1(y, τ) dτ.               (4.74)

Appendix B gives details on the calculations. Evaluating z(t) for t = (k+1)T, the
discrete-time closed-loop solution becomes


    x_1(k+1) = x_1(k) - y_1(k)

    x_2(k+1) = x_2(k) + (1/2) y_1(k) y_3(k) + a y_3(k) |y_2(k)|^(1/2)
               + (k_2 - 1) y_2(k) - x_3(k) y_1(k)            (4.75)

    x_3(k+1) = x_3(k) - y_3(k).

Assuming perfect measurements (y(k) = z(k)) we get

    x_1(k+1) = 0                                             (4.76)

    x_2(k+1) = k_2 x_2(k) + x_3(k) ( a |x_2(k)|^(1/2) - (1/2) x_1(k) )
             = k_2^(k+1) x_20 + k_2^k x_30 ( a |x_20|^(1/2) - (1/2) x_10 )    (4.77)

    x_3(k+1) = 0,                                            (4.78)

with initial conditions z(0) = (x_10, x_20, x_30)^T. For |k_2| < 1, as required, x_2(k) →
0 for k → ∞ and hence stability is obtained.
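The closed-loop map (4.75)–(4.78) is easily checked numerically. The sketch below iterates the noise-free map with the controller settings used in Figure 4.20 (names illustrative):

```python
import math

def step(z, a, k2):
    """One period of the discrete closed-loop map (4.75) with perfect
    measurements y(k) = z(k)."""
    x1, x2, x3 = z
    y1, y2, y3 = x1, x2, x3
    return (x1 - y1,
            x2 + 0.5 * y1 * y3 + a * y3 * math.sqrt(abs(y2))
               + (k2 - 1.0) * y2 - x3 * y1,
            x3 - y3)

# initial posture in chained coordinates, roughly as in Figure 4.20
z = (0.1, 0.5, math.tan(math.radians(10.0)))
for _ in range(30):
    z = step(z, a=0.25, k2=0.1)
```

After a single period x_1 and x_3 are exactly zero, and x_2 thereafter contracts by the factor k_2 per period, in agreement with (4.77).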

4.4.2 Noise Sensitivity


Consider the following example. Let y_2(k) = x_2(k) + e_2(k) where {e_2} is a sequence of Gaussian white noise with mean value 0 and variance (0.005 m)². Figure 4.21 shows how such a noisy measurement affects the steady-state behavior
of the posture stabilized system and considerably deteriorates both x_1 and x_3. To
compensate for a small error in x_2, the vehicle needs to either make a large turn or
make a large overshoot. Noise on the y_1 or y_3 measurements is not nearly as
worrying since it only influences x_2 little.

Figure 4.21. Noisy y_2-measurements, x_1 (dashed), x_2 (dotted), and x_3 (solid).


Two kinds of white noise disturbing x2 (k ) will be considered: a uniform distribution on the interval [ e; e] and a Gaussian distribution with mean value 0 and
variance r2 . For the uniform distributed noise we shall consider the supremum values that z (t) takes on while for the Gaussian distributed noise we shall consider
mean values and variances.
If we assume that y1 (k ) and y3 (k ) are perfect measured, the steady-state solution
for z (t) where x1 (k ) = 0; x2 (k ) = 0 becomes

x1 (t) = ajy2 (k)j 2 (1 cos(t))


1
x2 (t) = y2 (k)(k2 1)(t cos(t) sin(t)) + x2 (k)
2
k2 1 y2 (k)
x3 (t) =
sin(t);
a jy2 (k)j 12
1

(4.79)

and for t = (k+1)T we get:

    x_1(k+1) = 0                                             (4.80)

    x_2(k+1) = k_2 x_2(k) + (k_2 - 1) e_2(k)
             = k_2^(k+1) x_20 + (k_2 - 1) Σ_{i=0}^{k} k_2^(k-i) e_2(i)    (4.81)

    x_3(k+1) = 0.                                            (4.82)
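The recursion (4.81) shows that x_2 is driven by filtered noise; summing the geometric series gives the stationary variance (k_2 - 1)² r² Σ_i k_2^(2i) = ((1 - k_2)/(1 + k_2)) r², which the following sketch verifies numerically (names illustrative):

```python
def stationary_var_x2(k2, r2, terms=200):
    """Stationary variance of x2 implied by the recursion (4.81),
    x2(k+1) = k2*x2(k) + (k2 - 1)*e2(k), with white noise of variance r2:
    sum the geometric series (k2 - 1)^2 * r2 * sum_i k2^(2i) directly."""
    return sum((k2 - 1.0) ** 2 * r2 * k2 ** (2 * i) for i in range(terms))

k2, r2 = 0.1, 0.005 ** 2                 # settings from the example above
direct = stationary_var_x2(k2, r2)
closed = (1.0 - k2) / (1.0 + k2) * r2    # closed-form limit of the series
```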

The peak value for x_1 is clearly for t = (k + 1/2)T due to the cosine term while
x_3 peaks for either t = (k + 1/4)T or t = (k + 3/4)T due to the sine term. Finally,
x_2 takes its maximum for t = kT or t = (k+1)T due to the non-decreasing term
ω(t - kT) - cos(ωt) sin(ωt). Hence, given from (4.79) the relevant equations are:

    x_1(k + 1/2) = 2a |x_2(k) + e_2(k)|^(1/2)

    x_2(k+1) = k_2 x_2(k) + (k_2 - 1) e_2(k)                 (4.83)

    x_3(k + 1/4) = -x_3(k + 3/4) = ((k_2 - 1)/(πa)) (x_2(k) + e_2(k))/|x_2(k) + e_2(k)|^(1/2).

Table 4.3 lists the suprema, mean values, and variances for the states in (4.83)
given the two kinds of noise disturbances. Details on the calculations can be found
in Bak (2000), where the case of noise on all states is also examined, though only
approximate expressions are established.


                              Uniform                            Gaussian
State                         Supremum                           Expectation                                    Variance*

x_1(k + 1/2)                  2a (sup{y_2})^(1/2)                (2a/√π) 2^(1/4) Γ(3/4) (2r²/(1+k_2))^(1/4)     4a² (2/π)^(1/2) (2/(1+k_2))^(1/2) r
x_2(k)                        ((1-k_2)/(1-|k_2|)) e              0                                              ((1-k_2)/(1+k_2)) r²
x_3(k + 1/4), x_3(k + 3/4)    ((1-k_2)/(πa)) (sup{y_2})^(1/2)    0                                              ((1-k_2)/(πa))² (2/π)^(1/2) (2/(1+k_2))^(1/2) r
y_2(k)                        ((1-k_2)/(1-|k_2|)) e + e          0                                              (2/(1+k_2)) r²

*For x_1(k + 1/2), the second moment E[x_1 x_1] is given.

Table 4.3. Quantification of noisy y_2 measurements' influence on system states. Γ(n) = ∫_0^∞ x^(n-1) e^(-x) dx.

As an example, consider the controller settings a = 0.25 and k_2 = 0.1. With a 1
mm measurement error on x_2 we get x_1(k + 1/2) = 22 mm.
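The 22 mm figure follows directly from the supremum expressions listed in Table 4.3, sup x_1(k + 1/2) = 2a (sup{y_2})^(1/2) with sup{y_2} = ((1 - k_2)/(1 - |k_2|)) e + e. A quick numerical check (variable names illustrative):

```python
import math

# controller settings and 1 mm uniform error bound from the example
a, k2, e = 0.25, 0.1, 0.001
sup_y2 = (1.0 - k2) / (1.0 - abs(k2)) * e + e    # sup{y2} from Table 4.3
sup_x1 = 2.0 * a * math.sqrt(sup_y2)             # sup x1(k + 1/2), about 0.022 m
```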

Regarding tuning of the controller, we can see from Table 4.3 that k_2 should be
chosen positive and close to 1 to minimize the suprema. However, k_2 close to one
will also increase the bias on x_1. A small value of a decreases the expectation
and variance on x_1 but increases the variance on x_3. As expected, there is a trade-off
between unintended turning and forward motion.

4.4.3 Filtering
Although the effect from the noise on x_2 is dominant, it is in terms of filtering
advantageous to assume noise on all three system states. This implies considering
a filter design for the nonlinear system in equation (4.75) where the measurement
y(k) will be replaced with an estimate ẑ(k) of the system state. The measurement
y(k) is assumed to be stochastically disturbed by white noise, i.e. a sequence of
mutually independent random variables with zero mean and covariance R. The
measurement of the system is thus given by

    y(k) = z(k) + e(k),    e(k) ∈ F(0, R).                   (4.84)

First, rewrite the system (4.75) as

    z(k+1) = z(k) + B u(k) + f(z(k), u(k)),                  (4.85)

where u(k) = ẑ(k) when state estimation is used. The vector function f includes
all nonlinear terms while B is a constant matrix. They are given by

    B = [ -1     0      0
           0   k_2-1    0                                    (4.86)
           0     0     -1 ]

    f(z(k), u(k)) = [ 0
                      0.5 u_1(k) u_3(k) + a u_3(k) |u_2(k)|^(1/2) - x_3(k) u_1(k)    (4.87)
                      0 ].

For this system we will consider a Luenberger-like observer of the following type:

    ẑ(k+1) = ẑ(k) + B u(k) + f(ẑ(k), ẑ(k))
             + K [ y(k+1) - (ẑ(k) + B u(k) + f(ẑ(k), ẑ(k))) ]    (4.88)
           = (I - K)[ ẑ(k) + B u(k) + f(ẑ(k), ẑ(k)) ] + K y(k+1),    (4.89)

which uses the most recent measurement to update the state.
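As a sketch, the predictor–corrector structure of (4.88) can be coded directly, with B and f taken from (4.86) and (4.87); the function names and the plain-tuple matrix representation are illustrative:

```python
import math

def f_nl(z, u, a):
    """Nonlinear part (4.87) of the one-period model (4.85)."""
    u1, u2, u3 = u
    return (0.0,
            0.5 * u1 * u3 + a * u3 * math.sqrt(abs(u2)) - z[2] * u1,
            0.0)

def observer_step(z_hat, y_next, K, a, k2):
    """Luenberger-like observer update (4.88): predict one period ahead with
    the model, then correct with the most recent measurement y(k+1)."""
    B = ((-1.0, 0.0, 0.0), (0.0, k2 - 1.0, 0.0), (0.0, 0.0, -1.0))
    u = z_hat                                     # u(k) = z_hat(k)
    Bu = tuple(sum(B[i][j] * u[j] for j in range(3)) for i in range(3))
    fz = f_nl(z_hat, u, a)
    pred = tuple(z_hat[i] + Bu[i] + fz[i] for i in range(3))
    # correction with the gain matrix K (3x3, rows as tuples)
    return tuple(pred[i] + sum(K[i][j] * (y_next[j] - pred[j]) for j in range(3))
                 for i in range(3))
```

With K = 0 the observer reduces to the pure model prediction; with K = I it returns the raw measurement, which frames the convergence/noise-rejection trade-off discussed below.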


By using y(k+1) = z(k+1) + e(k+1) the estimation error z̃ = z - ẑ can be
determined:

    z̃(k+1) = z(k) + B u(k) + f(z(k), u(k))
             - (I - K)[ ẑ(k) + B u(k) + f(ẑ(k), ẑ(k)) ] - K y(k+1)    (4.90)

           = (I - K) z̃(k) + (I - K) [ 0
                                      -x̃_3(k) x̂_1(k)       (4.91)
                                       0 ] - K e(k+1).

In order to get a linear error system of which stability is easily guaranteed we can
choose K so that the nonlinear term x̃_3(k) x̂_1(k) is canceled. The matrix could
be chosen time-varying as

    K(k) = [ λ_1   0    0
              0   λ_2  -(1 - λ_2) x̂_1(k)                    (4.92)
              0    0   λ_3 ],

yielding the error system

    z̃(k+1) = [ 1-λ_1    0      0
                 0     1-λ_2    0        ] z̃(k) - K(k) e(k+1).    (4.93)
                 0       0    1-λ_3

Convergence of the estimation error z̃ is then guaranteed for 0 < λ_i < 2, i =
1, 2, 3.

Obviously, the choice of the λ's is a compromise between fast convergence and
steady-state noise rejection. Small gains provide excellent noise rejection capabilities but slow convergence of the estimation error since more trust is put in the
kinematic model of the mobile vehicle. The process noise or model uncertainties
for the kinematic model mainly stem from movement of the robot due to inaccurate
physical parameters (covered in section 4.2) and finite encoder resolution. When
the speed of the robot tends to zero, the certainty of the model increases. All this
suggests letting the gains be a function of some norm of the posture error

    λ_i = f(||x||),                                          (4.94)

such that noise rejection is increased as the robot comes to a halt. Since the error
is unknown, the measurement is used instead. The usability of the measurements
depends on the estimation error being small upon reaching the equilibrium z = 0.
One way of selecting such a function is

    λ_i = λ_0i e^(-1/||y||),    i = 1, 2, 3,                 (4.95)

with

    0 < λ_0i < 2,    ∀i = 1, 2, 3

    ||y|| = ( Σ_{j=1}^{3} a_j |y_j|^p )^(1/q),    p, q > 0, a_j > 0.    (4.96)

The following properties are observed

    λ_i → 0    for ||y|| → 0
    λ_i → λ_0i for ||y|| → ∞,                                (4.97)

and since (4.95) is monotone, each λ will be within the stability bounds.
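The schedule (4.95) with the weighted norm (4.96) is compact in code. The sketch below uses the parameter values of the simulation study as defaults (p = 1, q = 2, a_j = 18); the function name is illustrative:

```python
import math

def gains(y, lam0, p=1.0, q=2.0, a=(18.0, 18.0, 18.0)):
    """Time-varying observer gains (4.95), lam_i = lam0_i * exp(-1/||y||),
    with the weighted norm (4.96), ||y|| = (sum_j a_j |y_j|^p)^(1/q)."""
    norm_y = sum(a[j] * abs(y[j]) ** p for j in range(3)) ** (1.0 / q)
    if norm_y == 0.0:
        return (0.0, 0.0, 0.0)      # limiting value as ||y|| -> 0
    return tuple(l0 * math.exp(-1.0 / norm_y) for l0 in lam0)
```

As ||y|| → 0 the gains vanish smoothly, so the observer relies increasingly on the kinematic model when the robot approaches the equilibrium.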


Simulation Study

Figure 4.22 shows the Euclidean norm⁴ of the estimation errors with time-varying
and constant λ's. The plots were created with the following parameter settings: p =
1, q = 2, a_i = 18, and λ_0i = 1 for i = 1, 2, 3. For the constant gains, λ_i = 0.2, i =
1, 2, 3 have been chosen. At steady-state, the improvement from using time-varying
gains in terms of the mean value of the normed estimation error is a factor 28
compared to no filtering and a factor 9 compared to constant gain filtering.

Figure 4.22. (a) Convergence of the estimation error for time-varying λ's (solid) and constant λ's (dashed). (b) Steady-state for time-varying λ's (solid), constant λ's (dashed), and
measurement noise (dotted).

Consider again the example from Section 4.4.2 that, with controller settings a =
0.25 and k_2 = 0.1 and a 1 mm measurement error on x_2, estimated the supremum
of x_1(k + 1/2) to 22 mm. With time-varying filtering, the same measurement error
gives a supremum of 1.6 mm and with constant gains the supremum is 8.4 mm.
⁴The Euclidean norm is defined as ||x||_2 = ( Σ_{i=1}^{n} x_i² )^(1/2).


4.4.4 Conclusions

This section has looked at the effect of noise on measurements used in closed-loop
to stabilize a 3 DoF chained system. Two kinds of noise have been considered: 1)
bounded noise and 2) Gaussian white noise. The second objective of this section
has been filter design for reducing the effect of noise on the measurements in a
closed-loop controlled 3 DoF chained system.

The first part of the study considered noise on x_2 but noiseless measurements of x_1
and x_3 and has evaluated expressions for suprema, expectations, and covariances
for the states.
Regarding filtering, a Luenberger-like observer was investigated with a nonlinear
gain matrix canceling nonlinearities in the system model and resulting in a linear
estimation error system. This has the advantage that convergence of the estimation
error is easily guaranteed. A time-varying observer gain has been proposed which
is a function of the norm of the posture error.
An alternative to the presented nonlinear filter is given in Nørgaard et al. (2000a)
where state estimators for nonlinear systems are derived based on an interpolation
formula.

4.5 Summary
This chapter examined a mobile robot. Three sub-studies covered separate issues
on constraints in robotics. Section 4.2 looked at how to bypass difficulties concerning posture estimation caused by the limitations in the sensor package on-board
the robot. An on-line auto-calibration procedure was presented which can reduce
drifting of the odometry and hence extend exploration time and increase trust in
the overall estimation.
In section 4.3 a receding horizon approach was introduced for path following in
the presence of velocity constraints. The receding horizon enables the vehicle to
anticipate sharp turns and smooth the turning. By means of a velocity scaler, velocity constraints can be imposed on the wheels as well as the forward and angular
velocities. The velocities are set such that constraints are satisfied and the nominal
path followed.
Finally, section 4.4 was concerned with posture stabilization of nonholonomic constrained systems. It was established that noisy measurements induce large limit
cycles in the closed-loop stabilization. Appropriate filtering was suggested.

Chapter 5

Conclusions

This thesis is concerned with aspects involved in designing and implementing controllers for systems with constraints. This includes a comparative study of design
methodologies, development of a gain scheduling constrained controller, an outline of a robot controller architecture which incorporates closed-loop handling of
constraints, and a substantial mobile robot case study with explicit solutions to
problems arising from limited sensor readings, velocity constraints, and wheel-configuration-induced nonholonomic constraints. This chapter summarizes the research effort presented in the thesis.

Primarily, the studied constraints originate from saturating actuators or restrictions
on state or output variables. Both level and rate constraints are under consideration. In robotic systems with multi-layered controllers and integrated subsystems,
constraints may take many shapes. Sufficient and reliable sensor information has
turned out to be a hurdle for closed-loop robot controllers. In mobile robotics, a
consistent flow of posture information is vital to solving many motion tasks. Much
effort has been put into fusing sensors, but sensor differences such as noise-to-signal ratio, sampling rate, delays, and data processing impede the performance
of the solutions. Another constraint is the so-called nonholonomic constraint that
stems from the mechanical construction of the system. It is basically a restriction
on the manoeuvrability of the system.


The thesis started with an overview and comparison of three fundamentally different existing methodologies for constrained control systems, namely Anti-Windup
and Bumpless Transfer (AWBT), predictive control, and nonlinear control, which
includes rescaling and gain scheduling. Predictive control is the only systematic approach but mainly applies to linear systems and is computationally heavy due to
the on-line solving of the minimization problem. AWBT is a versatile approach
applicable to already implemented controllers, but it can only manage input level
saturations. Nonlinear solutions originate from a desire to guarantee stability and
to improve performance during constraints. Most nonlinear solutions are system
specific and limited in applicability.
A new nonlinear gain scheduling approach to handling level actuator constraints
was introduced. The class of systems is restricted to chains of integrators and systems in lower companion form. The saturation is handled by scheduling the closed-loop poles upon saturation. Good closed-loop performance is obtained through a
partly state, partly time dependent scheduler. The saturation limits are easy to specify and the saturation compensation is determined by only three parameters. The
controller does not guarantee the control signal being within the limits at all times.
This is equivalent to the AWBT compensation.
In a case study, the design of an AWBT compensation for an actuator constrained
PID controlled cascaded double tank system was treated. It was argued that the
compensation matrix has great influence on the closed-loop performance and hence
should be carefully selected. Moreover, the poles of the constrained controller
should be chosen close to the slowest poles of the unconstrained closed-loop system. The generality of this study is debatable except for systems of similar characteristics.
The architecture for a robot control system with sensor-based closed-loop trajectory
control has been described. Traditionally, in robotics, trajectories are generated off-line and fed to a tracking controller at a certain given rate. Such a system will fail
to handle internal active system constraints or external uncertain events such as
upcoming obstacles. However, by incorporating relevant sensory information into
the trajectory executor, the trajectory can be modified or the execution rate changed,
allowing for appropriate handling of the constraints. This way the robot can, in a
closed-loop fashion, react to events and constraints without aborting the motion and
embarking on replanning or switching to some contingency plan. Furthermore, the
handling of actuator constraints in the trajectory module provides good control of
the actual trajectory outcome for the constrained system.

Three sub-studies on auto-calibration, receding horizon control, and posture stabilization were carried out for a mobile robot. The following conclusions were drawn.
On-line calibration of physical parameters can reduce the drifting of posture estimation and can be used to extend the period of time for exploration in unknown
environments or improve trajectory control such as path following. By splitting the
calibration procedure into two steps, observability and hence convergence rate are
much improved. The accuracy of the estimated values is of the same order as a
calibration using existing manual procedures. The two filters are easy to implement
and the method is well suited for automating the calibration task, allowing for more
autonomous vehicles.
A receding horizon approach was introduced for path following in the presence of
velocity constraints. The method can be applied to sharp turns. On-line velocity
scaling is an easy way of respecting velocity constraints while maintaining path
following without a significant increase in the computational burden. The algorithm is simple and fast and is easily implemented into a robot control system.
The simulation study indicated good robustness to the degree of turning and the
initial starting position. The parameter tuning allows one to specify the smoothness of
the trajectory near the intersection point. The approach was experimentally verified,
although the experiments exhibited some overshooting.
The nonholonomic constraints present in a unicycle mobile robot complicate the
task of posture stabilization. Noisy measurements are exceptionally problematic in
this case, where measurement errors on the position in the sideways direction of the
robot should be avoided or limited. Through appropriate nonlinear filtering of the
measurements, the stabilization is improved.
The topic of constrained control systems is large and receives a lot of research interest in both journals and conferences. This thesis has touched upon aspects with relevance to controller design, robotics and in particular mobile robotics. Questions
have been answered, new ones have appeared. With the appearance of smarter
sensors, faster computers, and more reliable data processing, the design and implementation of truly autonomous control systems with capabilities of handling
constraints of all kinds (actuator and state constraints along with environment-imposed constraints) are of great interest and should attract further investigations.

Thank you for your attention.

Appendix A

Linearized System for Estimating Systematic Errors

The structure of the linearized augmented system for determination of the uncertainties Δk and Δb is given in equations (4.18) and (4.19). Here the matrices are
given explicitly. Note that c(·) and s(·) denote cos(·) and sin(·), respectively.
    A = [ 1   0   -u_1 s(ξ)
          0   1    u_1 c(ξ)                                  (A.1)
          0   0      1      ]

    F = [ c(ξ) - (1/2) u_1 s(ξ)   -(1/2) u_1 s(ξ)
          s(ξ) + (1/2) u_1 c(ξ)    (1/2) u_1 c(ξ)            (A.2)
          ·                         ·             ]

    G = [ c(ξ) - (1/2) u_1 s(ξ)   c(ξ) - (1/2) u_1 s(ξ)
          s(ξ) + (1/2) u_1 c(ξ)   s(ξ) + (1/2) u_1 c(ξ)      (A.3)
          ·                        ·                    ]

with following definitions


    ξ = x_3 + u_2                                            (A.4)
    · = (k_r r_r - k_r r_l)/2                                (A.5)
    · = (k_r r_r + k_r r_l)/(x_5 b)                          (A.6)
    · = x_4 k_r r_r (2ka - x_4 k_r r_l)/(x_5² b)             (A.7)
    · = (1/2)(ka - x_4 k_r)                                  (A.8)
    · = x_4 k_r/(x_5 b)                                      (A.9)
    · = (2ka + x_4 k_r)/(x_5 b)                              (A.10)
    · = (1/2) x_4 k_r.                                       (A.11)

Appendix B

Closed-loop Solution to Chained System

This appendix determines the closed-loop solution to the posture stabilized chained
system described in section 4.4.

Let a 3 DoF chained system be described by the equations

    ẋ_1 = u_1
    ẋ_2 = x_3 u_1                                            (B.1)
    ẋ_3 = u_2.
The state z = (x_1, x_2, x_3)^T is measured with y(k) = (y_1, y_2, y_3)^T = z(k) at time
t = kT, where k ∈ Z and T is a constant sampling time. On the time interval
[kT, (k+1)T[ the control signals (u_1, u_2) are then defined by:

    u_1(y, t) = (1/T) [ -y_1(k) + 2πa |y_2(k)|^(1/2) sin(ωt) ]

    u_2(y, t) = (1/T) [ -y_3(k) + 2(k_2 - 1) (y_2(k)/(a |y_2(k)|^(1/2))) cos(ωt) ],    (B.2)

with the constraints

    T = 2π/ω  (ω ≠ 0)
    |k_2| < 1                                                (B.3)
    a > 0.


The control parameters are a and k_2 and to some degree T (or ω) where k_2 governs
the convergence rate of x_2 while a governs the size of the oscillations in the x_1 and
x_3 directions.

By applying the controller (B.2) to the system (B.1), the solution to ż on the time
interval [kT, (k+1)T[ can be found by first integrating ẋ_1, then ẋ_3, and finally ẋ_2:

    x_1(t) = x_1(k) + ∫_{kT}^{t} u_1(y, τ) dτ
           = x_1(k) - y_1(k) (t - kT)/T + a |y_2(k)|^(1/2) (1 - cos(ωt))    (B.4)

    x_3(t) = x_3(k) + ∫_{kT}^{t} u_2(y, τ) dτ
           = x_3(k) - y_3(k) (t - kT)/T + ((k_2 - 1)/(πa)) (y_2(k)/|y_2(k)|^(1/2)) sin(ωt).    (B.5)

Now, given x_3 and u_1 we get for x_2(t):

    x_2(t) = x_2(k) + ∫_{kT}^{t} x_3(τ) u_1(y, τ) dτ

           = x_2(k) + ((k_2 - 1)/(2π)) y_2(k) (ω(t - kT) - cos(ωt) sin(ωt))
             - ((t - kT)/T) y_1(k) x_3(k)
             + ((0.5(t² - (kT)²) - kT(t - kT))/T²) y_3(k) y_1(k)
             - ((k_2 - 1)/(2π²a)) (y_1(k) y_2(k)/|y_2(k)|^(1/2)) (1 - cos(ωt))
             + a |y_2(k)|^(1/2) x_3(k) (1 - cos(ωt))
             + (a/(2π)) |y_2(k)|^(1/2) y_3(k) (ω(t - kT) cos(ωt) - sin(ωt)).    (B.6)

Bibliography

Andersen, N. and Ravn, O. (1990). Vision as a multipurpose sensor in real-time
systems. In Proceedings of the ISMM International Symposium. Mini and Microcomputers and their Applications, pp. 196–199.

Åström, K. J. (1970). Introduction to Stochastic Control Theory. Academic Press,
New York.

Åström, K. J. and Rundqwist, L. (1989). Integrator windup and how to avoid it.
In Proceedings of the American Control Conference, pp. 1693–1698, Pittsburgh,
Pennsylvania.

Åström, K. J. and Wittenmark, B. (1990). Computer Controlled Systems, Theory
and Practice. Prentice-Hall, Upper Saddle River, New Jersey, 2nd edition.

Åström, K. J. and Wittenmark, B. (1997). Computer Controlled Systems, Theory
and Practice. Prentice Hall, Upper Saddle River, New Jersey, 3rd edition.

Bak, M. (2000). Filter design for robust stabilized chained system with
noise on measurements. Technical Report 00-E-895, http://www.iau.dtu.dk/
mba/homepage/research, Department of Automation, Technical University of
Denmark.

Bak, M., Larsen, T., Nørgaard, P., Andersen, N., Poulsen, N., and Ravn, O. (1998).
Location estimation using delayed measurements. In Proceedings of the 5th
International Workshop on Advanced Motion Control, pp. 180–185, Coimbra,
Portugal.


Bak, M., Larsen, T. D., Andersen, N. A., and Ravn, O. (1999). Auto-calibration of
systematic odometry errors in mobile robots. In Proceedings of SPIE, Mobile
Robots XIV, pp. 252–263, Boston, Massachusetts.

Bak, M., Poulsen, N. K., and Ravn, O. (2001). Receding horizon approach to path
following mobile robot in the presence of velocity constraints. Submitted to the
European Control Conference, Porto, Portugal.

Bazaraa, M., Sherali, H., and Shetty, C. (1993). Nonlinear Programming. Theory
and Algorithms. John Wiley and Sons, New York.

Bemporad, A., Marco, M. D., and Tesi, A. (1997). Wall-following controllers for
sonar-based mobile robots. In Proceedings of the 36th IEEE Conference on
Decision and Control, pp. 3063–3068, San Diego, California.

Bentsman, J., Tse, J., Manayathara, T., Blaukanp, R., and Pellegrinetti, G. (1994).
State-space and frequency domain predictive controller design with application
to power plant control. In Proceedings of the IEEE Conference on Control Applications, volume 1, pp. 729–734, Glasgow, Scotland.

Bitmead, R. R., Gevers, M., and Wertz, V. (1990). Adaptive Optimal Control, The
Thinking Man's GPC. Prentice Hall, New York.

Borenstein, J., Everett, H. R., and Feng, L. (1996). Where am I? Sensors and methods for mobile robot positioning. Technical report, The University of Michigan.

Borenstein, J. and Feng, L. (1995). UMBmark: A benchmark test for measuring
odometry errors in mobile robots. In Proceedings of the 1995 SPIE Conference
on Mobile Robots, volume 2591, pp. 113–124, Philadelphia, Pennsylvania.

Borenstein, J. and Feng, L. (1996). Measurement and correction of systematic
odometry errors in mobile robots. IEEE Transactions on Robotics and Automation, 12(6), 869–880.

Brockett, R. (1983). Differential Geometric Control Theory. Birkhäuser, Boston,
Massachusetts.

Camacho, E. (1993). Constrained generalized predictive control. IEEE Transactions on Automatic Control, 38(2), 327–332.

Camacho, E. and Bordons, C. (1999). Model Predictive Control. Springer-Verlag,
London.


Chenavier, F. and Crowley, J. L. (1992). Position estimation for a mobile robot
using vision and odometry. In Proceedings of the 1992 IEEE International Conference on Robotics and Automation, volume 3, pp. 2588–2593, Nice, France.

Chisci, L., Lombardi, A., Mosca, E., and Rossiter, J. A. (1996). State-space approach to stabilizing stochastic predictive control. International Journal of Control, 65(4), 619–637.

Chisci, L. and Zappa, G. (1999). Fast QP algorithms for predictive control. In
Proceedings of the 38th IEEE Conference on Decision and Control, pp. 4589–4594, Phoenix, Arizona.

Clarke, D., Mothadi, C., and Tuffs, P. (1987a). Generalized predictive control.
Part I. The basic algorithm. Automatica, 23(2), 137–148.

Clarke, D., Mothadi, C., and Tuffs, P. (1987b). Generalized predictive control.
Part II. Extensions and interpretations. Automatica, 23(2), 149–160.

Corke, P. I. (1996). Visual Control of Robots: High Performance Visual Servoing.
Research Studies Press Ltd./John Wiley and Sons.

Cox, I. J. (1989). Blanche: Position estimation for an autonomous robot vehicle. In
Proceedings of the IEEE/RSJ International Workshop on Intelligent Robots and
Systems, pp. 285–292.

Craig, J. J. (1989). Introduction to Robotics: Mechanics and Control. Addison-Wesley, Reading, Massachusetts.

Cutler, C. R. and Ramaker, B. L. (1980). Dynamic matrix control, a computer
control algorithm. In Proceedings of JACC, San Francisco, California.

Dahl, O. (1992). Path Constrained Robot Control. PhD thesis, Lund Institute of
Technology.

de Wit, C. C., Siciliano, B., and Bastin, G. (1996). Theory of Robot Control.
Springer-Verlag, London.

Dickmanns, E. and Zapp, A. (1987). Autonomous high speed road vehicle guidance by computer vision. In Proceedings of the 10th IFAC World Congress,
volume 4, pp. 232–237, München, Germany.

Dornheim, M. A. (1992). Report pinpoints factors leading to YF-22 crash. Aviation
Week and Space Technology.


Doyle, J., Smith, R., and Enns, D. (1987). Control of plants with input saturation
nonlinearities. In Proceedings of the American Control Conference, pp. 2147–2152, Minneapolis, USA.

Doyle, J. C., Francis, B. A., and Tannenbaum, A. R. (1992). Feedback Control
Theory. Macmillan Publishing Company, New York.

Edwards, C. and Postlethwaite, I. (1996). Anti-windup and bumpless transfer
schemes. In Proceedings of the UKACC International Conference on Control,
pp. 394–399.

Edwards, C. and Postlethwaite, I. (1997). Anti-windup schemes with closed-loop
stability considerations. In Proceedings of the European Control Conference,
Brussels, Belgium.

Edwards, C. and Postlethwaite, I. (1998). Anti-windup and bumpless transfer
schemes. Automatica, 34(2), 199–210.

Ferruz, J. and Ollero, A. (1998). Visual generalized predictive path tracking. In
Proceedings of the 5th International Workshop on Advanced Motion Control
98, pp. 159–164, Coimbra, Portugal.

Fertik, H. and Ross, C. (1967). Direct digital control algorithms with anti-windup
feature. ISA Transactions, 6(4), 317–328.

Fierro, R. and Lewis, F. (1997). Control of a nonholonomic mobile robot: Backstepping kinematics into dynamics. Journal of Robotic Systems, 14(3), 149–163.

Franklin, G., Powell, J., and Emani-Naini, A. (1991). Feedback Control of Dynamic Systems. Addison-Wesley, Reading, Massachusetts.

García, C. E. and Morshedi, A. M. (1984). Solution of the dynamic matrix control
problem via quadratic programming. In Proceedings of the Conference of the
Canadian Industrial Computing Society, pp. 13.1–13.3, Ottawa, Canada.

García, C. E., Prett, D. M., and Morari, M. (1989). Model predictive control:
Theory and practice, a survey. Automatica, 25(3), 335–348.

Graebe, S. and Ahlén, A. (1994). Dynamic transfer among alternative controllers.
In Proceedings of the 12th IFAC World Congress, volume 8, pp. 245–248, Sydney, Australia.


Graebe, S. and Ahlén, A. (1996). Dynamic transfer among alternative controllers
and its relation to antiwindup controller design. IEEE Transactions on Control
Systems Technology, 4(1), 92–99.

Graebe, S. F. and Ahlén, A. (1996). The Control Handbook, chapter 20.2 Bumpless
Transfer, pp. 381–388. CRC Press, Boca Raton, Florida.

Hansen, A. D. (1996). Predictive Control and Identification. Applications to Steering Dynamics. PhD thesis, Department of Mathematical Modelling, Technical
University of Denmark.

Hanus, R. (1980). The conditioned control, a new technique for preventing
windup nuisances. In Proceedings of IFIP ASSOPO, pp. 221–224, Trondheim, Norway.

Hanus, R., Kinnaert, M., and Henrotte, J.-L. (1987). Conditioning technique, a
general anti-windup and bumpless transfer method. Automatica, 23(6), 729–739.

Hanus, R. and Peng, Y. (1992). Conditioning technique for controllers with time
delays. IEEE Transactions on Automatic Control, 37(5), 689–692.

Isidori, A. (1995). Nonlinear Control Systems. Springer-Verlag, London.

Kapoor, N., Teel, A., and Daoutides, P. (1998). An anti-windup design for linear
systems with input saturation. Automatica, 34(5), 559–574.

Kleeman, L. (1992). Optimal estimation of position and heading for mobile robots
using ultrasonic beacons and dead-reckoning. In Proceedings of the 1992 IEEE
International Conference on Robotics and Automation, volume 3, pp. 2582–2587, Nice, France.

Koh, K. and Cho, H. (1999). A smooth path tracking algorithm for wheeled mobile
robots with dynamic constraints. Journal of Intelligent and Robotic Systems:
Theory and Applications, 24(4), 367–385.

Kothare, M. (1997). Control of Systems Subject to Constraints. PhD thesis, Division of Engineering and Applied Science, California Institute of Technology.

Kothare, M. and Morari, M. (1997). Stability analysis of anti-windup control systems: A review and some generalizations. In Proceedings of the European Control Conference, Brussels, Belgium.


Kothare, M. V., Campo, P. J., Morari, M., and Nett, C. N. (1994). A unified framework for the study of anti-windup designs. Automatica, 30, 18691883.
Kothare, M. V. and Morari, M. (1999). Multiplier theory for stability analysis of
anti-windup control systems. Automatica, 35(5), 917928.
Kouvaritakis, B., Rossiter, J., and Cannon, M. (1998). Linear quadratic feasible
predictive control. Automatica, 34(12), 15831592.
Kuznetsov, A. and Clarke, D. (1994). Advances in Model-based Predictive Control,
chapter Application of Constrained GPC for Improving Performance of Controlled Plants. Oxford University Press.
Larsen, T., Bak, M., Andersen, N., and Ravn, O. (1998). Location estimation for
an autonomously guided vehicle using an augmented kalman filter to autocalibrate the odometry. In Proceedings of the 1998 International Conference on
Multisource-Multisensor Information Fusion, pp. 245250, Las Vegas, Nevada.
Larsen, T. D. (1998). Optimal Fusion of Sensors. PhD thesis, Department of
Automation, Technical University of Denmark.
Lauvdal, T. and Murray, R. M. (1999). Stabilization of a pitch axis flight control
experiment with input rate saturation. Modeling, Identification and Control,
40(4), 225–240.
Lauvdal, T., Murray, R. M., and Fossen, T. I. (1997). Stabilization of integrator
chains in the presence of magnitude and rate saturation, a gain scheduling approach. In Proceedings of the 36th IEEE Conference on Decision and Control,
pp. 4004–4005, San Diego, California.
Lildballe, J. (1999). Distributed Control of Autonomous Systems. PhD thesis,
Department of Automation, Technical University of Denmark.
Luenberger, D. (1984). Linear and Nonlinear Programming. Addison-Wesley,
Reading, Massachusetts.
Mayne, D. Q., Rawlings, J. B., Rao, C. V., and Scokaert, P. O. M. (2000). Constrained model predictive control: Stability and optimality. Automatica, 36(6),
789–814.
McKerrow, P. J. (1991). Introduction to Robotics. Addison-Wesley, Sydney.


M'Closkey, R. T. and Murray, R. M. (1997). Exponential stabilization of driftless nonlinear control systems using homogeneous feedback. IEEE Transactions on Automatic Control, 42(5), 614–628.
Middleton, R. H. (1996). The Control Handbook, chapter 20.1 Dealing with Actuator Saturation, pp. 377–381. CRC Press, Boca Raton, Florida.
Morin, P., Murray, R., and Praly, L. (1998a). Nonlinear rescaling and control laws
with application to stabilization in the presence of magnitude saturation. In Proceedings of the IFAC NOLCOS, volume 3, pp. 691–696, Enschede, The Netherlands.
Morin, P., Pomet, J.-B., and Samson, C. (1998b). Developments in time-varying
feedback stabilization of nonlinear systems. In Proceedings of the IFAC NOLCOS, volume 3, pp. 587–594, Enschede, The Netherlands.
Morin, P. and Samson, C. (1999). Exponential stabilization of nonlinear driftless
systems with robustness to unmodeled dynamics. ESAIM. Control, Optimisation
and Calculus of Variations, 4, 1–35.
Murata, S. and Hirose, T. (1993). Onboard locating system using real-time image processing for a self-navigating vehicle. IEEE Transactions on Industrial
Electronics, 40(1), 145–154.
Murray, R. M. (1999). Geometric approaches to control in the presence of magnitude and rate saturations. Technical Report 99-001, Division of Engineering and
Applied Science, California Institute of Technology.
Niu, W. and Tomizuka, M. (1998). A robust anti-windup controller design for
motion control system with asymptotic tracking subjected to actuator saturation.
In Proceedings of the 37th IEEE Conference on Decision and Control, pp. 15–20, Tampa, Florida.
Nørgaard, M., Poulsen, N., and Ravn, O. (1998). The AGV-sim version 1.0: a simulator for autonomous guided vehicles. Technical report, Department of Automation, Technical University of Denmark.
Nørgaard, M., Poulsen, N., and Ravn, O. (2000a). New developments in state estimation for nonlinear systems. Automatica, 36(11), 1627–1638.
Nørgaard, M., Ravn, O., Poulsen, N. K., and Hansen, L. K. (2000b). Neural Networks for Modelling and Control of Dynamic Systems. Springer-Verlag, London.


Normey-Rico, J., Gomez-Ortega, J., and Camacho, E. (1999). Smith-predictor-based generalized predictive controller for mobile robot path-tracking. Control Engineering Practice, 7(6), 729–740.
Peng, Y., Vrancic, D., Hanus, R., and Weller, S. (1998). Anti-windup designs for multivariable controllers. Automatica, 34(12), 1559–1565.
Poulsen, N. K., Kouvaritakis, B., and Cannon, M. (1999). Constrained predictive
control and its application to a coupled-tanks apparatus. Technical Report 2204,
Department of Engineering Science, University of Oxford.
Praly, L. (1997). Generalized weighted homogeneity and state dependent time
scale for linear controllable systems. In Proceedings of the 36th IEEE Conference on Decision and Control, pp. 4342–4347, San Diego, California.
Richalet, J., Rault, A., Testud, J. L., and Papon, J. (1978). Model predictive heuristic control: applications to industrial processes. Automatica, 14(5), 413–428.
Rönnbäck, S. (1993). Linear Control of Systems with Actuator Constraints. PhD thesis, Luleå University of Technology, Department of Computer Science and Electrical Engineering, Division of Automatic Control.
Rossiter, J., Kouvaritakis, B., and Rice, M. (1998). A numerically robust state-space approach to stable-predictive control strategies. Automatica, 34(1), 65–73.
Sampei, M., Tamura, T., Itoh, T., and Nakamichi, M. (1991). Path tracking control of trailer-like mobile robot. In Proceedings of IEEE/RSJ International Workshop in Intelligent Robots and Systems, volume 3, pp. 193–198, Osaka, Japan.
Samson, C. (1992). Path following and time-varying feedback stabilization of
wheeled mobile robot. In Proceedings of the International Conference on Advanced Robotics and Computer Vision, volume 13, pp. 1.1–1.5, Singapore.
Samson, C. and Ait-Abderrahim, K. (1991). Feedback control of a nonholonomic
wheeled cart in Cartesian space. In Proceedings of IEEE International Conference on Robotics and Automation, pp. 1136–1141, Sacramento, California.
Soeterboek, R. (1992). Predictive Control, A Unified Approach. Prentice Hall,
New York.
Stein, G. (1989). Bode lecture: Respect the unstable. In Proceedings of the 28th
IEEE Conference on Decision and Control, Tampa, Florida.


Sussmann, H. J., Sontag, E. D., and Yang, Y. (1994). A general result on the
stabilization of linear systems using bounded controls. IEEE Transactions on
Automatic Control, 39(12), 2411–2425.
Tarn, T.-J., Bejczy, A. K., Guo, C., and Xi, N. (1994). Intelligent planning and control for telerobotic operations. In Proceedings of the IEEE/RSJ/GI International
Conference on Intelligent Robots and Systems, volume 1, pp. 389–396.
Tarn, T.-J., Xi, N., and Bejczy, A. K. (1996). Path-based approach to integrated
planning and control for robotic systems. Automatica, 32(12), 1675–1687.
Teel, A. R. (1992). Global stabilization and restricted tracking for multiple integrators with bounded controls. Systems & Control Letters, 18, 165–171.
Tsakiris, D. P., Samson, C., and Rives, P. (1996). Vision-based time-varying stabilization of a mobile manipulator. In Proceedings of the Fourth International
Conference on Control, Automation, Robotics and Vision, ICARCV, Singapore.
Tsang, T. T. C. and Clarke, D. W. (1988). Generalized predictive control with input
constraints. IEE Proceedings D (Control Theory and Applications), 135(6), 451–460.
von der Hardt, H.-J., Wolf, D., and Husson, R. (1996). The dead reckoning localization system of the wheeled mobile robot: Romane. In Proceedings of the
1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems, pp. 603–610, Washington D.C.
Walgama, K., Rönnbäck, S., and Sternby, J. (1992). Generalisation of conditioning technique for anti-windup compensators. IEE Proceedings D (Control Theory and Applications), 139, 109–118.
Walgama, K. and Sternby, J. (1990). Inherent observer property in a class of anti-windup compensators. International Journal of Control, 52(3), 705–724.
Wang, C. M. (1988). Location estimation and uncertainty analysis for mobile
robots. In Proceedings of the 1988 IEEE International Conference on Robotics
and Automation, volume 3, pp. 1230–1235, Philadelphia, Pennsylvania.

Index

A
Actuator saturation, see Saturation
Actuators, 85
Adaptation, 53
Anti-windup, 7, 12–29, 50, 63, 78
ad hoc, 16
classical, 17
comparison, 35
conditional integration, 17
conditioning technique, 24
generalized conditioning technique, 25
generic framework, 27
Hanus conditioning technique, 24, 25, 71
incremental algorithm, 17
integration stop, 16
observer-based, 20, 25, 65
similarities with bumpless transfer, 30
Automatic control, 13
AWBT, see Anti-windup, Bumpless transfer
B
Biproper, 21, 35
Bounded control, 50–53

Bumpless transfer, 12, 30–35, 50


bi-directional, 34
dynamic, 33
similarities with anti-windup, 30
C
Calibration, 99, 102–117
Camera, 86, 100, 102, 112, 114
Cautiousness, 26, 35
Chain of integrators, 56
Chained system, 140
Chattering, 17, 61
Chernobyl, 2
Collision avoidance, 99
Companion form, 60
Constraints, 12, 87, 119–121
effect on performance, 47
hard, 5, 37
magnitude, 78, 79
modelling, 45
nonholonomic, 100, 139
rate, 46, 78, 79
safety concerns, 47
soft, 5, 48
state, 47
velocity, 100
Control Lie Algebra, 140

Control strategies
comparison, 63
Controller
active, 30
latent, 30
substitution, 13
Criterion selection, 38
D
Dead-zone, 13
Discretization, 20
Double integrator, 52, 54, 55
Double tank, 65, 67, 78
Driftless system, 140
Dynamic rescaling, 51, 63
Dynamic transfer, 33
E
Encoder, 85, 100–105, 135
F
Framework
generic, 27
G
Gain scheduling, 55–63
extension, 60
scaling, 56
scaling factor, 59
Generic framework, 27
GPS, 86
Gripen JAS 39 aircraft, 2
Guide mark, 100, 112, 114
H
Homogeneous system, 51
Horizon
control, 39, 43
maximum costing, 39
minimum costing, 39

prediction, 39, 125, 126, 128, 132
Hysteresis, 13, 17
I
Initial conditions, 31
Integrator windup, 12
K
Kalman filter, 103, 105, 116
extended, 105
L
Laser range scanner, 86, 100
LQG control, 38
Lyapunov function, 52, 53
M
Manipulator, 83
Manual control, 13
Map building, 99
Mobile robot, 99–102
unicycle, 100, 139
Model-based predictive control, see Predictive control
Motion control, 86, 99
N
Navigation, 99, 102
Noise
Gaussian, 142
process, 106
rejection, 146
sensitivity, 142
Nonlinear method, 5063
O
Observer, 20, 27
Luenberger, 145
Odometry, 102, 104, 107, 112
Open-loop control, 140

Over-design, 1
Overshoot, 12, 48, 55, 56, 61, 67, 70, 80
P
Path following, 117–139
Path planning, 102
Path velocity controller, 91
Perception, 84
PI controller, 2, 31, 66
PID controller, 12, 16, 19, 69, 74
Pole placement, 58
Pole placement controller, 38, 74–77, 79
Posture
definition, 101
stabilization, 100, 139
Predictive control, 36–50, 65, 78
constrained, 7, 45, 63, 78
generalized, 37
suboptimal constrained, 49
unified, 37, 50
Predictor
j-step, 42
Process noise, 106
Pump, 68
Q
Quadratic programming, 38, 45, 49
R
Reasoning, 84, 86
Receding horizon, 36, 117–139
linear, 127
nonlinear, 124
Rescaling, 51–55
Reset windup, 12
Retro-fitted design, 11, 80
Rise time, 55

Robot
definition of, 83
Robot control system
elements of, 84–86
RST controller, 22, 24
S
Saturation, 12
actuator, 12, 14
bounds, 55
definition of, 4
effect on performance, 12
guidelines, 14
instability, 2
level, 26
Schur decomposed, 23
Schur matrix, 23
Sensor fusion, 86, 99
Settling time, 12, 55, 56
Single integrator, 66
Sliding mode control, 65
Sonar, 86
Stability
asymptotical, 24, 53, 140
general result, 5
Stepper motor, 17
T
Time-varying, 50
Trajectory
execution, 86
generation, 86
Trajectory control
general architecture, 92
non-timebased, 91
velocity control, 91
V
Velocity scaling, 123–124, 127
Visual servoing, 86
Y
YF-22 aircraft, 2
