
EE704 Control Systems II

Anith Krishnan
June 16, 2012

Get the latest document from www.scribd.com/anithkrishnan

Note
This document is intended to help engineering students who study control systems. Ideas from the most popular literature available, together with my own, are presented in the simplest possible language. Although a few worked-out problems are included, they may not be complete. The notes themselves are not yet complete, and I apologize for this at this early stage. All the notes, including the figures, were typeset by myself. Typographic errors may have crept in during preparation; I will be happy if the reader can point out such mistakes or errors. Any help from your side will be acknowledged.

ANITH KRISHNAN
anithkrishnan@gmail.com

Contents

1 Nonlinear Control Systems
  1.1 Introduction
  1.2 Forms of non-linearity
  1.3 Characteristics of Non-linear Systems
  1.4 Non-linear Phenomena
  1.5 System Models
      1.5.1 Generalized Model of a Dynamic System
      1.5.2 Unforced State Equation
      1.5.3 Autonomous System
  1.6 Equilibrium Point
      1.6.1 Jacobian method of Linearization
      1.6.2 Classification of Equilibrium Points
  1.7 Analysis of Nonlinear Systems
  1.8 Phase Plane Method Analysis
      1.8.1 Isocline Method
      1.8.2 Delta Method
  1.9 Describing Function Analysis
      1.9.1 Describing Function of Single Valued Non-linearities
      1.9.2 Describing function of Saturation non-linearity
      1.9.3 Describing function of ideal relay non-linearity
      1.9.4 Describing function of Dead-zone nonlinearity
      1.9.5 Describing function of piecewise linear non-linearity
      1.9.6 Double valued non-linearity
      1.9.7 Limitations of Describing Function Analysis
      1.9.8 Accuracy of Describing Function Analysis
  1.10 Popov's Criterion

2 Digital Control Systems
  2.1 Discrete-time control systems
  2.2 Sampling
      2.2.1 Types of Sampling
      2.2.2 Impulse Sampling
  2.3 Mapping between s-plane and z-plane
  2.4 Hold Circuits
  2.5 The Z Transform
  2.6 Stability Analysis
      2.6.1 Jury's Stability Test
  2.7 Exercise

3 Stochastic Control
  3.1 Introduction
  3.2 Stochastic Process
  3.3 Stochastic System
  3.4 Autocorrelation
      3.4.1 Properties
  3.5 Crosscorrelation
  3.6 Ergodicity
  3.7 Stationary Process
      3.7.1 Strict Sense Stationary Process
      3.7.2 Wide Sense Stationary (WSS) Process
  3.8 Power Spectral Density
      3.8.1 Properties of S_X(ω)
  3.9 Gaussian Process
  3.10 Markov Process
  3.11 Gauss-Markov Process
  3.12 Kalman Filter

A Fourier Series
  A.1 Fourier Series
  A.2 Euler's Formula
      A.2.1 Other forms of Euler's formula
  A.3 Conditions for a Fourier expansion
  A.4 Functions having points of discontinuities
  A.5 Even and Odd functions

B Useful Formulas and Equations
  B.1 Trigonometry
  B.2 Calculus

Chapter 1 Nonlinear Control Systems


1.1 Introduction

A non-linear system is defined as one which does not satisfy the superposition property. A major point about non-linear systems is that their response is amplitude dependent: if a particular form of response, or some measure of it, occurs for one input magnitude, it may not result for some other input magnitude. Perhaps the most interesting aspect of non-linear systems is that they exhibit forms of behaviour not possible in linear systems.

1.2 Forms of non-linearity

1. Inherent Nonlinearity
   (a) Friction
   (b) Backlash
   (c) Spring

2. Intentional Nonlinearity

Note: Because of the powerful tools we have for linear systems, the first step in analyzing a non-linear system is usually to linearize it, if possible, about some nominal operating point and analyze the resulting linear model. There are two basic limitations of linearization:

1. Since linearization is an approximation in the neighborhood of an operating point, it can only predict the local behaviour of the non-linear


system in the vicinity of that point. It cannot predict the non-local behaviour far from the operating point, and certainly not the global behaviour throughout the state space.

2. The dynamics of a non-linear system is much richer than the dynamics of a linear system. There are essentially non-linear phenomena that can take place only in the presence of non-linearity; hence, they cannot be described or predicted by linear models.

1.3 Characteristics of Non-linear Systems

1. The response of a non-linear system to a particular test signal is no guide to its behaviour with other inputs, since the principle of superposition does not hold for non-linear systems.

2. The non-linear system response may be highly sensitive to input amplitude. The stability study of non-linear systems requires information about the type and amplitude of the anticipated inputs, initial conditions, etc., in addition to the usual mathematical model.

3. Non-linear systems may exhibit limit cycles, which are self-sustained oscillations of fixed frequency and amplitude.

4. Non-linear systems may have jump resonance in the frequency response.

5. The output of a non-linear system will contain harmonics and sub-harmonics when excited by sinusoidal signals.

6. Non-linear systems may exhibit phenomena such as frequency entrainment and asynchronous quenching.

1.4 Non-linear Phenomena

1. Finite escape time: The state of an unstable linear system goes to infinity as time approaches infinity. However, for a non-linear system, the state can go to infinity in finite time.

2. Multiple isolated equilibria: A linear system can have only one isolated equilibrium point. Hence it can have only one steady-state operating point which attracts the state of the system, irrespective of the initial


state. A non-linear system can have more than one isolated equilibrium point. The state may converge to one of several steady-state operating points, depending on the initial state of the system.

3. Limit Cycle: When the response of a nonlinear system is oscillatory with constant amplitude and frequency, these oscillations are called limit cycles. A limit cycle is an isolated closed trajectory in the phase plane. All the neighbouring trajectories in the phase plane either spiral towards it (stable limit cycle) or spiral away from it (unstable limit cycle). Limit cycles can occur only in nonlinear systems. In a linear system exhibiting oscillatory response, closed trajectories are neighboured by other closed trajectories.

4. Sub-harmonic, harmonic or almost periodic oscillations: When a nonlinear system is excited by a sinusoidal signal, the response or output will have steady-state oscillations whose frequency is an integral submultiple of the forcing frequency. These oscillations are called subharmonic oscillations. The generation of subharmonic oscillations depends on the system parameters and initial conditions, as well as on the amplitude and frequency of the forcing function. When the input frequency is changed, these subharmonic oscillations may disappear or sometimes change their frequency of oscillation.

5. Chaos: A non-linear system can have a more complicated steady-state behaviour that is not equilibrium, periodic oscillation, or almost periodic oscillation. Such behaviour is usually referred to as chaos. Some of these chaotic motions exhibit randomness, despite the deterministic nature of the system.

6. Jump resonance or multiple modes of behaviour

7. Frequency-amplitude dependence

8. Frequency entrainment: If a periodic force of frequency ω is applied to a system capable of exhibiting a limit cycle of frequency ω₀, the phenomenon called beats^1 is observed. As the difference between the two frequencies decreases, the beat frequency also decreases. In a linear system, it is found that the beat frequency decreases indefinitely as ω approaches ω₀. In a self-excited nonlinear system, however, the frequency ω₀ of the limit cycle falls in synchronously with, or is entrained by, the

^1 The beat is the oscillation whose frequency is |ω − ω₀|.


forcing frequency ω, within a certain band of frequencies. This phenomenon is called frequency entrainment, and the band of frequencies in which entrainment occurs is called the zone of entrainment.

Figure 1.1: |ω − ω₀| versus ω curve showing the zone of frequency entrainment.

Figure 1.1 shows the relationship between |ω − ω₀| and ω. For a linear system, the relationship would follow the dotted line, and |ω − ω₀| would be zero for only one value of ω.

9. Asynchronous quenching: In a non-linear system which exhibits a limit cycle of frequency ω₀, it is possible to quench the limit cycle oscillation by forcing the system at a frequency ω₁, where ω₁ and ω₀ are not related to each other. This phenomenon is called asynchronous quenching or signal stabilisation.

None of these non-linear phenomena are exhibited by linear systems, and thus they cannot be explained by linear theory. In order to study nonlinear phenomena, we must solve the nonlinear differential equations analytically or graphically.
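The limit cycle behaviour described above can be observed numerically. The sketch below (not from these notes; the Van der Pol oscillator is the standard textbook example of a stable limit cycle, and the step size and run time are arbitrary choices) integrates the equation with a simple RK4 scheme and shows that trajectories starting both inside and outside the cycle settle onto the same oscillation.

```python
# Van der Pol oscillator: x'' - mu*(1 - x^2)*x' + x = 0, written as a
# first-order system; it has a stable limit cycle of amplitude ~ 2 for mu = 1.

def vdp(state, mu=1.0):
    x, v = state
    return (v, mu * (1.0 - x * x) * v - x)

def rk4_step(f, state, h):
    # One classical fourth-order Runge-Kutta step
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def settled_amplitude(x0, v0, h=0.01, t_total=100.0):
    """Integrate and return max |x| over the last 10 s (the settled cycle)."""
    state = (x0, v0)
    n = int(t_total / h)
    xs = []
    for i in range(n):
        state = rk4_step(vdp, state, h)
        if i > n - int(10.0 / h):
            xs.append(state[0])
    return max(abs(x) for x in xs)

# A trajectory starting inside the cycle spirals out; one starting outside
# spirals in: both reach the same amplitude.
inner = settled_amplitude(0.1, 0.0)
outer = settled_amplitude(3.0, 0.0)
```

The two amplitudes agree closely, illustrating that the limit cycle is an isolated closed trajectory attracting its neighbours, unlike the non-isolated closed orbits of a linear oscillator.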

1.5 System Models

1.5.1 Generalized Model of a Dynamic System

A dynamical system can be modeled by a finite number of coupled first-order equations

    \dot{x}_1 = f_1(t, x_1, x_2, \ldots, x_n, u_1, u_2, \ldots, u_p)
    \dot{x}_2 = f_2(t, x_1, x_2, \ldots, x_n, u_1, u_2, \ldots, u_p)
        \vdots
    \dot{x}_n = f_n(t, x_1, x_2, \ldots, x_n, u_1, u_2, \ldots, u_p)


where \dot{x}_i (i = 1 to n) denotes the derivative of x_i with respect to the time variable t, and u_1, u_2, \ldots, u_p are the specified input variables. The variables x_1, x_2, \ldots, x_n are called the state variables. They represent the memory that the dynamical system has of its past. The above equations can be written in vector notation as

    X = [x_1, x_2, \ldots, x_n]^T
    U = [u_1, u_2, \ldots, u_p]^T
    F(t, X, U) = [f_1(t, X, U), f_2(t, X, U), \ldots, f_n(t, X, U)]^T

    \dot{X} = F(t, X, U)                                                  (1.1)

Sometimes eq. (1.1) is associated with another equation, called the output equation, given by

    Y = H(t, X, U)                                                        (1.2)

Equations (1.1) and (1.2) together are called the state-space model.

1.5.2 Unforced State Equation

An unforced state equation is a state equation without the explicit presence of the input variables U:

    \dot{X} = F(t, X)                                                     (1.3)

Working with an unforced state equation does not necessarily mean that the input to the system is zero. It could be that the input has been specified as a given function of time, U = G(t), a given function of the state, U = G(X), or a function of both, U = G(t, X). U can also be a constant. Substituting U = G(\cdot) in eq. (1.1) eliminates U and yields an unforced state equation.

1.5.3 Autonomous System

A special case of eq. (1.3) arises when the function F does not explicitly depend on time t, that is,

    \dot{X} = F(X)                                                        (1.4)

in which case the system is said to be autonomous. The behaviour of an autonomous system is invariant to shifts in the time origin, since changing the time variable from t to \tau = t - a does not change the R.H.S. of the state equation (1.4). If the system is not autonomous, it is called non-autonomous.


1.6 Equilibrium Point

A point X = X^* in state space is said to be an equilibrium point of an autonomous system, given by eq. (1.4), if it has the property that whenever the state of the system starts at X^*, it will remain at X^* for all future time. Alternatively, a point in state space at which the derivatives of all the state variables are zero is called an equilibrium point or singular point. The equilibrium points can thus be found from

    \dot{X} = F(X) = 0

Unlike linear systems, non-linear systems can have either one equilibrium point or multiple equilibrium points. If there are no other equilibrium points in the neighbourhood of a given equilibrium point, then this equilibrium point is called an isolated equilibrium point. Although most practical systems have isolated equilibrium points, there are some cases where this is not true. For example, the system defined by

    \ddot{x} + \dot{x} = 0

has all the points on the x axis as equilibrium points; the entire x axis is a continuum of equilibrium points of the system. Equilibrium points are also called singular points or critical points.

Example 1.1: Find the equilibrium points of the following system:

    \dot{x}_1 = x_1^3 - x_1
    \dot{x}_2 = -2 x_2

Solution: Setting \dot{x}_2 = -2 x_2 = 0 gives x_2 = 0. Setting

    \dot{x}_1 = x_1^3 - x_1 = x_1 (x_1^2 - 1) = 0

gives x_1 = 0 and x_1 = \pm 1. The equilibrium points are (-1, 0), (0, 0) and (1, 0).

Example 1.2: Find the equilibrium points of the following system:

    \dot{x}_1 = x_2
    \dot{x}_2 = -9 \sin x_1 - \frac{x_2}{5}

Solution: \dot{x}_1 = x_2 = 0 gives x_2 = 0. Then

    \dot{x}_2 = -9 \sin x_1 - \frac{x_2}{5} = -9 \sin x_1 = 0

gives x_1 = n\pi. The equilibrium points are (n\pi, 0), n = 0, \pm 1, \pm 2, \ldots
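As a quick numerical sanity check (a sketch, not part of the notes; it assumes the reconstructed right-hand sides of Examples 1.1 and 1.2), every equilibrium point must make both state derivatives vanish:

```python
import math

def f_ex11(x1, x2):
    # Example 1.1:  x1' = x1^3 - x1,  x2' = -2*x2
    return (x1**3 - x1, -2.0 * x2)

def f_ex12(x1, x2):
    # Example 1.2:  x1' = x2,  x2' = -9*sin(x1) - x2/5
    return (x2, -9.0 * math.sin(x1) - x2 / 5.0)

eq11 = [(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)]        # from Example 1.1
eq12 = [(n * math.pi, 0.0) for n in range(-2, 3)]   # (n*pi, 0) from Example 1.2

# Both derivatives should be (numerically) zero at every listed point
ok11 = all(max(abs(d) for d in f_ex11(*p)) < 1e-12 for p in eq11)
ok12 = all(max(abs(d) for d in f_ex12(*p)) < 1e-12 for p in eq12)
```

This style of check is a cheap way to catch sign or algebra slips when solving F(X) = 0 by hand.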

1.6.1 Jacobian method of Linearization

The following is a simple method for linearizing a non-linear plant. Let a second-order non-linear autonomous system be defined as

    \dot{x}_1 = f_1(x_1, x_2)
    \dot{x}_2 = f_2(x_1, x_2)                                             (1.5)

In order to find the eigenvalues of the system, we need the system matrix A of the linearized system. The matrix A can be calculated as

    A = \begin{bmatrix}
          \partial f_1 / \partial x_1  &  \partial f_1 / \partial x_2 \\
          \partial f_2 / \partial x_1  &  \partial f_2 / \partial x_2
        \end{bmatrix}_{x_1 = x_1^*, \, x_2 = x_2^*}                       (1.6)

where (x_1^*, x_2^*) is an equilibrium point. This method gives the best linear approximation to a differentiable function near the equilibrium point. The matrix A is also called the Jacobian matrix.
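Eq. (1.6) can also be evaluated numerically when the partial derivatives are tedious by hand. The sketch below (assuming NumPy is available; the example system is the one from Example 1.1) approximates the Jacobian by central differences and reads off the eigenvalues of the linearization:

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f: R^n -> R^n evaluated at point x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    J = np.empty((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2.0 * eps)
    return J

# Example 1.1:  x1' = x1**3 - x1,  x2' = -2*x2
f = lambda x: np.array([x[0]**3 - x[0], -2.0 * x[1]])

# Analytically the Jacobian is diag(3*x1**2 - 1, -2), so:
eig_origin = np.linalg.eigvals(jacobian(f, [0.0, 0.0]))  # exact: {-1, -2}
eig_right  = np.linalg.eigvals(jacobian(f, [1.0, 0.0]))  # exact: { 2, -2}
```

At (0, 0) both eigenvalues are negative (a stable node); at (1, 0) they have opposite signs (a saddle), anticipating the classification of the next subsection.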

1.6.2 Classification of Equilibrium Points

Equilibrium points are classified according to the eigenvalues of the linearized system. The non-linear system is first linearized at each singular point, and the eigenvalues of the corresponding linear system are calculated and then used to classify the equilibrium point. The classification explained here is for second-order systems, since we usually restrict our analysis to a two-dimensional state space when we go for phase plane analysis (see section 1.7). There are only two eigenvalues for a second-order system. The classification of the equilibrium points helps us determine the behaviour of the system near the singular point.


If the eigenvalues of the Jacobian matrix evaluated at the equilibrium point are not equal to zero and are not purely imaginary, then the trajectories of the system around the equilibrium point behave in the same way as the trajectories of the associated linear system (Poincare-Lyapunov theorem).

Example 1.3: For each of the following systems, find all the equilibrium points and determine the type of each isolated equilibrium point.

    (1)  \dot{x}_1 = x_2
         \dot{x}_2 = -x_1 + \frac{x_1^3}{6} - x_2

    (2)  \dot{x}_1 = -x_1 + x_2
         \dot{x}_2 = 0.1 x_1 - 2 x_2 - x_1^2 - 0.1 x_1^3

    (3)  \dot{x}_1 =            \dot{x}_2 =

Solution (1): Equating the right-hand sides to zero,

    x_2 = 0
    -x_1 + \frac{x_1^3}{6} - x_2 = 0

Substituting x_2 = 0 in the second equation,

    -x_1 + \frac{x_1^3}{6} = 0

This is a third-order equation and will have three solutions. One immediate solution is x_1 = 0. The equation then simplifies to

    \frac{x_1^2}{6} - 1 = 0    \quad \Rightarrow \quad    x_1 = \pm\sqrt{6}

The equilibrium points are (-\sqrt{6}, 0), (0, 0) and (\sqrt{6}, 0).

In order to find the type of each singular point, the system has to be linearized at the equilibrium point and the corresponding eigenvalues found. The linearized system matrix is given by eq. (1.6). Thus

    A = \begin{bmatrix}
          \frac{\partial}{\partial x_1}(x_2) & \frac{\partial}{\partial x_2}(x_2) \\
          \frac{\partial}{\partial x_1}\!\left(-x_1 + \frac{x_1^3}{6} - x_2\right) & \frac{\partial}{\partial x_2}\!\left(-x_1 + \frac{x_1^3}{6} - x_2\right)
        \end{bmatrix}
      = \begin{bmatrix} 0 & 1 \\ -1 + \frac{x_1^2}{2} & -1 \end{bmatrix}

At (\pm\sqrt{6}, 0), A = \begin{bmatrix} 0 & 1 \\ 2 & -1 \end{bmatrix}. The characteristic equation is

    \lambda^2 + \lambda - 2 = 0

The eigenvalues are \lambda = -2, 1. Since the eigenvalues are real with opposite signs, the equilibria (\pm\sqrt{6}, 0) are saddle points.

At (0, 0), A = \begin{bmatrix} 0 & 1 \\ -1 & -1 \end{bmatrix}. The characteristic equation is \lambda^2 + \lambda + 1 = 0, with eigenvalues \lambda = (-1 \pm j\sqrt{3})/2. Since the eigenvalues are complex conjugates with negative real parts, the equilibrium (0, 0) is a stable focus.
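The eigenvalue-based classification can be wrapped in a small helper. The sketch below (a simplified classifier covering only the hyperbolic cases, assuming NumPy) applies it to the linearizations of Example 1.3(1):

```python
import numpy as np

def classify(A, tol=1e-9):
    """Classify a 2x2 linearization by its eigenvalues (hyperbolic cases)."""
    lam = np.linalg.eigvals(A)
    if abs(lam.imag).max() > tol:            # complex-conjugate pair
        re = lam.real[0]
        if re < -tol:
            return "stable focus"
        return "unstable focus" if re > tol else "centre"
    re = np.sort(lam.real)
    if re[0] < 0 < re[1]:
        return "saddle"
    return "stable node" if re[1] < 0 else "unstable node"

# Jacobian of x1' = x2, x2' = -x1 + x1**3/6 - x2, evaluated at (x1, 0)
A_at = lambda x1: np.array([[0.0, 1.0], [-1.0 + x1**2 / 2.0, -1.0]])

kind_saddle = classify(A_at(np.sqrt(6.0)))   # at (+-sqrt(6), 0)
kind_origin = classify(A_at(0.0))            # at (0, 0)
```

This reproduces the hand calculation: real eigenvalues of opposite sign at (±√6, 0) give a saddle, and the complex pair with negative real part at the origin gives a stable focus.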

1.7 Analysis of Nonlinear Systems

Unlike linear systems, there are no completely general methods for the analysis of nonlinear systems. We shall discuss three methods used for the stability analysis of non-linear systems: phase plane analysis, the describing function method, and Lyapunov's stability criterion.

1.8 Phase Plane Method Analysis

Let us consider a second-order autonomous non-linear system defined as

    \dot{x}_1 = f_1(x_1, x_2)
    \dot{x}_2 = f_2(x_1, x_2)

The phase plane of this system is two dimensional, with the state x_1 on one axis and the state x_2 on the other, as shown in figure 1.2.


Figure 1.2: Phase Plane

Here x_1 is taken on the x-axis and x_2 on the y-axis. Whatever the system, it will have an initial condition at time t = 0, defined as (x_1(0), x_2(0)). As time runs from t = 0 to t = \infty, the system moves from (x_1(0), x_2(0)) to (x_1(\infty), x_2(\infty)). This trace of the states on the phase plane as time t increases is called the phase trajectory for a given initial condition. The variable t does not occur explicitly on the phase trajectory, but an arrow indicates the direction of increasing time. For each set of initial conditions there is a unique trajectory in the phase plane. All these trajectories, for various initial conditions, together are called the phase portrait of the system. The slope of any curve in an x-y plane is m = dy/dx. Similarly, the slope of any curve in the phase plane is given by

    m = \frac{dx_2}{dx_1} = \frac{\dot{x}_2}{\dot{x}_1} = \frac{f_2(x_1, x_2)}{f_1(x_1, x_2)}          (1.7)

Two methods used for constructing the phase trajectory are the isocline method and the delta method. These are also called graphical methods. Trajectories can also be constructed using analytical methods, or even experimentally. Graphical methods are useful when it is very difficult, or practically impossible, to solve the given differential equation analytically, and they can be applied to both linear and non-linear systems. The accuracy of graphical methods depends a lot on how the construction is carried out; the accuracy increases when the plot is drawn larger. A small error (practically inevitable) accumulates at each step and may cause the solution to drift far from the actual value at a later point. It must also be understood that all parameters should be given numerical values before proceeding with the construction of the trajectory. Graphical methods are usually considered qualitative methods, where only the overall behaviour of the system is important. This is enough for judging the stability of the system.


1.8.1 Isocline Method

Let us start with the construction of the phase trajectory of a linear system defined as

    \frac{d^2 x}{dt^2} + 3\frac{dx}{dt} + 4x = 0                          (1.8)

Let us first determine the equilibrium point and its type. This differential equation can be converted to state-space form by taking x_1 = x and x_2 = \dot{x}_1. The system becomes

    \dot{x}_1 = x_2
    \dot{x}_2 = -4 x_1 - 3 x_2

By inspection of these two equations, the equilibrium point is (0, 0). The matrix A is

    A = \begin{bmatrix} 0 & 1 \\ -4 & -3 \end{bmatrix}

The next step is to find the eigenvalues of the system:

    \lambda I - A = \begin{bmatrix} \lambda & -1 \\ 4 & \lambda + 3 \end{bmatrix}

therefore

    |\lambda I - A| = \lambda^2 + 3\lambda + 4

The eigenvalues are \lambda = -1.5 \pm j\,1.323. Since the eigenvalues are complex conjugates with negative real parts, the equilibrium point is a stable focus. The slope of the trajectory at any point in the phase plane is given by

    m = \frac{\dot{x}_2}{\dot{x}_1} = \frac{-4 x_1 - 3 x_2}{x_2}          (1.9)

Rearranging eq. (1.9),

    x_2 = -\frac{4}{m + 3}\, x_1                                          (1.10)

Selecting different values of m and substituting in eq. (1.10) gives the equations of several lines. For example, let m = 1. Substituted in eq. (1.10), this gives

    x_2 = -x_1


This line is then drawn on the phase plane (see fig. 1.3). Whenever the phase trajectory crosses this line, the slope of the trajectory will be m = 1. The dashed line shows the direction in which the trajectory passes through any point on the line.

Figure 1.3: An isocline in the phase plane.

Thus each value of m gives the equation of a line, and whenever the trajectory passes through any point on that line, it has the slope m. Such lines are therefore called isoclines, since the trajectory follows the same slope wherever it crosses a given line. A number of lines (the more the better) are plotted for various values of m.

    Slope, m      Isocline equation
    1             x_2 = -x_1
    2             x_2 = -(4/5) x_1
    \infty        x_2 = 0

Let us assume an initial condition (x_1 = 1, x_2 = 0).

Example 1.4: A linear second-order servo is described by the equation

    \ddot{e} + 2 \zeta \omega_n \dot{e} + \omega_n^2 e = 0

where \zeta = 0.15, \omega_n = 1 rad/sec, e(0) = 1.5 and \dot{e}(0) = 0. Determine the singular point and construct the phase trajectory using the method of isoclines.
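The isocline construction for eq. (1.8) can be cross-checked numerically. The sketch below (not part of the notes; plain Euler integration with an arbitrary step size is used for brevity) verifies that the trajectory slope equals 1 everywhere on the m = 1 isocline x_2 = -x_1, and that the trajectory from (1, 0) spirals into the stable focus at the origin:

```python
def f(x1, x2):
    # eq. (1.8) in state-space form: x1' = x2, x2' = -4*x1 - 3*x2
    return (x2, -4.0 * x1 - 3.0 * x2)

# On the isocline x2 = -x1 (the m = 1 line from eq. (1.10)) the
# trajectory slope dx2/dx1 must equal 1 at every point off the origin.
slopes = []
for x1 in (0.5, 1.0, 2.0):
    dx1, dx2 = f(x1, -x1)
    slopes.append(dx2 / dx1)

# Integrate for 20 s of simulated time: the state should decay to the origin.
x1, x2, h = 1.0, 0.0, 1e-3
for _ in range(20000):
    dx1, dx2 = f(x1, x2)
    x1, x2 = x1 + h * dx1, x2 + h * dx2
final_radius = (x1 * x1 + x2 * x2) ** 0.5
```

The slope check confirms the isocline equation, and the decay of the trajectory radius confirms the stable-focus classification obtained from the eigenvalues.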

1.8.2 Delta Method

1.9 Describing Function Analysis

Suppose that the input to a non-linear element is sinusoidal. The output of the non-linear element is, in general, not sinusoidal. Suppose that the output


is periodic with the same period as the input. (The output contains higher harmonics in addition to the fundamental harmonic component.) In describing function analysis, it is assumed that only the fundamental harmonic component of the output is significant (the Kochenburger criterion). This assumption is often valid, since the higher harmonics in the output of a non-linear element are usually of smaller amplitude than the fundamental harmonic component. In addition, most control systems are low-pass filters, so the higher harmonics are strongly attenuated compared with the fundamental. It is no wonder that this linearization may not always work: it usually gives sufficiently accurate information about system stability and limit cycles, but it does not give trustworthy information about the transient response.

The describing function, or sinusoidal describing function, of a non-linear element is defined as the complex ratio of the fundamental harmonic component of the output to the input. That is,

    N = \frac{Y_1}{X} \angle \phi_1                                       (1.11)

where

    N      = describing function
    X      = amplitude of the input sinusoid
    Y_1    = amplitude of the fundamental harmonic component of the output
    \phi_1 = phase shift of the fundamental harmonic component of the output

If no energy-storage element is included in the non-linear element, then N is a function only of the amplitude of the input to the element. On the other hand, if energy-storage elements are included, then N is a function of both the amplitude and the frequency of the input. In calculating the describing function for a given non-linear element, we need to find the fundamental harmonic component of the output. For the sinusoidal input x(t) = X \sin \omega t to the non-linear element, the output y(t) may be expressed as a Fourier series as follows:

    y(t) = \frac{A_0}{2} + \sum_{n=1}^{\infty} [A_n \cos n\omega t + B_n \sin n\omega t]          (1.12)
         = \frac{A_0}{2} + \sum_{n=1}^{\infty} Y_n \sin(n\omega t + \phi_n)                        (1.13)

where

    A_0 = \frac{1}{\pi} \int_0^{2\pi} y(t) \, d(\omega t)                             (1.14)
    A_n = \frac{1}{\pi} \int_0^{2\pi} y(t) \cos(n\omega t) \, d(\omega t)             (1.15)
    B_n = \frac{1}{\pi} \int_0^{2\pi} y(t) \sin(n\omega t) \, d(\omega t)             (1.16)
    Y_n = \sqrt{A_n^2 + B_n^2}                                                        (1.17)
    \phi_n = \tan^{-1} \frac{A_n}{B_n}                                                (1.18)

If the non-linearity is symmetric (so that the output has zero mean), then A_0 = 0. The fundamental harmonic component (n = 1) of the output is

    y_1(t) = A_1 \cos \omega t + B_1 \sin \omega t                                    (1.19)
           = Y_1 \sin(\omega t + \phi_1)                                              (1.20)

The describing function is then given by

    N = \frac{Y_1}{X} \angle \phi_1                                                   (1.21)
      = \frac{1}{X} (B_1 + j A_1)                                                     (1.22)
      = \frac{\sqrt{A_1^2 + B_1^2}}{X} \angle \tan^{-1} \frac{A_1}{B_1}               (1.23)

If the non-linearity is replaced by its describing function, all the linear-theory frequency-domain techniques can be used for the analysis of the system. Describing functions are used only for stability analysis; they are not directly applicable to the optimization of a system design. The describing function is a frequency-domain approach, and no general correlation is possible between time- and frequency-domain responses.
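The definition in eqs. (1.14)-(1.23) can be checked numerically: drive a static nonlinearity with X sin ωt, project the output onto the fundamental, and form N = (B_1 + jA_1)/X. The sketch below (not part of the notes; the relay level M and amplitude X are arbitrary choices) does this for the ideal relay, for which the closed form is derived later as eq. (1.42):

```python
import math

def describing_function(nl, X, n=200000):
    """Estimate N(X) = (B1 + j*A1)/X for a static nonlinearity nl by
    midpoint-rule integration of the fundamental Fourier component."""
    A1 = B1 = 0.0
    d = 2.0 * math.pi / n
    for k in range(n):
        wt = (k + 0.5) * d
        y = nl(X * math.sin(wt))
        A1 += y * math.cos(wt) * d / math.pi
        B1 += y * math.sin(wt) * d / math.pi
    return complex(B1, A1) / X

M, X = 2.0, 1.5
relay = lambda x: M if x >= 0.0 else -M
N = describing_function(relay, X)
# Closed form for the ideal relay: N = 4M/(pi*X), with zero phase shift.
```

The same routine works for any single-valued nonlinearity in the following subsections, which makes it a handy check on the hand derivations.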

1.9.1 Describing Function of Single Valued Non-linearities

1. Piecewise linear non-linearity

   [Figure: piecewise linear characteristic, initial slope K_1, final slope K_2, breakpoint at x = M]

    K_N = K_2 + \frac{2(K_1 - K_2)}{\pi} \left[ \sin^{-1}\frac{M}{X} + \frac{M}{X} \sqrt{1 - \frac{M^2}{X^2}} \right] ; \quad X \geq M          (1.24)

2. Pre-load non-linearity

    K_N = K + \frac{4M}{\pi X}                                            (1.25)

3. Pre-load with dead-zone non-linearity

   [Figure: pre-load characteristic with dead zone between -D and D]

    K_N = K - \frac{2K}{\pi} \sin^{-1}\frac{D}{X} + \frac{(4M - 2KD)}{\pi X} \sqrt{1 - \frac{D^2}{X^2}}          (1.26)

1.9.2 Describing function of Saturation non-linearity

The input x is sinusoidal and is defined as

    x = X \sin \omega t                                                   (1.27)

Since the output waveform is symmetric over half cycles, the equation of

y for the period \omega t = 0 to \pi is given by

    y = K x   ;   0 \leq \omega t \leq \beta
      = K M   ;   \beta \leq \omega t \leq \pi - \beta                    (1.28)
      = K x   ;   \pi - \beta \leq \omega t \leq \pi

where \beta is the angle at which the input reaches the saturation level M. Since the function y is odd, A_1 = 0, and

    B_1 = \frac{2}{\pi} \int_0^{\pi} y \sin \omega t \, d(\omega t)
        = \frac{2K}{\pi} \left[ \int_0^{\beta} x \sin \omega t \, d(\omega t) + \int_{\beta}^{\pi-\beta} M \sin \omega t \, d(\omega t) + \int_{\pi-\beta}^{\pi} x \sin \omega t \, d(\omega t) \right]

Substituting x = X \sin \omega t,

    B_1 = \frac{2K}{\pi} \left[ X \int_0^{\beta} \sin^2 \omega t \, d(\omega t) + M \int_{\beta}^{\pi-\beta} \sin \omega t \, d(\omega t) + X \int_{\pi-\beta}^{\pi} \sin^2 \omega t \, d(\omega t) \right]

and integrating the three terms separately, the first term is

    \int_0^{\beta} \sin^2 \omega t \, d(\omega t)
        = \int_0^{\beta} \frac{1 - \cos 2\omega t}{2} \, d(\omega t)
        = \frac{\beta}{2} - \frac{\sin 2\beta}{4}                          (1.29)

The second term,

    \int_{\beta}^{\pi-\beta} M \sin \omega t \, d(\omega t)
        = M \left[ -\cos \omega t \right]_{\beta}^{\pi-\beta}
        = M [\cos \beta - \cos(\pi - \beta)]
        = 2 M \cos \beta                                                   (1.30)

The third term,

    \int_{\pi-\beta}^{\pi} \sin^2 \omega t \, d(\omega t)
        = \left[ \frac{\omega t}{2} - \frac{\sin 2\omega t}{4} \right]_{\pi-\beta}^{\pi}
        = \frac{\beta}{2} - \frac{\sin 2\beta}{4}                          (1.31)

Substituting equations (1.29), (1.30) and (1.31) in the expression for B_1,

    B_1 = \frac{2K}{\pi} \left[ X \left( \beta - \frac{\sin 2\beta}{2} \right) + 2 M \cos \beta \right]

When the input reaches M, the magnitude is given by M = X \sin \beta, so 2M\cos\beta = X \sin 2\beta. Therefore

    B_1 = \frac{2K}{\pi} \left[ X \beta - \frac{X \sin 2\beta}{2} + X \sin 2\beta \right]
        = \frac{2KX}{\pi} \left[ \beta + \frac{\sin 2\beta}{2} \right]
        = \frac{2KX}{\pi} [\beta + \sin \beta \cos \beta]                  (1.32)

The fundamental harmonic component is given by

    Y_1 = \sqrt{A_1^2 + B_1^2} = B_1 = \frac{2KX}{\pi} [\beta + \sin \beta \cos \beta]          (1.33)

    \phi_1 = \tan^{-1} \frac{A_1}{B_1} = 0

The describing function of the saturation nonlinearity is

    N = \frac{Y_1}{X} \angle \phi_1 = \frac{2K}{\pi} [\beta + \sin \beta \cos \beta] \angle 0^\circ          (1.34)

Since the non-linearity contains two regions (considering only the positive side), the behaviour differs according to the amplitude of the input signal. There are two cases: X \leq M and X > M.

Case I: When X \leq M. When the input amplitude X is less than M, the input-output relation is linear, i.e. y = K x. This means that the angle \beta = \pi/2. Thus the describing function is

    N = K \angle 0^\circ                                                  (1.35)

Case II: When X > M.

    N = \frac{Y_1}{X} \angle \phi_1 = \frac{2K}{\pi} [\beta + \sin \beta \cos \beta] \angle 0^\circ          (1.36)

Since the input-output relationship of the non-linearity does not contain the explicit value of \beta, the describing function can be rewritten by substituting \beta = \sin^{-1}(M/X):

    N = \frac{Y_1}{X} \angle \phi_1 = \frac{2K}{\pi} \left[ \sin^{-1}\frac{M}{X} + \frac{M}{X} \sqrt{1 - \frac{M^2}{X^2}} \right] \angle 0^\circ          (1.37)
1.9.3 Describing function of ideal relay non-linearity

The input x is sinusoidal and is defined as

    x = X \sin \omega t                                                   (1.38)

The output waveform is symmetric over half cycles; the equation of y is given by

    y = M    ;   0 \leq \omega t \leq \pi
      = -M   ;   \pi \leq \omega t \leq 2\pi                              (1.39)

Since the function y is odd, A_1 = 0, and, considering the symmetry of the output,

    B_1 = \frac{2}{\pi} \int_0^{\pi} M \sin \omega t \, d(\omega t)
        = \frac{2M}{\pi} \left[ -\cos \omega t \right]_0^{\pi}
        = \frac{4M}{\pi}                                                  (1.40)

The fundamental harmonic component is given by

    Y_1 = \sqrt{A_1^2 + B_1^2} = B_1 = \frac{4M}{\pi}                     (1.41)

    \phi_1 = \tan^{-1} \frac{A_1}{B_1} = 0

The describing function of the ideal relay nonlinearity is

    N = \frac{Y_1}{X} \angle \phi_1 = \frac{4M}{\pi X} \angle 0^\circ     (1.42)

1.9.4 Describing function of Dead-zone nonlinearity

The input x is sinusoidal and is defined as

    x = X \sin \omega t                                                   (1.43)

The output waveform is symmetric over half cycles; the equation of y is given by

    y = 0          ;   0 \leq \omega t \leq \alpha
      = K(x - D)   ;   \alpha \leq \omega t \leq \pi - \alpha             (1.44)
      = 0          ;   \pi - \alpha \leq \omega t \leq \pi

where \alpha is the angle at which the input crosses the dead-zone level D.

Since the function y is odd, A_1 = 0, and, considering the symmetry of the output,

    B_1 = \frac{2}{\pi} \int_0^{\pi} y \sin \omega t \, d(\omega t)
        = \frac{2K}{\pi} \int_{\alpha}^{\pi-\alpha} (x - D) \sin \omega t \, d(\omega t)

Substituting x = X \sin \omega t,

    B_1 = \frac{2K}{\pi} \left[ X \int_{\alpha}^{\pi-\alpha} \sin^2 \omega t \, d(\omega t) - D \int_{\alpha}^{\pi-\alpha} \sin \omega t \, d(\omega t) \right]
        = \frac{2K}{\pi} \left[ \frac{X}{2} (\pi - 2\alpha + \sin 2\alpha) - 2 D \cos \alpha \right]

From the response it is clear that D = X \sin \alpha, so 2D\cos\alpha = X \sin 2\alpha. Therefore

    B_1 = \frac{2K}{\pi} \left[ \frac{X}{2} (\pi - 2\alpha + \sin 2\alpha) - X \sin 2\alpha \right]
        = \frac{KX}{\pi} (\pi - 2\alpha - \sin 2\alpha)
        = K X \left[ 1 - \frac{2}{\pi} (\alpha + \sin \alpha \cos \alpha) \right]                  (1.45)

The fundamental harmonic component is given by

    Y_1 = \sqrt{A_1^2 + B_1^2} = B_1 = K X \left[ 1 - \frac{2}{\pi} (\alpha + \sin \alpha \cos \alpha) \right]          (1.46)

    \phi_1 = \tan^{-1} \frac{A_1}{B_1} = 0

The describing function of the dead-zone nonlinearity is

    N = \frac{Y_1}{X} \angle \phi_1 = K \left[ 1 - \frac{2}{\pi} (\alpha + \sin \alpha \cos \alpha) \right] \angle 0^\circ          (1.47)

Since the non-linearity contains two regions (considering only the positive side), the behaviour differs according to the amplitude of the input signal. There are two cases: X < D and X > D.

Case I: When X < D. When the input amplitude X is less than D, the output is zero. Thus the describing function is

    N = 0 \angle 0^\circ                                                  (1.48)

Case II: When X > D.

    N = \frac{Y_1}{X} \angle \phi_1 = K \left[ 1 - \frac{2}{\pi} (\alpha + \sin \alpha \cos \alpha) \right] \angle 0^\circ          (1.49)

Since the input-output relationship of the non-linearity does not contain the explicit value of \alpha, the describing function can be rewritten by substituting \alpha = \sin^{-1}(D/X):

    N = K \left[ 1 - \frac{2}{\pi} \left( \sin^{-1}\frac{D}{X} + \frac{D}{X} \sqrt{1 - \frac{D^2}{X^2}} \right) \right] \angle 0^\circ          (1.50)

1.9.5 Describing function of piecewise linear non-linearity

The input x is sinusoidal and is defined as

    x = X \sin \omega t                                                   (1.51)

Since the output waveform is symmetric over half cycles, the equation of y for the period \omega t = 0 to \pi is given by

    y = K_1 x                  ;   0 \leq \omega t \leq \beta
      = K_1 M + K_2 (x - M)    ;   \beta \leq \omega t \leq \pi - \beta          (1.52)
      = K_1 x                  ;   \pi - \beta \leq \omega t \leq \pi

Since the function y is odd, A_1 = 0, and

    B_1 = \frac{2}{\pi} \int_0^{\pi} y \sin \omega t \, d(\omega t)
        = \frac{2}{\pi} \left[ \int_0^{\beta} K_1 X \sin^2 \omega t \, d(\omega t)
          + \int_{\beta}^{\pi-\beta} [K_1 M + K_2 (X \sin \omega t - M)] \sin \omega t \, d(\omega t)
          + \int_{\pi-\beta}^{\pi} K_1 X \sin^2 \omega t \, d(\omega t) \right]

Integrating the terms separately,

    \int_0^{\beta} K_1 X \sin^2 \omega t \, d(\omega t) = \frac{K_1 X}{2} [\beta - \sin \beta \cos \beta]          (1.53)

    \int_{\beta}^{\pi-\beta} [K_1 M + K_2 (X \sin \omega t - M)] \sin \omega t \, d(\omega t)
        = 2 M \cos \beta \, (K_1 - K_2) + \frac{K_2 X}{2} [\pi - 2\beta + \sin 2\beta]          (1.54)

    \int_{\pi-\beta}^{\pi} K_1 X \sin^2 \omega t \, d(\omega t) = \frac{K_1 X}{2} [\beta - \sin \beta \cos \beta]          (1.55)

Combining equations (1.53), (1.54) and (1.55) and substituting M = X \sin \beta,

    B_1 = \frac{2}{\pi} \left[ K_1 X (\beta - \sin \beta \cos \beta) + 2 X \sin \beta \cos \beta \, (K_1 - K_2) + \frac{K_2 X}{2} (\pi - 2\beta + \sin 2\beta) \right]
        = X \left[ K_2 + \frac{2}{\pi} (K_1 - K_2)(\beta + \sin \beta \cos \beta) \right]          (1.56)

The fundamental harmonic component is given by

    Y_1 = \sqrt{A_1^2 + B_1^2} = B_1 = X \left[ K_2 + \frac{2}{\pi} (K_1 - K_2)(\beta + \sin \beta \cos \beta) \right]          (1.57)

    \phi_1 = \tan^{-1} \frac{A_1}{B_1} = 0

The describing function of the piecewise linear nonlinearity is

    N = \frac{Y_1}{X} \angle \phi_1 = \left[ K_2 + \frac{2}{\pi} (K_1 - K_2)(\beta + \sin \beta \cos \beta) \right] \angle 0^\circ          (1.58)

Since the non-linearity contains two parts (taking only the positive side), the behaviour differs according to the amplitude of the input signal. There are two cases: X < M and X > M.

Case I: When X < M. When the input amplitude X is less than M, K_2 = K_1 and the second term in the magnitude part of eq. (1.58) vanishes. Thus the describing function is

    N = K_1 \angle 0^\circ    (1.59)

Case II: When X > M,

    N = \frac{Y_1}{X}\angle 0^\circ = K_2 + \frac{2}{\pi}(K_1 - K_2)(\beta + \sin\beta\cos\beta)\;\angle 0^\circ    (1.60)

Since the input-output relationship of the non-linearity does not contain the explicit value of \beta, the describing function can be rewritten by substituting \beta = \sin^{-1}(M/X):

    N = K_2 + \frac{2}{\pi}(K_1 - K_2)\left[\sin^{-1}\frac{M}{X} + \frac{M}{X}\sqrt{1 - \left(\frac{M}{X}\right)^2}\right]\angle 0^\circ    (1.61)
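Equation (1.61) can likewise be verified against a direct Fourier integral. In this sketch (names are illustrative) the piecewise-linear gain has slope K1 for |x| ≤ M and slope K2 beyond, as in eq. (1.52):

```python
import math

def piecewise_df(X, M, K1, K2):
    """Describing function of the piecewise-linear gain, eqs. (1.59)/(1.61)."""
    if X <= M:
        return K1
    r = M / X
    return K2 + (2.0 / math.pi) * (K1 - K2) * (math.asin(r) + r * math.sqrt(1.0 - r * r))

def piecewise_df_numeric(X, M, K1, K2, n=100000):
    """Fundamental-harmonic gain B1/X by midpoint-rule Fourier integration."""
    total = 0.0
    for i in range(n):
        wt = math.pi * (i + 0.5) / n
        x = X * math.sin(wt)
        if x > M:
            y = K1 * M + K2 * (x - M)      # upper outer segment
        elif x < -M:
            y = -K1 * M + K2 * (x + M)     # lower outer segment
        else:
            y = K1 * x                     # inner segment
        total += y * math.sin(wt)
    return (2.0 / math.pi) * total * (math.pi / n) / X
```

With K2 = 0 this reduces to the saturation describing function, which is a useful consistency check.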

1.9.6

Double valued non-linearity

For a given input, the output is double-valued; which of the two values results depends upon the history of the input. This type of non-linearity thus has inherent memory and is referred to as a memory-type non-linearity.

1.9.7

Limitations of Describing Function Analysis

1. Although this method can be used for the analysis of systems that have only one non-linearity, it cannot be used for systems with two or more non-linearities, because the product of two describing functions is not defined.

2. Non-linear devices which produce periodic waves having large-amplitude harmonics will cause filtering problems.

3. The location of the non-linear device may affect the stability.

1.9.8

Accuracy of Describing Function Analysis

Note that the amplitude and frequency of the limit cycle indicated by the intersection of the -1/KN locus and the G(j\omega) locus are approximate values. If the -1/KN locus and the G(j\omega) locus intersect almost perpendicularly, then the accuracy of the describing function analysis is generally good. (If the higher harmonics are all attenuated, the accuracy is excellent; otherwise the accuracy is good to fair.) If the G(j\omega) locus is tangent, or almost tangent, to the -1/KN locus, then the accuracy of the information obtained from the describing function analysis depends on how well G(j\omega) attenuates the higher harmonics. In some cases there is a sustained oscillation; in other cases there is none, depending on the nature of G(j\omega). One can say, however, that the system is almost on the verge of exhibiting a limit cycle when the -1/KN locus and the G(j\omega) locus are tangent to each other.


1.10

Popov's Criterion

V. M. Popov obtained a frequency-domain criterion, quite similar to the Nyquist criterion, as a sufficient condition for asymptotic stability of an important class of nonlinear systems. Popov's stability criterion applies to a closed-loop control system that consists of a non-linear element and a linear time-invariant plant, as shown in figure 1.4. The non-linearity is described by a functional relation that must lie in the first and third quadrants, as shown in figure 1.5.

Figure 1.4: Feedback control system for Popov's stability criterion.

Figure 1.5: Sector.

Many control systems with a single non-linearity can be modeled by the block diagram and the non-linear characteristic of figures 1.4 and 1.5, respectively. Popov's stability criterion is based on the following assumptions:

1. The transfer function G(s) of the linear part of the system has more poles than zeros, and there are no pole-zero cancellations.


2. The non-linear characteristic is bounded by k_1 and k_2 as shown in figure 1.5, i.e.

    k_1 \le N(e) \le k_2    (1.62)

Popov's criterion can be stated as follows: the closed-loop system is asymptotically stable if the Nyquist plot of G(j\omega) does not intersect or enclose the circle described by

    \left(x + \frac{k_1 + k_2}{2k_1 k_2}\right)^2 + y^2 = \left(\frac{k_2 - k_1}{2k_1 k_2}\right)^2    (1.63)

where x and y denote the real and imaginary coordinates of G(j\omega), respectively. Popov's stability criterion is sufficient but not necessary: if the preceding condition is not satisfied, it does not mean that the system is unstable. Popov's stability criterion can be illustrated geometrically, as shown in figure 1.6.

Figure 1.6: Geometric interpretation of Popov's criterion.

In practice, a great majority of system non-linearities have k_1 = 0. In that case, the circle of equation (1.63) degenerates into the straight line

    x = -\frac{1}{k_2}    (1.64)

For stability, the Nyquist plot of G(j\omega) must not intersect this line. This fact is illustrated in figure 1.7.

Figure 1.7: Geometric interpretation of Popov's criterion when k_1 = 0.

For linear systems, k_1 = k_2 = k, and it is interesting to observe that the circle of equation (1.63) degenerates into the point (-1/k + j0); Popov's stability criterion then becomes Nyquist's criterion.
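A small numerical sketch of the criterion (the plant and sector below are hypothetical choices, not from the notes): the critical circle of eq. (1.63) is built for a sector [k1, k2] with k1 > 0, and sampled Nyquist points of G(s) = 1/((s + 1)(s + 2)) are tested against it. A finite frequency sweep can only detect intersection, not enclosure, so this is a rough check rather than a proof.

```python
def circle_params(k1, k2):
    """Center (on the real axis) and radius of the critical circle of eq. (1.63)."""
    center = -(k1 + k2) / (2.0 * k1 * k2)
    radius = (k2 - k1) / (2.0 * k1 * k2)
    return center, radius

def avoids_circle(G, k1, k2, freqs):
    """True if every sampled Nyquist point G(jw) lies outside the critical circle.
    Note: sampling cannot rule out enclosure; this only screens for intersection."""
    c, r = circle_params(k1, k2)
    return all(abs(G(1j * w) - c) > r for w in freqs)

# Hypothetical linear part: two poles, no zeros, no cancellations (assumption 1)
G = lambda s: 1.0 / ((s + 1.0) * (s + 2.0))
freqs = [0.01 * i for i in range(1, 2000)]
```

Here |G(jω)| never exceeds G(0) = 0.5, while the circle for the sector [0.5, 1] is centered at -1.5 with radius 0.5, so the sampled plot stays well clear of it.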

Chapter 2 Digital Control Systems


2.1 Discrete-time control systems

Discrete-time control systems are control systems in which one or more variables can change only at discrete instants of time. These instants, which we shall denote by kT or t_k (k = 0, 1, 2, ...), may specify the times at which some physical measurement is performed or the times at which the memory of a digital computer is read out. The time interval between two discrete instants is taken to be sufficiently short that the data for the time between them can be approximated by simple interpolation. Discrete-time control systems differ from continuous-time control systems in that the signals for a discrete-time control system are in sampled-data form or in digital form. If a digital computer is involved in a control system as a digital controller, any sampled data must be converted into digital data. Continuous-time systems, whose signals are continuous in time, may be described by differential equations. Discrete-time systems, which involve sampled-data signals or digital signals, and possibly continuous-time signals as well, may be described by difference equations after appropriate discretization of the continuous-time signals.

2.2 Sampling

2.2.1 Types of Sampling

1. Periodic sampling: In periodic sampling, the sampling instants are equally spaced. This is the most conventional type of sampling.

2. Multiple-order sampling: Here a sampling pattern is repeated periodically; for example, t_{k+r} - t_k is kept constant for all k.


3. Multiple-rate sampling: In a control system having multiple loops, the largest time constant involved in one loop may be quite different from that in other loops. Hence, it may be advisable to sample slowly in a loop involving a large time constant, while in a loop involving only small time constants the sampling rate must be fast. Thus a digital control system may have different sampling periods in different feedback paths.

4. Random sampling: Sampling is done at random intervals.

2.2.2

Impulse Sampling

The output of the sampler is considered to be a train of impulses that begins at t = 0, with the sampling period equal to T and the strength of each impulse equal to the sampled value of the continuous-time signal at the corresponding sampling instant. The output of the sampler x^*(t) can be represented as

    x^*(t) = \sum_{k=0}^{\infty} x(kT)\,\delta(t - kT)    (2.1)

    x^*(t) = x(0)\delta(t) + x(T)\delta(t - T) + \dots + x(kT)\delta(t - kT) + \dots    (2.2)

Taking the Laplace transform,

    X^*(s) = x(0) + x(T)e^{-sT} + x(2T)e^{-2sT} + \dots    (2.3)

           = \sum_{k=0}^{\infty} x(kT)e^{-ksT}    (2.4)

Let us define

    e^{sT} = z    (2.5)

or

    s = \frac{1}{T}\ln z    (2.6)

Therefore,

    X^*(s)\Big|_{s = \frac{1}{T}\ln z} = \sum_{k=0}^{\infty} x(kT)\,z^{-k}    (2.7)

                                       = X(z)    (2.8)

Note that z is a complex variable.
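A quick numerical check of eq. (2.7), assuming the sampled signal is x(t) = e^{-at}; its z-transform has the well-known closed form z/(z - e^{-aT}), valid for |z| > e^{-aT}. The function name is illustrative.

```python
import math

def ztransform_truncated(samples, z):
    """Truncated X(z) = sum_k x(kT) z^{-k}, eq. (2.7)."""
    return sum(x * z ** (-k) for k, x in enumerate(samples))

T, a = 0.5, 1.0
samples = [math.exp(-a * k * T) for k in range(200)]   # x(kT) = e^{-akT}
z = 2.0                                                # inside the region of convergence
approx = ztransform_truncated(samples, z)
closed = z / (z - math.exp(-a * T))                    # known closed form for e^{-at}
```

With 200 terms the geometric tail is negligible, so the truncated sum matches the closed form to machine precision.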


2.3

Mapping between s-plane and z-plane

Figure 2.1: Mapping of the s-plane to the z-plane.

In the s-plane, the region of stability is the left-half plane. If the transfer function G(s) is transformed into the sampled-data transfer function G(z), the region of stability on the z-plane can be evaluated from the definition z = e^{sT}. The Laplace variable s can be represented as s = \sigma + j\omega. Thus,

    z = e^{sT} = e^{T(\sigma + j\omega)} = e^{\sigma T}e^{j\omega T} = e^{\sigma T}(\cos\omega T + j\sin\omega T) = e^{\sigma T}\angle\omega T    (2.9)

since (\cos\omega T + j\sin\omega T) = 1\angle\omega T. Each region of the s-plane can be mapped into a corresponding region on the z-plane (see Figure 2.1). Points that have positive values of \sigma are in the right half of the s-plane, region C. From eq. (2.9), the magnitudes of the mapped points are e^{\sigma T} > 1; thus, points in the right half of the s-plane map into points outside the unit circle on the z-plane. Points on the j\omega axis, region B, have zero values of \sigma and yield points on the z-plane with magnitude 1, the unit circle; hence, points on the j\omega axis in the s-plane map onto the unit circle in the z-plane. Finally, points in the s-plane that yield negative values of \sigma (the left half of the s-plane, region A) map into the inside of the unit circle in the z-plane. Thus, a digital control system is stable if all the poles of the closed-loop z-transfer function are inside the unit circle on the z-plane, unstable if any pole is outside the unit circle and/or if there are multiple poles on the unit circle, and marginally stable if one pole is on the unit circle and all other poles are inside the unit circle.


2.4

Hold Circuits

The function of the hold operation is to reconstruct the analog signal that has been transmitted as a train of pulse signals; that is, its purpose is to fill in the spaces between sampling instants and thus roughly reconstruct the original analog input signal.

2.5

The Z Transform

The z-transform of a time function x(t), where t is nonnegative, or of a sequence of values x(kT), where k takes zero or positive integer values and T is the sampling period, is defined by the following equation:

    X(z) = \mathcal{Z}[x(t)] = \mathcal{Z}[x(kT)] = \sum_{k=0}^{\infty} x(kT)z^{-k}    (2.10)

2.6

Stability Analysis
Consider the following closed-loop pulse transfer function:

    \frac{C(z)}{R(z)} = \frac{G(z)}{1 + GH(z)}    (2.11)

The stability of the system defined by equation (2.11) may be determined from the location of the closed-loop poles in the z-plane, that is, from the roots of the characteristic equation

    P(z) = 1 + GH(z) = 0    (2.12)

as follows:

1. For the system to be stable, the closed-loop poles (the roots of the characteristic equation) must lie within the unit circle in the z-plane. Any closed-loop pole outside the unit circle makes the system unstable.

2. If a simple pole lies at z = 1, the system becomes critically stable. The system also becomes critically stable if a single pair of complex-conjugate poles lies on the unit circle in the z-plane. Any multiple closed-loop pole on the unit circle makes the system unstable.

3. Closed-loop zeros do not affect the absolute stability and may therefore be located anywhere in the z-plane.


2.6.1

Jury's Stability Test

This stability test can be applied directly to the characteristic equation

    P(z) = a_n z^n + a_{n-1}z^{n-1} + \dots + a_1 z + a_0 = 0, \quad a_n > 0

without solving for the roots. There are two tests involved.

1. Test for the necessary conditions:

    P(1) > 0    (2.13)

    (-1)^n P(-1) > 0    (2.14)

2. Test for the sufficient conditions. One must first construct Jury's stability table as shown below.

    Row      z^0       z^1       z^2       ...   z^{n-k}   ...   z^{n-2}   z^{n-1}   z^n
    1        a_0       a_1       a_2       ...   a_{n-k}   ...   a_{n-2}   a_{n-1}   a_n
    2        a_n       a_{n-1}   a_{n-2}   ...   a_k       ...   a_2       a_1       a_0
    3        b_0       b_1       b_2       ...             ...   b_{n-2}   b_{n-1}
    4        b_{n-1}   b_{n-2}   ...                       ...   b_1       b_0
    5        c_0       c_1       c_2       ...   c_{n-2}
    6        c_{n-2}   c_{n-3}   ...       c_0
    ...
    2n-3     r_0       r_1       r_2

where

    b_k = \begin{vmatrix} a_0 & a_{n-k} \\ a_n & a_k \end{vmatrix}, \quad
    c_k = \begin{vmatrix} b_0 & b_{n-1-k} \\ b_{n-1} & b_k \end{vmatrix}, \quad \dots

The sufficient conditions for stability are |a_0| < a_n together with |b_0| > |b_{n-1}|, |c_0| > |c_{n-2}|, ..., |r_0| > |r_2|.
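The table above can be sketched in code. This is a minimal implementation under the standard Jury conditions stated above (coefficient ordering highest power first is an assumption of this sketch):

```python
def jury_stable(coeffs):
    """Jury stability test. coeffs = [a_n, ..., a_1, a_0], highest power first, a_n > 0.
    Returns True if all roots of P(z) lie strictly inside the unit circle."""
    a = list(coeffs)
    n = len(a) - 1
    P1 = sum(a)                                                # P(1)
    Pm1 = sum(c * (-1) ** (n - k) for k, c in enumerate(a))    # P(-1)
    if P1 <= 0 or (-1) ** n * Pm1 <= 0 or abs(a[-1]) >= a[0]:
        return False                                           # necessary conditions fail
    row = a[::-1]                                              # [a_0, a_1, ..., a_n]
    while len(row) > 3:
        m = len(row) - 1
        # next row: b_k = a_0*a_k - a_n*a_{n-k} (the 2x2 determinants of the table)
        new = [row[0] * row[k] - row[m] * row[m - k] for k in range(m)]
        if abs(new[0]) <= abs(new[-1]):
            return False                                       # |b_0| > |b_{n-1}| etc. fails
        row = new
    return True
```

Applied to E 2.1 below (roots 0.8, 0.5, -0.5, 0.4, all inside the unit circle) the test reports stable, while E 2.3 fails already at P(1) > 0.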

2.7

Exercise
E 2.1: Examine the stability of the following characteristic equation:¹

    P(z) = z^4 - 1.2z^3 + 0.07z^2 + 0.3z - 0.08 = 0

E 2.2: Examine the stability of the following characteristic equation:²

    P(z) = z^3 - 1.1z^2 - 0.1z + 0.2 = 0

E 2.3: A control system has the following characteristic equation:

    P(z) = z^3 - 1.3z^2 - 0.08z + 0.24 = 0

¹ Refer [2]
² Refer [2]

Determine the stability of the system.³

E 2.4: Consider the discrete-time unity-feedback control system (with sampling period T = 1 s) whose open-loop transfer function is given by

    G(z) = \frac{K(0.3679z + 0.2642)}{(z - 0.3679)(z - 1)}

Determine the range of gain K for stability by use of Jury's stability test.⁴

E 2.5: Consider the system described by

    y(k) - 0.6y(k-1) - 0.81y(k-2) + 0.67y(k-3) - 0.12y(k-4) = x(k)

where x(k) is the input and y(k) is the output of the system. Determine the stability of the system.⁵

³ Refer [2]
⁴ Refer [2]
⁵ Refer [2]
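The exercises can be cross-checked by computing the characteristic roots directly, since stability requires every root magnitude to be below 1 (Section 2.6). This sketch assumes NumPy is available:

```python
import numpy as np

def largest_root_magnitude(coeffs):
    """Largest |root| of P(z); stable iff < 1, critically stable if = 1."""
    return float(max(abs(np.roots(coeffs))))

# Characteristic polynomials of E 2.1, E 2.2, E 2.3 and E 2.5 (highest power first)
p1 = [1, -1.2, 0.07, 0.3, -0.08]
p2 = [1, -1.1, -0.1, 0.2]
p3 = [1, -1.3, -0.08, 0.24]
p4 = [1, -0.6, -0.81, 0.67, -0.12]
```

E 2.2 and E 2.5 each turn out to have a root exactly on the unit circle (z = 1 and z = -1 respectively), so they are critically stable rather than stable, while E 2.3 has a root outside the unit circle.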

Chapter 3 Stochastic Control


3.1 Introduction

Apart from being open- or closed-loop, a control system can be classified according to the physical nature of the laws obeyed by the system and the mathematical nature of the governing differential equations. To understand such classifications, one should have a clear understanding of the term state. The state of a system is any set of physical quantities which need to be specified at a given time in order to completely determine the behaviour of the system. From the above definition, one needs to emphasize the word determine. So what does the word determine mean, and how is it important in this context? The word determine means to calculate or to find out the present state of the system. Does that mean that there are non-deterministic states or systems? With this brief explanation, we shall move on to the classification of control systems as deterministic and non-deterministic systems. A control system is said to be deterministic when the set of physical laws governing the system is such that if the state of the system at some time (called the initial conditions) and the input are specified, then one can precisely predict the state at a later time. Since the characteristics of a deterministic system can be found merely by studying its response to initial conditions, such systems are often studied by taking the applied input to be zero. The response to initial conditions when the applied input is zero depicts how the system's state evolves from some initial time to a later time. Obviously, the evolution of only a deterministic system can be determined. Going back to the definition of state, it is clear that the latter is arrived at



keeping a deterministic system in mind, but the concept of state can also be used to describe systems that are not deterministic. Most (if not all) natural processes are non-deterministic systems. A system that is not deterministic is either stochastic, or has no laws (random) governing it. A stochastic (also called probabilistic) system has governing laws such that although the initial conditions (i.e. the state of the system at some time) are known in every detail, it is impossible to determine the system's state at a later time. In other words, based upon the stochastic governing laws and the initial conditions, one can only determine the probability of a state, rather than the state itself. When we toss a perfect coin, we are dealing with a stochastic law which states that both possible outcomes of the toss (head or tail) have an equal probability of 50 percent. A random system is one which has no apparent governing physical laws. While it is a human endeavour to ascribe physical laws to observed natural phenomena, some natural phenomena are so complex that it is impossible to pin down the physical laws obeyed by them. The human brain presently appears to be a random system. Environmental temperature and rainfall are outputs of a random system. It is very difficult in practice to distinguish between random and stochastic systems. Also, we are frequently unable in practice to distinguish between a non-deterministic system and a deterministic system whose future state we cannot predict because of an erroneous measurement of the initial condition. Such a problem of unpredictability is highlighted by a special class of deterministic systems, namely chaotic systems. A system is called chaotic if even a small change in the initial conditions produces an arbitrarily large change in the system's state at a later time. The double pendulum is a classic example of an unpredictable deterministic (chaotic) system.

Figure 3.1: Double Pendulum


While we are unable to predict the state of a random process, we can evolve a strategy to deal with such processes when they affect a control system in the form of noise. Such a strategy has to be based on the branch of mathematics dealing with unpredictable systems, called statistics.

3.2

Stochastic Process

A stochastic process (random process, or random function) can be defined as a family of random variables {X(t), t \in T}. The variables are indexed by the parameter t, which belongs to the set T, the index set or parameter set. It is further assumed that the random variables X(t) take values on the real line or in an n-dimensional Euclidean space.

3.3

Stochastic System

Since the initial state X(t_0) of a stochastic system is insufficient to determine the future state X(t), we have to make an educated guess as to what the future state might be, based upon a statistical analysis of many similar systems, taking the average of their future states at a given time t. For example, if we want to know how a stray dog would behave if offered a bone, we will offer N stray dogs N different bones, and record the state variables of interest, such as the intensity of the sound produced by each dog, the forward acceleration, the angular position of the dog's tail, and perhaps the intensity of the dog's bite, as functions of time. Suppose X_i(t) is the recorded state vector of the i-th dog. Then the mean state vector is defined as

    X_m(t) = \frac{1}{N}\sum_{i=1}^{N} X_i(t)    (3.1)

Note that X_m(t) is the state vector we would expect after studying N similar stochastic systems. Hence, it is also called the expected value of the state vector X(t) and is denoted by X_m(t) = E[X(t)]. The expected-value operator E[\cdot] has the following properties:

1. E[random signal] = mean of the random signal
2. E[deterministic signal] = deterministic signal
3. E[X_1(t) + X_2(t)] = E[X_1(t)] + E[X_2(t)]
4. E[CX(t)] = C\,E[X(t)], where C is a constant matrix
5. E[X(t)C] = E[X(t)]\,C, where C is a constant matrix
6. E[X(t)Y(t)] = E[X(t)]\,Y(t), where X(t) is a random signal and Y(t) is a deterministic signal
7. E[Y(t)X(t)] = Y(t)\,E[X(t)], where X(t) is a random signal and Y(t) is a deterministic signal

3.4

Autocorrelation

We can define another statistical quantity, namely the correlation matrix of the state vector, as follows:

    R_X(t, \tau) = \frac{1}{N}\sum_{i=1}^{N} X_i(t)X_i^T(\tau)    (3.2)

The two random states X_i(t) and X_i(\tau) are obtained by observing a random process X(t) at times t and \tau. The correlation matrix R_X(t, \tau) is a measure of a statistical property called correlation¹ among the different state variables, as well as between the same state variable at two different times. Assume that the two random variables are obtained at two different times, say t_1 and t_2. If the two time instants are very close, we say that X(t_1) and X(t_2) are highly correlated; if the two time instants are widely separated, the correlation between the states will be less. For two scalar variables x_1(t) and x_2(t), if the expected value of x_1(t)x_2(\tau) is zero, i.e. E[x_1(t)x_2(\tau)] = 0, where \tau is different from t, then x_1(t) and x_2(t) are said to be uncorrelated. Comparing eqs. (3.1) and (3.2), it is clear that the correlation matrix is the expected value of the matrix X_i(t)X_i^T(\tau), i.e. R_X(t, \tau) = E[X(t)X^T(\tau)]. When t = \tau, the correlation matrix R_X(t, t) = E[X(t)X^T(t)] is called the covariance matrix. There are special stochastic systems, called stationary systems, for which all the statistical properties, such as the mean value X_m(t) and the correlation matrix R_X(t, \tau), do not change with a translation in time, i.e. when t is replaced by t + t_1. Hence, for a stationary system, X_m(t + t_1) = X_m(t) = C (a constant) and R_X(t + t_1, \tau + t_1) = R_X(t, \tau) for all values of t_1. Many stochastic systems of interest are assumed to be stationary, which greatly simplifies their statistical analysis.
¹ The correlation (or autocorrelation, or inter-relationship) between two random variables is a measure of the similarity between the variables.


3.4.1

Properties

Let X(t) be the state of a stationary system. Then the autocorrelation function of X(t) has the following properties.

1. R_X(0) = E[X^2(t)].
Proof: We know that R_X(t_1, t_2) = E[X(t_1)X(t_2)]. Let t_2 = t_1 + \tau. Then R_X(\tau) = R_X(t, t + \tau) = E[X(t)X(t + \tau)]. Putting \tau = 0,

    R_X(0) = R_X(t, t) = E[X^2(t)]

2. The autocorrelation function R_X(\tau) is an even function, that is, R_X(-\tau) = R_X(\tau).
Proof: R_X(\tau) = E[X(t)X(t + \tau)] and R_X(-\tau) = E[X(t)X(t - \tau)]. Let a = t - \tau. Then R_X(-\tau) = E[X(a + \tau)X(a)] = R_X(\tau).

3. R_X(0) \ge |R_X(\tau)|.

4. The autocorrelation function of X(t) is a finite-energy function, that is,

    \int_{-\infty}^{\infty} R_X(\tau)\,d\tau < \infty

5. If a random process X(t) has no periodic components and E[X(t)] \ne 0, then

    \lim_{|\tau|\to\infty} R_X(\tau) = \left[E[X(t)]\right]^2

6. If X(t) has a periodic component, then the autocorrelation function R_X(\tau) will also have a periodic component with the same period.


7. If a random process X(t) is ergodic, zero mean, and has no periodic component, then

    \lim_{|\tau|\to\infty} R_X(\tau) = 0

8. If there are two random processes X(t) and Y(t) such that Z(t) = X(t) + Y(t), then

    R_Z(\tau) = R_{X+Y}(\tau) = R_X(\tau) + R_Y(\tau) + R_{XY}(\tau) + R_{YX}(\tau)

9. R_X(\tau) cannot have an arbitrary shape.
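The ensemble definition (3.2) can be illustrated with the classic random-phase sinusoid X(t) = A sin(ωt + φ), with φ uniform on (0, 2π), for which R_X(τ) = (A²/2) cos ωτ; the example is illustrative and not from the notes. A Monte Carlo sketch:

```python
import math, random

def ensemble_autocorr(N, t, tau, omega=1.0, A=1.0, seed=0):
    """R_X(t, t+tau) estimated as (1/N) * sum_i X_i(t) X_i(t+tau) over N sample
    functions of the random-phase sinusoid; theory gives (A**2 / 2) * cos(omega*tau)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(N):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        acc += A * math.sin(omega * t + phi) * A * math.sin(omega * (t + tau) + phi)
    return acc / N
```

The estimate depends only on the shift τ, not on t, and it is even in τ, illustrating stationarity and property 2 above.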

3.5

Crosscorrelation

Let {X(t); t \ge 0} and {Y(t); t \ge 0} be two random processes. The crosscorrelation function is defined as:

    R_{XY}(t_1, t_2) = E[X(t_1)Y(t_2)]    (3.3)

3.6

Ergodicity

The expected value X_m(t) and the correlation matrix R_X(t, \tau) are examples of ensemble statistical properties, i.e. properties of an ensemble (or group) of N samples. Clearly, the accuracy with which the expected value X_m(t) approximates the actual state vector X(t) depends upon the number of samples N; if N is increased, the accuracy is improved. For a random system, an infinite number of samples would be required for predicting the state vector, i.e. N = \infty. However, we can usually obtain good accuracy with a finite (but large) number of samples. Of course, the samples must be taken in as many different situations as possible. For example, if we confined our sample of stray dogs to our own neighborhood, the accuracy of the ensemble properties would suffer. Instead, we should pick the dogs from many different parts of the town, and repeat our experiment at various times of the day, month and year. However, as illustrated by the stray-dog example, one has to go to great lengths merely to collect sufficient data for arriving at an accurate ensemble average. Finding an ensemble average may in some cases even be impossible, such as trying to calculate the expected value of annual rainfall in Kerala, which would require constructing N Keralas and taking the ensemble average of the annual rainfall recorded in all of them. However, we can measure the annual rainfall in Kerala for many years and take the time average by dividing the


total rainfall by the number of years. Taking a time average is entirely different from taking the ensemble average, especially if the system is nonstationary. However, there is a sub-class of stationary systems, called ergodic systems, for which the time average is the same as the ensemble average (or expectation):

    E[X(t)] = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} X(t)\,dt    (3.4)

The symbol E denotes mathematical expectation, that is, integration with respect to the measure P.² In general, it is very difficult to determine experimentally or theoretically whether a certain system is in fact ergodic; measurements of the necessary quality are very difficult to carry out, and even then the statistical interpretation of the experimental results is difficult. The ergodic assumption obviously has very great technical implications: what is not generally available in practice is a representative collection of sample functions from a very large collection of identically prepared systems. What is available is a certain sample function from one system, on which one can make measurements over more or less limited observation periods. In general, then, ergodic properties must be considered a hypothesis. Ergodic conditions are nearly always assumed to hold because of their great technical utility, and because experience has shown that large errors do not seem to be made in this way. For those stationary systems that are not ergodic, it is inaccurate to substitute a time average for the ensemble average, but we still do so routinely because there is no other alternative (there is only one Kerala in the world). Hence, we will substitute time-averaged statistics for the ensemble statistics of all stationary systems. For a stationary system, by taking the time average, we can evaluate the mean X_m and the correlation matrix R_X(\tau) over a large time period T as follows:

    X_m = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} X(t)\,dt    (3.5)

    R_X(\tau) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} X(t)X^T(t + \tau)\,dt    (3.6)

Note that since the system is stationary, the mean value X_m is a constant, and the correlation matrix R_X(\tau) is a function only of the time shift \tau.
² Probability measure.
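For the same random-phase sinusoid used earlier (an ergodic example, not from the notes), the time average of eq. (3.6) computed from a single sample path reproduces the ensemble value (A²/2) cos ωτ, illustrating the ergodic property. A quadrature sketch over a long but finite window:

```python
import math

def time_average_autocorr(tau, T=2000.0, n=200000, omega=1.0, phi=0.4):
    """(1/2T) * integral of X(t) X(t+tau) dt over (-T, T) for ONE sample path
    X(t) = sin(omega*t + phi); for an ergodic process this matches the ensemble
    value (A**2 / 2) * cos(omega*tau), here 0.5 * cos(tau)."""
    dt = 2.0 * T / n
    acc = 0.0
    for i in range(n):
        t = -T + (i + 0.5) * dt          # midpoint rule over the window
        acc += math.sin(omega * t + phi) * math.sin(omega * (t + tau) + phi)
    return acc * dt / (2.0 * T)
```

The residual error is of order 1/T from the finite window, so a window of a few thousand periods already agrees with the ensemble result to three decimals.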


3.7 Stationary Process

3.7.1 Strict Sense Stationary Process

A strict (or strong) sense stationary process is a stochastic process whose joint probability distribution does not change when shifted in time or space. Consequently, parameters such as the mean and variance, if they exist, also do not change over time or position.

3.7.2

Wide Sense Stationary (WSS) Process

A weaker form of stationarity commonly employed is known as wide (or weak or covariance) sense stationary process. WSS random processes only require that the 1st and 2nd moments do not vary with respect to time. Any strictly stationary process which has a mean and a covariance is also WSS.

3.8

Power Spectral Density

For frequency-domain analysis of stochastic systems, it is useful to define the power spectral density matrix of a continuous-time random process X(t), S_X(\omega), as the Fourier transform of the correlation matrix R_X(\tau):

    S_X(\omega) = \int_{-\infty}^{\infty} R_X(\tau)e^{-i\omega\tau}\,d\tau    (3.7)

where \omega is the frequency of excitation (i.e. the frequency of an oscillatory input applied to the stochastic system). Taking the inverse Fourier transform of S_X(\omega), we obtain

    R_X(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_X(\omega)e^{i\omega\tau}\,d\omega    (3.8)

The power spectral density matrix S_X(\omega) is a measure of how the power of the random signal X(t) varies with the frequency \omega.

3.8.1

Properties of S_X(\omega)

1. S_X(\omega) is real and S_X(\omega) \ge 0

2. S_X(-\omega) = S_X(\omega)

3. E[X^2(t)] = R_X(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_X(\omega)\,d\omega
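Equation (3.7) can be checked for the standard scalar example R_X(τ) = e^{-α|τ|} (an illustrative choice, not from the notes), whose spectral density is known in closed form to be 2α/(α² + ω²):

```python
import math

def psd_from_autocorr(omega, alpha=1.0, L=40.0, n=200000):
    """S_X(omega) = integral of R_X(tau) e^{-j omega tau} d tau, eq. (3.7),
    for R_X(tau) = exp(-alpha*|tau|); evenness lets us integrate tau > 0 and double."""
    dt = L / n
    acc = 0.0
    for i in range(n):
        tau = (i + 0.5) * dt
        acc += math.exp(-alpha * tau) * math.cos(omega * tau)  # odd imaginary part cancels
    return 2.0 * acc * dt
```

The numeric value at ω = 0.5 matches the closed form 2/(1 + 0.25) = 1.6, and the result is even in ω, consistent with property 2.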


3.9

Gaussian Process

A stochastic process is said to be normal or Gaussian if the joint distribution³ of x(t_1), x(t_2), ..., x(t_k) is normal for every k and all t_i \in T, i = 1, 2, ..., k. A normal process is completely characterized by the mean values

    m_i = E[x(t_i)], \quad i = 1, 2, \dots, k    (3.9)

and the covariances

    r_{ij} = \mathrm{cov}[x(t_i), x(t_j)] = E\left\{[x(t_i) - m_i][x(t_j) - m_j]^T\right\}, \quad i, j = 1, 2, \dots, k    (3.10)

If we introduce the vector m and the matrix R defined by

    m = \begin{bmatrix} m_1 \\ m_2 \\ \vdots \\ m_k \end{bmatrix}    (3.11)

    R = \begin{bmatrix} r_{11} & r_{12} & \cdots & r_{1k} \\ r_{21} & r_{22} & \cdots & r_{2k} \\ \vdots & & & \vdots \\ r_{k1} & r_{k2} & \cdots & r_{kk} \end{bmatrix}    (3.12)

where R is assumed to be nonsingular, we find that the joint distribution of x(t_1), x(t_2), ..., x(t_k) is

    f(X) = \frac{1}{(2\pi)^{k/2}|R|^{1/2}}\, e^{-\frac{1}{2}(X - m)^T R^{-1}(X - m)}    (3.13)

Here,

    X = \begin{bmatrix} x(t_1) \\ x(t_2) \\ \vdots \\ x(t_k) \end{bmatrix}    (3.14)

Thus a normal process is completely characterized⁴ by the mean value m and the covariance R if they are defined for all t_1, t_2, \dots, t_k \in T.
³ Joint pdf.
⁴ Kolmogorov's theorem.
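Equation (3.13) can be written out explicitly for k = 2; the sketch below uses the hand-computed 2×2 inverse rather than a linear-algebra library:

```python
import math

def mvn_pdf(x, m, R):
    """Joint normal density of eq. (3.13) for k = 2, with R nonsingular."""
    det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    inv = [[R[1][1] / det, -R[0][1] / det],
           [-R[1][0] / det, R[0][0] / det]]
    d = [x[0] - m[0], x[1] - m[1]]
    # quadratic form (X - m)^T R^{-1} (X - m)
    q = sum(d[i] * inv[i][j] * d[j] for i in range(2) for j in range(2))
    return math.exp(-0.5 * q) / (2.0 * math.pi * math.sqrt(det))
```

At the mean with R = I the density equals 1/(2π), which follows directly from eq. (3.13) with q = 0 and |R| = 1.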


3.10

Markov Process

For a stochastic process with the Markov property, roughly speaking, given the present, the future is independent of the past. If {X(t), t \in T} is a stochastic process such that, given the value X(t_2), the values of X(t), t > t_2, do not depend on the values of X(t_1), t_1 < t_2, then the process is said to be a Markov process. If the state space of a Markov process is discrete, it is called a Markov chain.

3.11

Gauss-Markov Process

A Gauss-Markov process can be represented by the state vector of a continuous linear dynamic system forced by a Gaussian purely random process, where the initial state vector is Gaussian [5]:

    \dot{x}(t) = F(t)x(t) + G(t)w(t)    (3.15)

where x(t) is an n-vector and w(t) is an m-vector, with

Figure 3.2: Representation of a Gauss-Markov process.

    E[x(t_0)] = \bar{x}(t_0)    (3.16)

    E\left\{[x(t_0) - \bar{x}(t_0)][x(t_0) - \bar{x}(t_0)]^T\right\} = X(t_0)    (3.17)

and w(t) is a Gaussian purely random process, with

    E[w(t)] = \bar{w}(t)    (3.18)

Figure 3.2 is a block diagram representation of the Gauss-Markov random process.
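A minimal simulation sketch of eq. (3.15) in the scalar case, assuming an Euler-Maruyama discretization and a white noise with spectral density q (the symbols a and q are illustrative, not from the notes). For the stable scalar system xdot = -a x + w, the stationary variance should approach q/(2a):

```python
import math, random

def simulate_gauss_markov(a=1.0, q=2.0, dt=0.01, steps=500000, seed=1):
    """Euler-Maruyama simulation of the scalar Gauss-Markov model
    xdot = -a*x + w(t), E[w] = 0, spectral density q.
    Returns the sample mean and variance along one long path;
    the stationary variance is q/(2a) up to O(dt) discretization bias."""
    rng = random.Random(seed)
    x, acc, acc2 = 0.0, 0.0, 0.0
    for _ in range(steps):
        x += -a * x * dt + math.sqrt(q * dt) * rng.gauss(0.0, 1.0)
        acc += x
        acc2 += x * x
    mean = acc / steps
    return mean, acc2 / steps - mean * mean
```

With a = 1 and q = 2 the theoretical stationary variance is 1, and the long-path estimate lands within a few percent of it, which also illustrates the ergodicity discussion of Section 3.6.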


3.12

Kalman Filter

The credit for inventing this algorithm is usually given to Dr. Rudolph E. Kalman, who presented the key ideas in the late 1950s and early 1960s. Although the Kalman filter is an alternative representation of the Wiener filter, Kalman's contribution was to tie the state estimation problem to state-space models and to the (then new) concepts of controllability and observability.
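As an illustrative sketch (the model and the numbers are not from the notes), a scalar Kalman filter for the random-walk system x_{k+1} = x_k + w_k with measurement z_k = x_k + v_k, where Q and R are the process- and measurement-noise variances:

```python
def kalman_scalar(zs, Q=0.01, R=1.0, x0=0.0, P0=1.0):
    """Scalar Kalman filter for the random-walk model x_{k+1} = x_k + w, z_k = x_k + v.
    Returns the sequence of estimates and the final error covariance."""
    x, P, est = x0, P0, []
    for z in zs:
        P = P + Q                 # time update (predict): covariance grows by Q
        K = P / (P + R)           # Kalman gain
        x = x + K * (z - x)       # measurement update: blend prediction and measurement
        P = (1.0 - K) * P         # posterior covariance shrinks
        est.append(x)
    return est, P
```

The covariance recursion converges to the fixed point P = (-Q + sqrt(Q^2 + 4QR))/2 of the scalar Riccati equation, about 0.095 for the values above, regardless of the measurements.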

Appendix A Fourier Series


A.1 Fourier Series

Fourier series is used to express a function as a series of sines and cosines. Most single-valued functions can be expressed in the form

    \frac{1}{2}a_0 + a_1\cos x + a_2\cos 2x + \dots + b_1\sin x + b_2\sin 2x + \dots

A.2

Euler's Formula

The Fourier series of the function f(x) in the interval \alpha < x < \alpha + 2\pi is given by

    f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos nx + \sum_{n=1}^{\infty} b_n\sin nx

where

    a_0 = \frac{1}{\pi}\int_{\alpha}^{\alpha + 2\pi} f(x)\,dx

    a_n = \frac{1}{\pi}\int_{\alpha}^{\alpha + 2\pi} f(x)\cos nx\,dx

    b_n = \frac{1}{\pi}\int_{\alpha}^{\alpha + 2\pi} f(x)\sin nx\,dx
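Euler's formulas can be evaluated numerically. The sketch below uses the Case I interval 0 < x < 2π (defined in the next subsection) and a square-wave example that is not from the appendix; it should reproduce the classic coefficients b_n = 4/(nπ) for odd n, with all a_n zero.

```python
import math

def fourier_coeffs(f, n, samples=200000):
    """a_n and b_n from Euler's formulas over 0 < x < 2*pi, by midpoint-rule quadrature."""
    h = 2.0 * math.pi / samples
    an = bn = 0.0
    for i in range(samples):
        x = (i + 0.5) * h
        an += f(x) * math.cos(n * x)
        bn += f(x) * math.sin(n * x)
    return an * h / math.pi, bn * h / math.pi

# Odd square wave: +1 on (0, pi), -1 on (pi, 2*pi)
square = lambda x: 1.0 if x < math.pi else -1.0
```

Since the square wave is odd, only sine terms survive, consistent with Section A.5.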

A.2.1

Other forms of Euler's formula


Case I: \alpha = 0. The interval now becomes 0 < x < 2\pi.

    a_0 = \frac{1}{\pi}\int_0^{2\pi} f(x)\,dx

    a_n = \frac{1}{\pi}\int_0^{2\pi} f(x)\cos nx\,dx

    b_n = \frac{1}{\pi}\int_0^{2\pi} f(x)\sin nx\,dx

Case II: \alpha = -\pi. The interval now becomes -\pi < x < \pi.

    a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\,dx

    a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx

    b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx

A.3

Conditions for a Fourier expansion

1. f(x) is periodic, single-valued and finite.

2. f(x) has a finite number of discontinuities in any one period.

3. f(x) has, at the most, a finite number of maxima and minima.

A.4 Functions having points of discontinuity

A.5 Even and Odd functions

A function f(x) is said to be even if f(-x) = f(x) and odd if f(-x) = -f(x). Then

    \int_{-c}^{c} f(x)\,dx = 2\int_0^{c} f(x)\,dx, when f(x) is an even function
    \int_{-c}^{c} f(x)\,dx = 0, when f(x) is an odd function

Case I: When f(x) is an even function,

    a_0 = \frac{1}{c}\int_{-c}^{c} f(x)\,dx = \frac{2}{c}\int_0^{c} f(x)\,dx

    a_n = \frac{1}{c}\int_{-c}^{c} f(x)\cos\frac{n\pi x}{c}\,dx = \frac{2}{c}\int_0^{c} f(x)\cos\frac{n\pi x}{c}\,dx

    b_n = \frac{1}{c}\int_{-c}^{c} f(x)\sin\frac{n\pi x}{c}\,dx = 0

Hence, if a periodic function f(x) is even, its Fourier expansion contains only cosine terms.

Case II: When f(x) is an odd function,

    a_0 = \frac{1}{c}\int_{-c}^{c} f(x)\,dx = 0

    a_n = \frac{1}{c}\int_{-c}^{c} f(x)\cos\frac{n\pi x}{c}\,dx = 0

    b_n = \frac{1}{c}\int_{-c}^{c} f(x)\sin\frac{n\pi x}{c}\,dx = \frac{2}{c}\int_0^{c} f(x)\sin\frac{n\pi x}{c}\,dx

Hence, if f(x) is odd, the expansion will contain only sine terms.

Appendix B Useful Formulas and Equations


B.1 Trigonometry
    \sin(A \pm B) = \sin A\cos B \pm \cos A\sin B
    \cos(A \pm B) = \cos A\cos B \mp \sin A\sin B
    \sin 2A = 2\sin A\cos A
    \cos 2A = 1 - 2\sin^2 A = 2\cos^2 A - 1

             90^\circ - A    180^\circ - A    270^\circ - A    360^\circ - A
    sin      \cos A          \sin A           -\cos A          -\sin A
    cos      \sin A          -\cos A          -\sin A          \cos A

B.2

Calculus
    \int \sin nx\,dx = -\frac{\cos nx}{n}

    \int \sin^2 nx\,dx = \frac{x}{2} - \frac{\sin 2nx}{4n}

    \int \sin nx\cos nx\,dx = \frac{\sin^2 nx}{2n}

References
[1] Hassan K. Khalil, Nonlinear Systems.
[2] Katsuhiko Ogata, Discrete-Time Control Systems.
[3] D. Roy Choudhary, Modern Control Engineering.
[4] Norman S. Nise, Control Systems Engineering.
[5] Arthur E. Bryson, Jr. and Yu-Chi Ho, Applied Optimal Control: Optimization, Estimation and Control.
[6] B. S. Manke, Control System Design.
[7] S. M. Tripathi, Modern Control Systems: An Introduction.


Index
Autocorrelation, 42
Autonomous system, 8
Chaos, 6
Correlation, 42
Crosscorrelation, 44
Delta method, 15
Describing function analysis, 15
Deterministic system, 39
Equilibrium point, 9
Ergodicity, 44
Finite escape time, 5
Frequency entrainment, 6
Gaussian Process, 47
Ideal relay non-linearity, 22
Isocline method, 14
Jacobian Matrix, 10
Jury's test, 37
Limit Cycle, 6
Markov Chain, 48
Markov Process, 48
Multiple isolated equilibria, 5
Phase plane analysis, 12
Popov's Criterion, 30
Power Spectral Density, 46
Saturation non-linearity, 19
Sub-harmonic oscillations, 6
System model, 7
