
III.

The State Space Equations and Their Time Domain Solution

Introduction

This section of notes concentrates on three areas:
1. A general notation for the state variable treatment of dynamic systems (the so-called standard state space form).
2. Techniques for putting general systems into standard form.
3. Methods for the time domain solution of the state equations, including both analytical methods and numerical methods.

Several examples are used to illustrate these techniques and a final numerical example in Matlab contrasts three different solution methods. This information is organized into the following subsections:

The State Space Representation
  o Example 3.1
Conversion to State Form
  o Nth Order Linear Differential Equations (Examples 3.2 and 3.3)
  o Mixed Algebraic and Differential Equations (Example 3.4)
  o Linearization of Nonlinear Systems (Example 3.5)
Analytic Solution to Linear Stationary Systems
  o Homogeneous Systems
  o Non-Homogeneous Systems
  o Time-Varying Systems
Discretization of Linear Stationary Systems
Numerical Integration of the State Equations
An Example Illustrating Various Solution Schemes

The State Space Representation (Continuous Systems)

The general state variable form for continuous dynamic systems can be written as

dx(t)/dt = A x(t) + f(x, t) + g(t)        (3.1)

and
y(t) = C x(t) + D u(t)        (3.2)

where f(x, t) contains all the variable coefficient and nonlinear terms. To complete this description, we note that the input, output, and state vectors do not generally contain the same number of elements. To treat this, one can write g(t) as

g(t) = B u(t)

and allow the B, C, and D matrix operators to be rectangular matrices (i.e. non-square). The system matrix, A, is required to be square. A particular element of eqn. (3.1) can be written as
dx_i(t)/dt = Σ_{j=1}^{N} a_ij x_j(t) + Σ_{k=1}^{K} f_ik(x, t) + Σ_{m=1}^{M} b_im u_m(t)        (3.3)

and for eqn. (3.2), we have


y_i(t) = Σ_{j=1}^{N} c_ij x_j(t) + Σ_{m=1}^{M} d_im u_m(t)        (3.4)

where we have an Nth order system with M inputs and a maximum of K nonlinear terms for any one equation. Example 3.1 illustrates the use of eqns. (3.1 and 3.2) for a simple second order system. The most general form of eqn. (3.1) can be written as
dx(t)/dt = f(x, u, t)        (3.5)

where the vector function f(x, u, t) contains the full inter-relationship among all the state variables and inputs. The use of this form usually implies that only numerical simulation techniques will be used to study the system's dynamic behavior.

Example 3.1  Standard State Space Form

Problem Statement:

Put the following 2nd order system into standard state form:
dx1(t)/dt = 7 x1(t) + 3 x2(t) + 4t x1(t) + x1(t) x2(t) - u1(t) + 2 u3(t)
dx2(t)/dt = 9 x1(t) - 5 x2(t) - 3t x2(t) + 4 u2(t)

and
y(t) = x1(t) + x2(t) - 2 u2(t)

Problem Solution: Putting this 2nd order system into standard state space form, gives
dx(t)/dt = A x(t) + f(x, t) + B u(t)

and
y(t) = c^T x(t) + d^T u(t)

where

x(t) = [ x1(t) ; x2(t) ],    u(t) = [ u1(t) ; u2(t) ; u3(t) ]    (semicolons separate matrix rows)

A = [ 7  3 ; 9  -5 ],    f(x, t) = [ 4t x1(t) + x1(t) x2(t) ; -3t x2(t) ],    B = [ -1  0  2 ; 0  4  0 ]
and
c^T = [ 1  1 ],    d^T = [ 0  -2  0 ]
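Systems written in the general form of eqn. (3.1) or (3.5) are normally studied by numerical simulation. As a minimal sketch of that workflow (the matrices, nonlinear term, and input below are illustrative placeholders, not the values of Example 3.1), the right hand side can be coded as a function handle and passed to one of Matlab's ODE solvers:

% Sketch: numerically simulate dx/dt = A*x + f(x,t) + B*u(t), eqn. (3.1).
% All numerical values below are hypothetical placeholders.
A = [-1  0.5; 0.2 -2];            % system matrix (illustrative)
B = [1 0; 0 1];                   % input matrix (illustrative)
f = @(x,t) [0.1*x(1)*x(2); 0];    % variable coefficient / nonlinear terms
u = @(t) [1; 0];                  % step input on the first channel

xdot = @(t,x) A*x + f(x,t) + B*u(t);     % right hand side of eqn. (3.1)
[t,x] = ode45(xdot, [0 10], [0; 0]);     % integrate over 0 <= t <= 10
plot(t, x), xlabel('time'), ylabel('states')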

Conversion to State Form

The goal here is to present a recipe for the conversion of certain classes of problems into state form. Three specific areas to be addressed are:
1. State space representation of nth order systems,
2. Treatment of mixed algebraic and differential equations, and
3. Linearization of nonlinear systems.

Conversion of nth Order Linear Differential Equations (with derivatives)

Given the following nth order system,

d^n y/dt^n + a1 d^(n-1)y/dt^(n-1) + ... + a_(n-1) dy/dt + a_n y(t)
        = b0 d^n u/dt^n + b1 d^(n-1)u/dt^(n-1) + ... + b_(n-1) du/dt + b_n u(t)        (3.6)

let
x1(t) = y(t) - β0 u(t)

x2(t) = dy(t)/dt - β0 du(t)/dt - β1 u(t) = dx1(t)/dt - β1 u(t)

x3(t) = d^2y(t)/dt^2 - β0 d^2u(t)/dt^2 - β1 du(t)/dt - β2 u(t) = dx2(t)/dt - β2 u(t)

or, for the jth state variable,

x_j(t) = d^(j-1)y(t)/dt^(j-1) - β0 d^(j-1)u(t)/dt^(j-1) - ... - β_(j-1) u(t) = dx_(j-1)(t)/dt - β_(j-1) u(t)        (3.7)

In these expressions, β_j is defined as

β0 = b0
β1 = b1 - a1 β0
β2 = b2 - a1 β1 - a2 β0
...
β_j = b_j - a1 β_(j-1) - ... - a_(j-1) β1 - a_j β0        (3.8)

Using eqns. (3.7) and (3.8), one gets the following SISO matrix representation of eqn. (3.6):
dx(t)/dt = A x(t) + b u(t)        (3.9)
y(t) = c^T x(t) + d u(t)        (3.10)

with

x = [ x1 ; x2 ; ... ; xn ],    b = [ β1 ; β2 ; ... ; βn ],    c^T = [ 1  0  ...  0 ],    d = β0 = b0        (3.11)

and
A = [ 0  1  0  ...  0 ; 0  0  1  ...  0 ; ... ; 0  0  0  ...  1 ; -a_n  -a_(n-1)  ...  -a_2  -a_1 ]        (3.12)

Example 3.2 illustrates the use of the above general conversion algorithm. Note that for the special case where the derivatives of the forcing function are not present (i.e. for RHS = u(t) only), we have b^T = [ 0  0  ...  1 ] and d = 0. This latter situation is actually the usual situation, and it is much easier to visualize and implement the conversion algorithm (since all the β_i are zero except β_n = 1).

Example 3.2  Conversion of 3rd Order System to State Form (General Case)

Problem Statement: Given
d^3y/dt^3 + 6 d^2y/dt^2 - 8 dy/dt + 4 y(t) = 2 du/dt + 7 u(t)

convert this 3rd order linear stationary system to state form.

Problem Solution: Using the recipe given previously, we have
x_j(t) = dx_(j-1)(t)/dt - β_(j-1) u(t)

with
x1 = y - β0 u = y

x2 = dy/dt - β0 du/dt - β1 u = dy/dt

x3(t) = d^2y/dt^2 - β0 d^2u/dt^2 - β1 du/dt - β2 u = d^2y/dt^2 - 2u

and

β_j = b_j - a1 β_(j-1) - ... - a_j β0

with

β0 = b0 = 0
β1 = b1 - a1 β0 = 0
β2 = b2 - a1 β1 - a2 β0 = 2
β3 = b3 - a1 β2 - a2 β1 - a3 β0 = 7 - 6(2) = -5

Therefore

d/dt [ x1(t) ; x2(t) ; x3(t) ] = [ 0  1  0 ; 0  0  1 ; -4  8  -6 ] [ x1(t) ; x2(t) ; x3(t) ] + [ 0 ; 2 ; -5 ] u(t)
and

y(t) = [ 1  0  0 ] [ x1(t) ; x2(t) ; x3(t) ] + [ 0 ] u(t)
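As a quick numerical check of this conversion (a sketch that assumes the sign pattern shown above for the defining ODE), the state-space quadruple can be converted back to a transfer function in Matlab and compared with the two sides of the original equation:

% Check the Example 3.2 conversion: ss2tf should recover the polynomials of
%   y''' + 6y'' - 8y' + 4y = 2u' + 7u   (as written above)
A = [0 1 0; 0 0 1; -4 8 -6];
b = [0; 2; -5];
c = [1 0 0];
d = 0;
[num, den] = ss2tf(A, b, c, d);
disp(num)    % expected: [0 0 2 7]   -> numerator 2s + 7
disp(den)    % expected: [1 6 -8 4]  -> denominator s^3 + 6s^2 - 8s + 4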
Example 3.3 illustrates this less general, but very common case. It also highlights the importance of the state matrix in determining the solutions to linear stationary systems. We will focus further on the properties of the state matrix in later subsections.

Example 3.3  Conversion of 3rd Order System to State Form (Usual Case)

Problem Statement: Given

d^3y/dt^3 + a1 d^2y/dt^2 + a2 dy/dt + a3 y(t) = u(t)

convert this system to state form, noting that the RHS does not contain derivatives of u(t). Also show that the eigenvalues of the resultant system matrix are identical to the roots of the characteristic equation associated with the homogeneous solution to the given 3rd-order equation. Problem Solution:

For this case, let's make the following substitutions:


x1 = y

dx1/dt = x2 = dy/dt
dx2/dt = x3 = d^2y/dt^2

or, in general,

x_j(t) = dx_(j-1)(t)/dt

with x1 = y.

Now, from the defining equation with dx3/dt = d^3y/dt^3, we have

dx3(t)/dt = -a1 x3 - a2 x2 - a3 x1 + u

Therefore, the three equations for dx1/dt, dx2/dt, and dx3/dt can be put into matrix form, giving

d/dt [ x1(t) ; x2(t) ; x3(t) ] = [ 0  1  0 ; 0  0  1 ; -a3  -a2  -a1 ] [ x1(t) ; x2(t) ; x3(t) ] + [ 0 ; 0 ; 1 ] u(t)

Also, since the solution of interest is y(t) (a single output), the general expression for the output equation reduces to

y(t) = [ 1  0  0 ] [ x1(t) ; x2(t) ; x3(t) ] + [ 0 ] u(t)

This case is very similar to that given in Ex. 3.2, except now, with only u(t) on the RHS of the defining ODE, the b vector can be written as b = [ 0  0  1 ]^T. In addressing the second part of the problem, recall from Section II that the characteristic equation associated with the homogeneous form of the given equation can be written as
λ^3 + a1 λ^2 + a2 λ + a3 = 0

Now, the eigenvalues of the state matrix are given by


det(A - λI) = det [ -λ  1  0 ; 0  -λ  1 ; -a3  -a2  -a1-λ ] = 0

and expanding along row 1 gives


det(A - λI) = -λ(λ^2 + a1 λ + a2) - 1(a3) = -(λ^3 + a1 λ^2 + a2 λ + a3) = 0

This is exactly the same result as above. The eigenvalues of the state matrix for a linear stationary system are identical to the roots of the characteristic equation of the original linear constant coefficient ODE. Thus, det(A - λI) = 0 is also referred to as the characteristic equation.
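This equivalence is easy to confirm numerically. The short Matlab sketch below (the coefficient values are arbitrary placeholders) compares the eigenvalues of the companion-form state matrix with the roots of the characteristic polynomial:

% Compare eig(A) with the roots of s^3 + a1*s^2 + a2*s + a3 = 0
a1 = 3;  a2 = 5;  a3 = 2;               % arbitrary illustrative coefficients
A  = [0 1 0; 0 0 1; -a3 -a2 -a1];       % companion-form state matrix
eig_A  = sort(eig(A));                  % eigenvalues of the state matrix
char_r = sort(roots([1 a1 a2 a3]));     % roots of the characteristic equation
disp([eig_A char_r])                    % the two columns should agree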

Conversion of Mixed Algebraic and Differential Equations

The general formulation in eqn. (3.1) has only one derivative term in each equation. This is the standard form. However, a linear system can have more than one derivative per equation. For example, consider the following 2nd order system,
dx1(t)/dt + dx2(t)/dt + 4 x1 = u(t)
dx2(t)/dt - 3 x1(t) - 4 x2 = 0

Putting these expressions into matrix form gives


[ 1  1 ; 0  1 ] d/dt [ x1(t) ; x2(t) ] = [ -4  0 ; 3  4 ] [ x1 ; x2 ] + [ 1 ; 0 ] u(t)

Thus, a more general form for the state equations for linear stationary systems is
E dx(t)/dt = A x(t) + B u(t)        (3.13)

Now to put this into standard form, we simply multiply by E^(-1), giving

dx(t)/dt = (E^(-1)A) x(t) + (E^(-1)B) u(t)        (3.14)

where (E^(-1)A) now becomes the system matrix.

Another common situation that occurs is to have a complex dynamic system that is described by a set of mixed algebraic and differential equations. Such a system is said to have embedded statics since the algebraic equations represent relationships that do not change with time.

In general, systems with embedded statics can be reduced in order (by the number of algebraic equations) via a series of algebraic manipulations. A general procedure for doing this is as follows:

Step 1. Order the equations so that the first n1 equations contain derivative terms and the next n2 equations only have algebraic relationships.

Step 2. Write the original equations using partitioned matrices (with g = Bu), giving

[ E11  0 ; 0  0 ] d/dt [ xd(t) ; xa(t) ] = [ C11  C12 ; C21  C22 ] [ xd(t) ; xa(t) ] + [ gd ; ga ]

where
xd = vector containing elements that have derivative terms
xa = vector containing elements that appear without derivatives

Step 3. Expand the matrix expression in Step 2,
E11 dxd(t)/dt = C11 xd + C12 xa + gd
0 = C21 xd + C22 xa + ga

Step 4. Solve the algebraic expressions for xa in terms of xd and substitute into the differential equation
xa = -C22^(-1) (C21 xd + ga)

and

E11 dxd/dt = C11 xd - C12 C22^(-1) C21 xd - C12 C22^(-1) ga + gd

Step 5. The new system written in standard form is


dxd/dt = A xd + w        (3.15A)

with

A = E11^(-1) (C11 - C12 C22^(-1) C21)        (3.15B)
w = E11^(-1) (gd - C12 C22^(-1) ga)        (3.15C)
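A minimal Matlab sketch of this reduction is given below. The partitioned matrices are arbitrary illustrative values (they are not the system of Example 3.4); the sketch simply exercises eqns. (3.15A)-(3.15C).

% Reduce   E11*dxd/dt = C11*xd + C12*xa + gd,   0 = C21*xd + C22*xa + ga
% to the standard form  dxd/dt = A*xd + w  of eqns. (3.15A)-(3.15C).
% All numerical values are illustrative placeholders.
E11 = [1 0; 0 2];
C11 = [-3 1; 0 -2];    C12 = [1; 0];
C21 = [1 -1];          C22 = -2;
gd  = [1; 0];          ga  = 0.5;

A = E11 \ (C11 - C12*(C22\C21));    % eqn. (3.15B), backslash instead of inv()
w = E11 \ (gd  - C12*(C22\ga));     % eqn. (3.15C)

% The eliminated algebraic variables can be recovered at any time from
%   xa = -C22 \ (C21*xd + ga)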

Example 3.4 gives a simple, but illustrative, demonstration for dealing with systems with embedded statics.

Example 3.4  Treating Mixed Algebraic and Differential Equations

Problem Statement: Put the following system in standard state form:

Problem Solution: Let's write these equations in matrix form and partition them accordingly,

This is now in the form

which has solution

where

and

Linearization of Nonlinear Systems

In general, all systems are nonlinear. However, linear analysis methods can usually be used for systems analysis in some limited range around some reference state. The goal here is to develop a method for linearizing the general nonlinear state variable equations. The variable coefficient terms in the original equations are also often treated in a similar manner as the nonlinear terms. This is done because most of the analytical techniques are applicable only for linear stationary systems. A general nonlinear nonstationary system can be written as

where all the nonlinear and variable coefficient terms are included within the vector function on the right hand side. Expressing the dependent state variables and independent forcing function as deviations from some normal operating state gives

Substituting these expressions into eqn. (3.16) gives eqn. (3.18) as follows:

The nonlinear nonstationary terms can be written in a first-order Taylor series expansion about the operating point. For the ith component of the vector, the first-order approximation gives

The last two summations simply represent matrix multiplication operations. Thus, eqn. (3.19) written in matrix notation gives eqn. (3.20), or

where the coefficient matrices are referred to as Jacobian matrices (evaluated at the reference operating point). These matrices can be represented symbolically as

For example,

written out in detail is given as

Note also that at the reference operating point, one has

Now, substitution of eqns. (3.20) and (3.22) into eqn. (3.18) gives

This latter expression represents a linearization of the original nonlinear nonstationary system. The new system matrix for the linearized system becomes the Jacobian with respect to the state vector, and the new input matrix operator becomes the Jacobian with respect to the input vector (both evaluated at the reference operating point). Note that these matrices are functions of the initial reference operating point. This is true since we have linearized about this condition. Example 3.5 illustrates the use of the above expressions. Note also that the perturbation form given in eqn. (3.23) is often useful even if the original system is linear and stationary, since, by definition, the initial conditions for the new (deviation) state vector are zero.
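As a concrete illustration of the linearization step, a short Matlab sketch is given below. The nonlinear system used is hypothetical (it is not the system of Example 3.5), and Jx and Ju are simply local names for the two Jacobians.

% Linearize dx/dt = f(x,u) about a reference operating point (x0, u0):
%   d(dx)/dt ~ Jx*dx + Ju*du,   Jx = df/dx and Ju = df/du at (x0, u0).
% The 2-state system below is hypothetical, used only for illustration.
f  = @(x,u) [ x(2); -x(1) - 0.5*x(1)^3 + u ];
x0 = [1; 0];   u0 = 1.5;          % an equilibrium point: f(x0,u0) = [0; 0]

h  = 1e-6;                        % finite-difference step
n  = numel(x0);  m = numel(u0);
Jx = zeros(n,n);  Ju = zeros(n,m);
for j = 1:n                       % Jacobian with respect to the states
    e = zeros(n,1);  e(j) = h;
    Jx(:,j) = (f(x0+e,u0) - f(x0,u0))/h;
end
for j = 1:m                       % Jacobian with respect to the inputs
    e = zeros(m,1);  e(j) = h;
    Ju(:,j) = (f(x0,u0+e) - f(x0,u0))/h;
end
% Linearized perturbation model about (x0,u0):  d(dx)/dt = Jx*dx + Ju*du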

Example 3.5  Linearization of a 2nd Order Nonlinear System

Problem Statement: Write a linear approximation for the system described by

Take the reference state as the equilibrium point (i.e. the state at which the time derivatives vanish) with the corresponding constant input.

Problem Solution: Since we must expand the system around the reference state, we first must identify it. In this case, one has

or

and

Rearranging these expressions gives

and

or

Therefore, the two equilibrium states are

and

The Jacobian matrix associated with the state vector can be expressed as follows for this case:

For the first reference state, the linearized system is

and for the second reference state, the linearized system is given as

These linear stationary systems represent linear approximations to the original system (about different initial states).


Discretization of Linear Stationary Systems

As just discussed, the computer implementation of any potential solution scheme is usually a prerequisite for the analysis and simulation of realistic systems. In general there are two good approaches used in computer simulations of high order dynamic systems. These include the discretization of the analytic solution scheme for linear stationary systems and general numerical integration techniques that can be adapted to any system of interest (i.e. nonstationary and nonlinear). The basic goal of the discretization process is to convert the standard continuous linear stationary state equations to discrete form. That is, given

convert the continuous system into a difference equation of the form

where T is the sampling period and the two coefficient matrices in eqn. (3.44) are constant. For convenience, eqn. (3.44) is often written simply as

These expressions represent a discrete recurrence relationship that can be evaluated quite easily on the computer if the two discrete system matrices are known. Our goal here is to derive explicit expressions for these matrices.

To accomplish this transformation, we must make an assumption concerning the behavior of the forcing function u(t). In fact, the only assumption in our development is that u(t) can be written as a piecewise constant function of time. Therefore, u(t) can be taken as u(kT) throughout the kth interval, where it is implied that the forcing function is constant over that interval.

To derive the discrete form, recall that the solution to the continuous linear stationary problem is

Letting t = (k+1)T and t0 = kT, one has

Introducing a new variable

such that

or

with

and

one has

Since

our final result is

By comparison with eqn. (3.45), explicit expressions for the discrete state-transition and input matrices are

and

Another form for the discrete input matrix may be more appropriate if one plans to use the infinite series expansion for generating a numerical result for the matrix exponential. Recall that

If the discrete input matrix is expanded in a similar manner, one has

with

Note that this series is similar to the series for the matrix exponential, and both can be generated within the same routine with little additional effort. Thus, the discrete input matrix can be found via eqns. (3.48) or (3.49).

In either case, once the two discrete system matrices have been generated, the numerical simulation of the system can proceed using the recursion relation given in eqn. (3.45). Note that this method gives an exact solution assuming u(t) is indeed constant within the sampling interval T.
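A minimal Matlab sketch of the discretization and the resulting recursion is shown below. The names F and G are simply local labels for the two discrete matrices of eqn. (3.45), and the continuous-time model is an arbitrary stable placeholder.

% Discretize dx/dt = A*x + B*u (u held constant over each sample period T)
% into x(k+1) = F*x(k) + G*u(k), where F = expm(A*T) and
% G = (integral of expm(A*lam) over one sample)*B = A\(F - I)*B for nonsingular A.
A = [0 1; -4 -2];    B = [0; 1];      % illustrative continuous-time model
T = 0.05;                             % sampling period

F = expm(A*T);                        % discrete state-transition matrix
G = A \ ((F - eye(size(A)))*B);       % discrete input matrix (A nonsingular)

% Recursive time-domain solution, eqn. (3.45), for a unit step input
N = 100;  x = zeros(2, N+1);  u = 1;
for k = 1:N
    x(:,k+1) = F*x(:,k) + G*u;
end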

The algorithm implied by eqns. (3.46) - (3.50) is relatively straightforward to implement on the computer. Matlab's implementation of the discrete solution for LTI systems is roughly similar to this algorithm.

Numerical Integration of the State Equations

An alternate solution technique to using the matrix exponential approach is to simply integrate the defining state equations. Our goal will be to give a simple illustration that introduces the proper terminology and describes the basic concept behind all one-point numerical integration methods. The advantage of numerical integration methods is that they can easily handle time-varying systems and relatively complicated nonlinear problems. As a brief introduction to numerical integration, we will look at the two simplest methods available: the Euler and Modified Euler methods. Although these methods present the basic concepts, they are quite inefficient relative to some of the more advanced techniques. The methods presented here are intended only as an illustration of the basic methodology, and these techniques are not normally implemented as derived.

Euler Method (Scalar Case)

As a starting point, consider the general first-order differential equation

In the Euler method, one assumes that the right hand side of eqn. (3.51) is constant over some small time interval Δt. With this condition, integrating eqn. (3.51) gives

or

Thus, the problem has been converted to a recurrence relation for x_k+1 in terms of x_k and the function evaluated with the solution at time t_k. This is a relatively crude approximation but, if Δt is chosen sufficiently small, this simple scheme can give reasonable results.

Modified Euler Method (Scalar Case)

A somewhat better estimate for integrating the defining equation over some specified Δt is to assume that f(x,u,t) varies linearly over the given step. This gives the following recurrence relation,

The problem with eqn. (3.53) is that we need x_k+1 to calculate f_k+1. A possible solution is to use the Euler method to get a first guess for x_k+1 and f_k+1, and then use this guess to get a better estimate of x_k+1. This crude Predictor-Corrector scheme can be summarized as follows:

Predictor Step

Corrector Step

where

with x'k+1 representing a first estimate (prediction) of the value for the dependent variable at tk+1 and xk+1 being the best estimate (correction) for this time point.
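A short Matlab sketch of both schemes applied to a simple scalar test equation is given below (the test equation and step size are illustrative choices, not taken from the notes).

% Euler and Modified Euler (predictor-corrector) for dx/dt = f(x,u,t)
f  = @(x,u,t) -2*x + u;            % illustrative scalar test equation
u  = 1;  dt = 0.05;  N = 100;      % step input, step size, number of steps
xe = zeros(1,N+1);  xm = zeros(1,N+1);   % Euler / Modified Euler histories

for k = 1:N
    tk = (k-1)*dt;

    % Simple Euler:  x(k+1) = x(k) + f(x(k),u,t(k))*dt
    xe(k+1) = xe(k) + f(xe(k),u,tk)*dt;

    % Modified Euler: Euler predictor, then correct with the average slope
    fk      = f(xm(k),u,tk);
    xp      = xm(k) + fk*dt;                          % predictor (first estimate)
    xm(k+1) = xm(k) + 0.5*(fk + f(xp,u,tk+dt))*dt;    % corrector
end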

The modified Euler method is a simple, but illustrative, example of a predictor-corrector numerical integration scheme. Other refinements such as adaptive control/updating of the integration step, Δt, can be done by addressing the error in the predictor and corrector steps. In most current algorithms, one only inputs a desired tolerance level and a variable integration step is computed that will satisfy the specified error criterion. The above schemes for the scalar problem can be easily generalized for solution of large systems of equations. In this case we desire the solution of a general system of first order equations,

By analogy to the scalar problem, one has the following expressions:

Euler Method (Matrix Case)

where

Modified Euler Method (Matrix Case)

Predictor Step

Corrector Step

where

The most important part of the above simple schemes is the choice of an appropriate time interval, Δt. For the solution of several state equations, the choice of Δt becomes even more important and difficult, since some state variables may vary rapidly while others are slowly varying functions of time. The more advanced schemes usually address this concern by automatically adjusting Δt as needed to meet user specified accuracy requirements. Thus,

"Adaptive Predictor-Corrector" methods are the norm for performing realistic time domain simulations. Example 3.6 An Example Illustrating Various Solution Schemes A general 2nd order linear stationary system can be given as

Using the general recipe for converting nth order systems into state form, one has

where

Also,

where x1(0) and x2(0) represent the initial conditions for the system. As a specific example, consider the simple series RLC circuit shown in Fig. 3.1.

From Kirchhoff's voltage law, we have

where ea(t) is the applied voltage and the voltage drops across the individual components are given by

where i(t) is the current in the loop. Substituting these relationships into the balance equation gives

This combined differential and integral balance equation is put into standard differential form by differentiating each term, or

Comparing this system to the general representation for a 2nd order LTI system, we have

Thus, the standard state space representation for this specific system is

where

and

and u(t) is the input voltage, ea(t), and y(t) is the resultant current, i(t), versus time. For specificity in the numerical solutions, let's assume the following:

1. For the reference case, R = 100 ohms, L = 0.1 henry, and C = 0.001 farad.

2. Let ea(t) be a step input of 10 volts. Thus, u(t) = 10 volts for all t > 0.
3. The sensitivity of i(t) to changes in these parameters can also be addressed.
4. To illustrate the various solution techniques, let's perform a time domain simulation for this system using three different methods:
   A. Continuous analytical solution using the closed form representation of the matrix exponential
   B. Discretization of the LTI system
   C. Numerical integration of the state equations

All three solution schemes were implemented in Matlab within files LTIDEMO1.M and SSEQN1.M. A brief overview of each method is given below.

Solution Method A

For a step input, the analytical solution for a LTI system can be written as

This expression can be simplified as follows:

or

where, in the last manipulation, we used the fact that A^(-1) and the matrix exponential commute.

To implement this last expression we used Sylvester's Theorem to evaluate the matrix exponential. This required that the eigenvalues be computed from

All other matrix manipulations were completed directly within Matlab. The time domain solutions for the nominal set of parameters given above and for the case where R = 10 ohms are plotted in Figs. 3.2 and 3.3, respectively. Note that R = 100 ohms gives an overdamped response and an underdamped solution is obtained for R = 10 ohms. This behavior can be explained simply by computing the eigenvalues for the two cases.
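The short Matlab check below confirms this (a sketch, assuming the companion-form state matrix A = [0 1; -1/(L*C) -R/L] that follows from the differentiated loop equation):

% Eigenvalues of the RLC state matrix for the two resistance values
L = 0.1;  C = 0.001;
for R = [100 10]
    A = [0 1; -1/(L*C) -R/L];               % companion-form state matrix
    fprintf('R = %3d ohms: eigenvalues = %s\n', R, mat2str(eig(A).', 4));
end
% R = 100 gives two distinct negative real eigenvalues (overdamped response);
% R = 10 gives a complex conjugate pair (underdamped response).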

Solution Method B

The discrete solution of the LTI state equations is incorporated as part of Matlab's Control Toolbox. This toolbox has a whole series of functions for the analysis of linear stationary systems using the same terminology developed in these notes. In Matlab 5.0 and higher a new object-oriented approach to working with systems was implemented. In particular, Matlab defines an LTI state-space object via knowledge of the matrices from the standard state-space representation. For this situation the ss command is used as follows (see Matlab's help file for function ss):

sys = ss(A,B,C,D)

This creates an LTI object named sys that can be manipulated within several other Matlab functions. For the case of a unit step input, for example, one has

[Y,T,X] = step(sys)     or     [Y,T,X] = step(sys,T)

where Y and X contain the output and state vector time domain responses in a 3-D array format. The number of rows in Y and X is determined by the number of time points in the equally-spaced time vector T. The Y array has as many columns as outputs and X has as many columns as states (i.e. the order of the system). Finally, the number of pages (the length of the 3rd dimension) is equal to the number of inputs to the system. Thus, there is a 2-D matrix for each system input. In the first form given above, the sample period and the number of time points in the T vector are determined automatically by Matlab. In the second form, the T vector is specified by the user and passed into Matlab's step function. Note that Matlab has similar functions, impulse and lsim, respectively, if the time domain input function is a unit impulse or a general input vector.

For the specific RLC circuit of interest in this example, the appropriate calling sequence is

sys = ss(A,10*B,C,D);
[Y,T,X] = step(sys,T);

since there is only a single input quantity and its strength is 10 units (and the default is for a unit step input). The Matlab m-file that implements the step function actually calls two other m-files. The first one, c2d, converts the continuous state space system into a discrete state space representation by computing the two discrete system matrices given in eqns. (3.47) and (3.48) using Matlab's built-in matrix exponential routine, expm, with a sampling time determined by the constant interval width in vector T. It then calls ltitr (Linear Time Invariant Time Response), which simply implements the recursive expression given in eqn. (3.45).

As expected, the solutions using Method B are identical with those from Method A. The student is referred to the second part of LTIDEMO1.M for the details of the Matlab implementation, and to Figs. 3.2 and 3.3 for displays of the output responses using Method B.

Solution Method C

The last method to be highlighted in this example is the numerical integration technique implemented with Matlab's ode23 routine. This routine applies an adaptive step control algorithm by obtaining error estimates using two Runge-Kutta (RK) predictions of different order. The ode23 function uses a 2nd and 3rd order RK set for medium accuracy. An overview of how to use the ode23 routine in Matlab can be obtained by simply typing help ode23 at the Matlab prompt (this procedure is the best way to learn how to use any Matlab function). In the current case, the call to ode23 should resemble the following,

[T,X] = ode23('sseqn1',[to,tf],xo,options);

where to and tf are the initial and final integration times, xo represents the initial condition of the state vector, and options is an ODE options structure within Matlab that allows the user to control several options within the ODE numerical integration routine (see function odeset for the various options that can be adjusted). The default options are satisfactory for many cases, except for possible control of the user-specified error tolerance. The outputs from ode23 include a time vector containing the discrete time points where the state is evaluated, and the state vector in a matrix whose columns are the various elements of the state versus time. Note that the time vector is determined automatically within ode23 and it is usually not evenly spaced because of the adaptive step control algorithm that is used.

The function name, sseqn1, points to the function routine that evaluates the right hand side of the state balance equation at each time point. This routine is supplied by the user, and it can be specialized for any particular case of interest. For the current RLC series circuit simulation with a step input, this function is quite straightforward as seen in SSEQN1.M. The results from Method C, as expected, are identical to those from the other techniques. This can be seen by comparing the results in Figs. 3.2 and 3.3.

In the current example, Method B, the discrete solution method, is clearly the easiest to use since the problem fits exactly into the LTI class of problems, and Matlab's ss, step, impulse and lsim routines were designed to easily treat LTI systems. For nonlinear or variable coefficient systems, the numerical integration technique with the ode23 routine would be used (with some modification to SSEQN1.M).
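For reference, a plausible minimal version of the user-supplied derivative routine and its ode23 call is sketched below. The state matrix is the companion form assumed earlier; the input coupling vector b is an illustrative assumption, so the actual SSEQN1.M may differ in detail.

% Sketch of a user-supplied derivative routine in the spirit of SSEQN1.M.
% The input vector b below is an illustrative assumption.
function xdot = sseqn1(t, x)
    R = 100;  L = 0.1;  C = 0.001;          % circuit parameters
    A = [0 1; -1/(L*C) -R/L];               % companion-form state matrix
    b = [0; 1/(L*C)];                       % illustrative input coupling
    u = 10;                                 % 10 volt step input
    xdot = A*x + b*u;
end

% Typical calling sequence (from a separate script):
%   options = odeset('RelTol', 1e-6);
%   [T, X]  = ode23(@sseqn1, [0, 0.5], [0; 0], options);
%   plot(T, X(:,1))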
