
ANALYSIS AND CONTROL OF COUPLED SLOW AND FAST SYSTEMS: A REVIEW

Zvi Artstein¹

¹ The Weizmann Institute of Science, Rehovot, Israel, zvi.artstein@weizmann.ac.il
Incumbent of the Hettie H. Heineman Professorial Chair in Mathematics. Research supported by the Israel Science Foundation.

Abstract: We review the modeling and analysis of coupled fast and slow dynamical systems, including dynamics depicted by differential equations and by control systems. We review, in particular, a novel approach aiming at relaxing some classical assumptions. The techniques employed by the new approach utilize the evolution of probability distributions, each forming an invariant measure of the dynamics. Some examples and relevant references are displayed.

Keywords: Singular perturbations, optimal control problems, Young measures, invariant measures.

1. INTRODUCTION

When a small parameter appears in an ordinary differential equation, or in a control system, it may give rise to fast dynamics interwoven with a slow one. Of interest then are the limit characteristics of the system as the small parameter tends to zero. Several mathematical models have been offered during the many years of research in the subject, and the resulting equations have been extensively examined both on the abstract theoretical level and in conjunction with a large number of important applications, including concrete applications in the Physical Sciences and in Engineering. In this review we concentrate on the modeling issue and the underlying structural assumptions that make the different models work, and display the main claim in each model. We display some classical models, and some new developments by the author and collaborators, including some work yet in progress.

This introduction is followed by six sections, depicting progress in the modeling and analysis within ordinary differential equations and within control systems. The first two sections recall the Tikhonov model for singularly perturbed ordinary differential equations and the corresponding Kokotovic model for control systems. While these models cover indeed a very large number of important applications, and therefore have been used in a tremendous number of applications, they employ two underlying assumptions that can, to some extent, be dispensed with. One is that the fast dynamics converges, on the fast time scale, to an equilibrium. We show in Sections 4 and 5 how to relax this condition within ordinary differential equations and for control systems. The second assumption that we want to relax is the existence of slow-fast state variables that correspond to slow and fast equations. In Section 6 we report on some recent work that analyzes singularly perturbed systems without a split into slow and fast state variables; in the closing section we report on research in progress that carries out a similar program within control systems.

2. ODES: THE TIKHONOV MODEL

The general dynamics we examine evolves in the Euclidean space R^n; we denote the state in R^n by x. The dynamics occurs on a time interval, say [0, T]. The mathematical equation that governs the evolution should include a small parameter, say ε, that appears as a singular perturbation and that gives rise to fast dynamics. Recall that we are interested in the description of the limit of the dynamics as ε → 0.

The classical model of handling singularly perturbed ordinary differential equations was developed by Andrei Nikolaevich Tikhonov in the 1940s; the contributions of Tikhonov and his school were followed by many other contributions since then. The model assumes that the state x is split into two subspaces, say x = (y, z) with y ∈ R^{n_1}, z ∈ R^{n_2}, such that the small parameter affects the derivative of z only. For ε > 0 fixed, the ordinary differential equation is, typically, given as follows.

System 2.1. Consider
\[
\frac{dy}{dt} = f(y,z), \qquad y(0) = y_0, \qquad\qquad
\varepsilon\,\frac{dz}{dt} = g(y,z), \qquad z(0) = z_0.
\tag{2.1}
\]
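To make the two time scales in (2.1) concrete, here is a minimal numerical sketch (not taken from the paper; the choices f(y, z) = −y + z and g(y, z) = cos(y) − z are hypothetical). For these data the fast equation has, for each frozen y, the asymptotically stable equilibrium z̄(y) = cos(y), so the behavior described below (a short boundary layer followed by motion near the manifold z = z̄(y), with y_ε close to the reduced solution) can be observed directly.

```python
# A minimal sketch (hypothetical data) of the two time scales in System 2.1:
# f(y, z) = -y + z and g(y, z) = cos(y) - z, so the fast equilibrium is
# z_bar(y) = cos(y) and the reduced equation is dy/dt = -y + cos(y).
import numpy as np
from scipy.integrate import solve_ivp

def full_rhs(t, x, eps):
    y, z = x
    return [-y + z, (np.cos(y) - z) / eps]     # dy/dt = f,  dz/dt = g / eps

def reduced_rhs(t, y):
    return -y + np.cos(y)                       # dy/dt = f(y, z_bar(y))

t_eval = np.linspace(0.0, 3.0, 601)
y0, z0 = 1.0, -2.0                              # z0 far from z_bar(y0): boundary layer
reduced = solve_ivp(reduced_rhs, (0, 3), [y0], t_eval=t_eval, rtol=1e-8)

for eps in (1e-1, 1e-2, 1e-3):
    full = solve_ivp(full_rhs, (0, 3), [y0, z0], args=(eps,), method="Radau",
                     t_eval=t_eval, rtol=1e-8, atol=1e-10)
    y_err = np.max(np.abs(full.y[0] - reduced.y[0]))
    # distance of the fast variable from the manifold z = cos(y), past the layer
    late = t_eval > 0.5
    off_manifold = np.max(np.abs(full.y[1][late] - np.cos(full.y[0][late])))
    print(f"eps={eps:7.0e}  max|y_eps - y_0| = {y_err:.2e}   "
          f"max|z_eps - cos(y_eps)| (t>0.5) = {off_manifold:.2e}")
```

Both printed quantities shrink as ε decreases, illustrating the slaving phenomenon formalized in Observation 2.4 below.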


The components y and z of x are then referred to as the slow and, respectively, the fast states, for obvious reasons. Some technical conditions need to be imposed on the right hand side of (2.1) to make it amenable to analysis. We do not elaborate on such conditions in this review and assume, say, that f(·,·) and g(·,·) are Lipschitz continuous functions. The system need not be time-invariant, namely, the functions on the right hand side of (2.1) may depend on the time variable t as well; then, as is standard in the study of ordinary differential equations, the time variable t can be incorporated into the slow state via the agreement that dt/dt = 1. The solution to (2.1) depends on the parameter ε > 0; we denote it by
\[
(y_\varepsilon(\cdot),\, z_\varepsilon(\cdot)).
\tag{2.2}
\]
We are interested in the limit behavior of the trajectories (2.2) as ε → 0. Note that unless some assumptions are placed on the behavior of the solutions as ε → 0, the fast variable z_ε(t) may possess a slope tending to infinity, thus with no apparent limit. To this end the Tikhonov approach places a crucial assumption on the system (2.1), as follows.

Assumption 2.2. For every y there is a point z̄(y) that is an asymptotically stable equilibrium of the differential equation
\[
\frac{dz}{ds} = g(y, z),
\tag{2.3}
\]
interpreted as a differential equation for z when y is held fixed (note that we use a variable s, not identical with t, to serve in (2.3)). In particular, solutions z(·) of (2.3) converge to z̄(y), uniformly on bounded sets in R^{n_2}. Furthermore, the mapping z̄(·): R^{n_1} → R^{n_2} is Lipschitz. (For many applications it is enough to assume the asymptotic stability in a neighborhood of z̄(y).)

Under Assumption 2.2 we can display a system whose solutions are the limits, as ε → 0, in a sense described below, of the solutions (2.2).

Limit System 2.3. Consider the algebraic-differential equation
\[
\frac{dy}{dt} = f(y,z), \qquad 0 = g(y,z), \qquad y(0) = y_0, \quad z(0) = \bar z(y_0).
\tag{2.4}
\]
We need to explain what a solution of (2.4) means: it has the form (y_0(t), z_0(t)) where y_0(t) is obtained by solving
\[
\frac{dy}{dt} = f(y, \bar z(y)), \qquad y(0) = y_0,
\tag{2.5}
\]
and then z_0(t) = z̄(y_0(t)). With Assumption 2.2 the system (2.4) indeed depicts the limit behavior of solutions of (2.1), according to the following description (we, intentionally, use the title Observation for the next result rather than, say, Theorem, since we prefer to employ an intuitive language).

Observation 2.4. Under Assumption 2.2, for small ε the manifold {(y, z̄(y))} in R^n = R^{n_1} × R^{n_2} slaves the solutions (y_ε(·), z_ε(·)) of (2.1) in the following sense. The fast solution z_ε(t) converges on a short time interval to a neighborhood of the point z̄(y_0); on finite time intervals past the short initial time interval, (y_ε(t), z_ε(t)) evolves closely to the solution (y_0(t), z_0(t)) of (2.4) (the initial fast motion is called a boundary layer).

Figure 1 depicts the elements participating in the limit solution according to the previous observation.

Figure 1

There are excellent texts and textbooks that state the previous observation in a fully rigorous manner and verify it. The setting and the theorem are typically named after A. Tikhonov, whose initial contributions, see Tikhonov [53], [54], were followed by an enormous number of developments. Early contributions have also been made by N. Levinson, see [42]. A sample of monographs where the reader can find, among other related issues, a complete account of the Tikhonov model along with many examples and applications is: O'Malley [46], Smith [51], Tikhonov et al. [55], Verhulst [59], Wasow [63]. See also Volosov [61], Vasil'eva [58], and there are many more, among them many recent, contributions utilizing and refining this approach. It should be emphasized that although Assumption 2.2 imposes a strong restriction on the coupled dynamics, many important applications are covered by this setting, as can easily be found in the vast literature on singular perturbations. Furthermore, the setting and the observation have been extended far beyond ordinary differential equations on a finite time interval, including analysis on infinite time, extensions to other types of equations, etc.

3. CONTROL SYSTEMS: THE KOKOTOVIC APPROACH

The Tikhonov model described in the previous section has been modified within a control framework, in the search for control design and in solving optimal control problems, when slow and fast dynamics are coupled. The prime contributions have been offered by Petar Kokotovic and his school.


The basic model assumes, again, that the state x is split into two subspaces, say x = (y, z), such that the small parameter affects the derivative of z only. In this section we concentrate on an optimal control issue, which for ε > 0 fixed is given as follows.

Optimal Control Problem 3.1. Consider
\[
\text{minimize } \int_0^T c(y(t), z(t), u(t))\,dt
\quad\text{subject to}\quad
\frac{dy}{dt} = f(y,z,u), \quad \varepsilon\frac{dz}{dt} = g(y,z,u), \quad y(0) = y_0,\; z(0) = z_0.
\tag{3.1}
\]

The components y and z of x are as in the Tikhonov model and u is the control variable that can be chosen in a constraint set U, say U ⊂ R^k. (Considerations similar to what follows are valid for, say, design of stability, or robustness, without considerations of the cost.) Some technical conditions need to be imposed on the right hand side of the system (3.1) to make the dynamics well defined when a control function, say a measurable mapping u(t): [0, T] → U, is plugged in. Again, we do not elaborate on such conditions in this review and assume, say, that f(·,·,·), g(·,·,·) and the cost function c(·,·,·) are Lipschitz continuous functions in y and z and continuous in u. As in the previous section, dependence on time is allowed, but suppressed in this review. A solution of (3.1) is a control function, u_ε(t), that may be given in a feedback form u_ε(y, z, t), that gives rise to the state evolution; the combined state-control trajectory has the form
\[
(y_\varepsilon(\cdot),\, z_\varepsilon(\cdot),\, u_\varepsilon(\cdot)).
\tag{3.2}
\]
We are interested in the limit behavior, as ε → 0, of the optimal control and the induced dynamics.

The description of the Kokotovic approach is done in reverse order of the presentation of the previous section; here we first display the limit system, whose solution will be the limit, in a sense described below, of the optimal control problem; only then do we reveal the underlying assumptions that allow the method to work.

Limit Optimal Control Problem 3.2. Consider the algebraic-differential control equation
\[
\text{minimize } \int_0^T c(y(t), z(t), u(t))\,dt
\quad\text{subject to}\quad
\frac{dy}{dt} = f(y,z,u), \quad 0 = g(y,z,u), \quad y(0) = y_0.
\tag{3.3}
\]

We describe now what it means to solve (3.3). First, a control function u_0(·) ought to be found. With a given u_0(t) the differential equations in (3.3) have the structure of the Tikhonov limit system (2.4) (now with explicit dependence on time, which is allowed in (2.4) as explained in the previous section), with z̄(y, u_0(t)) the solution of the algebraic equation in (3.3). Thus, for a given u_0(·) the trajectory of (3.3) is of the form (y_0(t), z_0(t), u_0(t)), determined by the equation
\[
\frac{dy}{dt} = f(y, \bar z(y, u_0(t)), u_0(t)), \qquad y(0) = y_0,
\tag{3.4}
\]
and then z_0(t) = z̄(y_0(t), u_0(t)). The optimization problem is then solved if the resulting cost
\[
\int_0^T c\bigl(y_0(t), \bar z(y_0(t), u_0(t)), u_0(t)\bigr)\,dt
\tag{3.5}
\]
is minimal. But this is not enough, since the z-initial condition may not be on the manifold determined by the algebraic equation in (3.3), namely, z_0 may not be equal to z̄(y_0, u_0(0)). Hence, the control u_0(t) should be "embedded" in a control, say a feedback control, u(y, z, t), satisfying u_0(t) = u(y_0(t), z_0(t), t), and such that when u(y, z, t) is plugged into the differential equations in (3.1) the resulting ordinary differential equation satisfies the Tikhonov Assumption 2.2 with z̄(y, t) satisfying z̄(y_0(t), t) = z_0(t). To sum up: to solve (3.3) in the Kokotovic sense means to come up with a control function u_0(t), embedded in a control feedback u(y, z, t) as described, and to do that in an optimal way, namely, achieving the least possible cost among such controls.

Observation 3.3. When the feedback u(y, z, t) is plugged into the terms appearing in Optimal Control Problem 3.1, the resulting dynamics of (3.1) is slaved by the solution (y_0(t), z_0(t), u_0(t)) of (3.3); furthermore, the cost (which depends on ε) of invoking the feedback converges, as ε → 0, to the optimal cost of Limit Optimal Control Problem 3.2.

The reasoning behind the previous observation is identical to the arguments relating to Observation 2.4; indeed, the only additional item is the convergence of the cost, which can easily be justified. We provide references that rigorously verify the observation momentarily, but first display the assumption. Note that the previous observation guarantees only that the cost of applying the limit control strategy to the original system (3.1) converges to the optimal cost of (3.3). It does not guarantee that one cannot do better as ε → 0 by utilizing another strategy. To this end we assume the following.

Assumption 3.4. The system (3.1) has the property that the ε-dependent optimal cost converges, as ε → 0, to the optimal cost of (3.3).

Only under Assumption 3.4 does it make sense to follow the Kokotovic approach when addressing optimal control problems. It is true, however, that in the typical and important applications that arise in real-life Engineering, the assumption is satisfied. Furthermore, the literature offers a variety of technical mathematical conditions that, if satisfied by (3.1), guarantee that Assumption 3.4 is valid.
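As a small numerical companion (hypothetical data, not the construction of any of the cited references), the following sketch illustrates the mechanism behind the cost convergence in Observation 3.3. Take the quadratic cost c = y² + z² + u², dynamics f = z and g = −z + u, and a feedback designed on the limit system; here u = −y/√2, the infinite-horizon LQR gain of the reduced problem, is used only as an example of an embedded control. Applying this fixed feedback to the full system for decreasing ε, its cost converges to the cost that the same feedback yields on the limit system.

```python
# Cost convergence for a feedback designed on the limit system (hypothetical
# data): c = y^2 + z^2 + u^2, f = z, g = -z + u, feedback u = -y / sqrt(2).
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0 / np.sqrt(2.0)          # reduced-model LQR gain for this example
T, y0, z0 = 1.0, 1.0, 0.0

def full_rhs(t, x, eps):
    y, z, J = x                  # J is the running cost
    u = -k * y
    return [z, (-z + u) / eps, y**2 + z**2 + u**2]

def reduced_rhs(t, x):
    y, J = x                     # on the limit system 0 = -z + u forces z = u
    u = -k * y
    return [u, y**2 + 2.0 * u**2]

J_limit = solve_ivp(reduced_rhs, (0, T), [y0, 0.0], rtol=1e-10).y[1][-1]
print(f"cost of the feedback on the limit system: {J_limit:.6f}")
for eps in (1e-1, 1e-2, 1e-3):
    J_eps = solve_ivp(full_rhs, (0, T), [y0, z0, 0.0], args=(eps,),
                      method="Radau", rtol=1e-9, atol=1e-11).y[2][-1]
    print(f"eps={eps:7.0e}  cost on the full system: {J_eps:.6f}")
```

Whether this limiting cost is actually optimal for the ε-problems is exactly the content of Assumption 3.4.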


This fully justifies the vast attention, both to the theory and to the applications, that the Kokotovic approach has ignited. Furthermore, the method has been used in design issues, say, establishing stability of the system (3.1) (without the cost element) for small ε via establishing the stability of the limit system, and verifying that it extends to the original system. We do not elaborate on such possibilities in this review. An account of the theory with many applications can be found in Kokotovic et al. [37]. A collection of papers on theory and on applications is Kokotovic et al. [36]. A review on the subject is Kokotovic et al. [38]; other review-type papers are O'Malley [45] and Kokotovic [35]. See also the more recent surveys Naidu [43] and Naidu and Calise [44]. There is a vast literature on the theory and its applications that can be found with standard search engines.

4. ODES WITHOUT THE CONVERGENCE ASSUMPTION

In this section we introduce a method that addresses System 2.1 without requiring, however, Assumption 2.2. It is an approach suggested by the present author and collaborators for general systems; particular cases were examined much earlier (references are given below). Recall System 2.1 and the finite time interval [0, T] on which we look for the limit characteristics, as ε → 0, of the solution (2.2). The starting point for the developments is that solutions of (2.3), when y is fixed, may not converge as s → ∞ to an equilibrium, as needed in the Tikhonov model. Here is the (rather weak) assumption we need.

Assumption 4.1. The family of solutions (y_ε(·), z_ε(·)) to (2.1) is uniformly bounded on [0, T].

Following the exposition in Section 2 we present now a limit system, whose solutions depict the limit behavior of solutions of (2.1) as ε → 0. This limit system will not be an ordinary differential equation. Rather, it will employ invariant measures of (2.3). Recall the notion of an invariant measure of a differential equation, say dx/dt = f(x) with x ∈ R^n and f(·) Lipschitz in x. It is a probability measure, say μ, on the space R^n such that for each open set B in R^n the measure μ(B) is preserved under the differential equation, namely, μ(B) is equal to μ(Φ(t, B)), where Φ(t, x) is the solution of the equation at time t that starts with x at time 0.

Limit System 4.2. Consider the differential-probability measure system
\[
\frac{dy}{dt} = \int_{R^{n_2}} f(y, z)\,\mu(dz), \qquad \mu(t) \in M(y), \qquad y(0) = y_0,
\tag{4.1}
\]
where M(y) for fixed y is the family of invariant measures of (2.3) with y fixed. Hence, the solution of (4.1) is of the form (y_0(t), μ_0(t)), where y_0(t) is obtained by solving (4.1) with μ_0(t) ∈ M(y_0(t)).

Notice that Limit System 2.3 is a particular case of Limit System 4.2. In fact, the stationary point z̄(y) reflects an invariant measure of (2.3) with y fixed, namely, the probability measure supported on {z̄(y)} is the invariant measure. Under Assumption 2.2 all the trajectories of (2.3) converge to this particular invariant measure. The collection M(y) allows more flexibility in the limit dynamics; a simple example, for instance, is a limit cycle, which gives rise to an invariant measure in the obvious way; the Lebesgue measure is an invariant measure of the dynamics generated by irrationally dependent shifts on a torus, etc.

The Observation below establishes that solutions of Limit System 4.2 capture the limit behavior of solutions of System 2.1, this under Assumption 4.1. We need to understand, however, in what sense a trajectory (y_0(t), μ_0(t)) is the limit of solutions of the form (2.2). To this end we recruit Young measures. For background on Young measures and their roles in the description of limits of sequences of functions see Balder [23], Pedregal [47], Valadier [56]. (The notion was introduced by L.C. Young, see [64] and references therein; it was further used, under the name of relaxed control, by J. Warga, see [62]; in the context of Partial Differential Equations it was employed by L. Tartar, see [52]; there are many other applications of the notion.) Here we proceed telegraphically: A Young measure is a probability measure-valued map; in our case it is μ_0(t). We identify the Young measure with the measure on the product space, in our case [0, T] × R^{n_2}, obtained by integrating the Young measure. We identify a point-valued map, say z(t), with the Young measure whose values are Dirac measures supported on {z(t)}. The distance between Young measures, which in particular may induce convergence of z_ε(·) to μ_0, is taken to be the weak convergence of measures when the Young measures are interpreted as measures on the product space. Figure 2 portrays this limit notion.

Figure 2

We are ready to state the main result within this approach. In what follows we consider convergence of trajectories of the form (y_ε(·), z_ε(·)); by the convergence we mean uniform convergence of the y-component and convergence in the sense of Young measures in the z-component.


Observation 4.3. Under Assumption 4.1, on a short time interval past the initial time the fast trajectory converges to the vicinity of the support of an invariant measure in M(y_0); furthermore, for small ε the solutions (y_ε(·), z_ε(·)) on [0, T] of (2.1) are close, in the Young measures sense, to a solution of Limit System 4.2; in particular, for every sequence ε_j → 0 there exists a subsequence ε_i such that the corresponding trajectories converge to a solution of (4.1); if for every y the set M(y) contains only one invariant measure, then (y_ε(·), z_ε(·)) converge, as ε → 0, to the unique solution of (4.1).

Figure 3 depicts the elements participating in the limit solution according to the previous observation.

Figure 3

The possible abundance of invariant measures in each M(y) may make the previous considerations not telling much. Yet, a number of applications have shown the relevance of the method. Particular examples in classical texts, e.g., in Bogoliubov and Mitropolsky [24] and Pontryagin and Rodygin [48], already demonstrated the feasibility of the method. Another set of examples that adheres to this setting is, say, Hamiltonian systems written in action-angle coordinates, within the literature on averaging with fast angle coordinates. See Arnold [2]. The latter are usually presented as a regular perturbation of an ordinary differential equation, e.g.,
\[
\frac{dI}{ds} = \varepsilon f(I, \varphi), \qquad \frac{d\varphi}{ds} = g(I), \qquad I(0) = I_0, \quad \varphi(0) = \varphi_0,
\tag{4.2}
\]
with φ the shift on an n_2-dimensional torus. The invariant measures of the φ-dynamics are supported on a torus whose dimensionality depends on g(I); the evolution of the dynamics is induced by averaging. A change of variables s = t/ε would put (4.2) in the form (2.1). The additional structure of (4.2) enables a detailed analysis of the dynamics, for instance, estimates of crossings of resonance. The framework presented in this section applies to more general systems, allowing more general convergence; detailed analysis then depends on the specific system.

The abstract mathematical considerations are based on the author's earlier work, see [3] and references therein, and were displayed within the ordinary differential equations framework in Artstein and Vigodner [22]. For further developments and applications see [4], [5], [10] and Artstein and Slemrod [20], [21]. For instance, an example, reflecting the singular limit of an elastic structure in a rapidly flowing nearly inviscid fluid, was examined in detail in [20]; it is of the form
\[
\frac{dx_1}{dt} = x_2, \qquad
\frac{dx_2}{dt} = -\gamma_1 x_1 - \gamma_2 x_2 - \gamma_3 \xi_1 + \gamma_4 F(\xi_2),
\qquad
\varepsilon\frac{d\xi_1}{dt} = \xi_2, \qquad
\varepsilon\frac{d\xi_2}{dt} = -\gamma_1 \xi_1 + \gamma_2 F(\xi_2) - \gamma_3 x_1 - \gamma_4 x_2,
\tag{4.3}
\]
where γ_1, …, γ_4 are constant coefficients and the forcing term F(τ) takes the form
\[
F(\tau) = \tau^3 + \tau^5 - \tau^7
\tag{4.4}
\]
(see [20] for the precise specification of (4.3)-(4.4)).

The method can also be used in order to enhance computational considerations since, once the invariant measures are computed, the progress of the slow dynamics is induced via averaging; see Artstein, Linshiz and Titi [19] for a numerical examination of (4.3)-(4.4). Contributions that use related structures can be found in E and Engquist [27] and Vanden-Eijnden [57]. See also the review Givon, Kupferman and Stuart [33].

5. OPTIMAL CONTROL WITHOUT THE CONVERGENCE ASSUMPTION

In this section we introduce a limit system to (3.1) that overcomes the limitations imposed by Assumption 3.4. We address, however, the structure (3.1), and start with the display of the, rather weak, assumption needed here.

Assumption 5.1. There exists a family of trajectories (y_ε(·), z_ε(·), u_ε(·)) of (3.1) that are near optimal (i.e., the associated costs get closer, as ε → 0, to the optimal costs) and that are uniformly bounded on [0, T].

The approach we follow is similar to the one in the previous section; a difference is that, to begin with, it is not clear what would play the role of an invariant measure of the differential equation
\[
\frac{dz}{ds} = g(y, z, u),
\tag{5.1}
\]
namely, what is the control analog of an invariant measure. To this end we resort to the characterization of invariant measures as limit occupational measures (limits of empirical measures) of solutions of (2.3) (it goes back to Kryloff and Bogoliuboff [39]) and apply the definition to (5.1). Thus, given a trajectory (z(·), u(·)) of (5.1) with y fixed, defined on [0, ∞), let μ_τ be its occupational measure on the interval [0, τ] (i.e., the measure on R^{n_2} × U where μ_τ(E) is the proportion of time within [0, τ] that the trajectory spends in E). A limit occupational measure is a limit point, as τ → ∞, in the weak topology of measures, of the occupational measures μ_τ.
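To make the notion concrete, the following sketch (hypothetical dynamics; the y and u arguments of (5.1) are suppressed, so it is really the uncontrolled setting of Section 4) approximates occupational measures by time averages. The planar fast system below has the unit circle as an attracting limit cycle traversed at unit speed, hence its unique limit occupational measure is the uniform probability measure on that circle; integrals of test functions against μ_τ are time averages over [0, τ], and they stabilize at the corresponding circle averages (1/2 for z_1² and 0 for z_1).

```python
# Occupational measures approximated by time averages (hypothetical fast
# dynamics; the y and u arguments of (5.1) are suppressed for simplicity).
# The system has the unit circle as an attracting limit cycle traversed at
# unit speed, so its limit occupational measure is uniform on the circle.
import numpy as np
from scipy.integrate import solve_ivp

def fast_rhs(s, z):
    z1, z2 = z
    r2 = z1**2 + z2**2
    return [z2 + z1 * (1.0 - r2), -z1 + z2 * (1.0 - r2)]

z_init = [0.1, 0.0]                       # start well inside the limit cycle
for tau in (10.0, 100.0, 1000.0):
    s_eval = np.linspace(0.0, tau, int(50 * tau))
    sol = solve_ivp(fast_rhs, (0.0, tau), z_init, t_eval=s_eval, rtol=1e-8)
    z1 = sol.y[0]
    # integrals against mu_tau are time averages over the uniform grid
    avg_z1sq = np.mean(z1**2)
    avg_z1 = np.mean(z1)
    print(f"tau={tau:7.1f}  <z1^2> = {avg_z1sq:.4f} (-> 0.5)   "
          f"<z1> = {avg_z1:+.4f} (-> 0.0)")
```

For the controlled equation (5.1) one records the pairs (z(s), u(s)) instead, obtaining measures on R^{n_2} × U as in the definition above.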


Now, let b be the bound assumed in Assumption 5.1 on the near optimal solutions of the optimal control problem. For a fixed y let M_b(y) be the set of limit occupational measures induced by solutions of (5.1) with y fixed, whose support (which is in the space R^{n_2} × U) is bounded by b. We can now display the limit system. It is related to Limit Optimal Control Problem 3.2 exactly as Limit System 4.2 is related to Limit System 2.3.

Limit Optimal Control Problem 5.2. Consider the differential-probability measure control equation
\[
\text{minimize } \int_0^T\!\!\int_{R^{n_2}\times U} c(y(t), z, u)\,\mu(dz\,du)\,dt
\quad\text{subject to}\quad
\frac{dy}{dt} = \int_{R^{n_2}\times U} f(y, z, u)\,\mu(dz\,du), \quad \mu(t) \in M_b(y), \quad y(0) = y_0,
\tag{5.2}
\]
where M_b(y) is as defined earlier. Hence, the solution of (5.2) is of the form (y_0(t), μ_0(t)), where y_0(t) is obtained by solving (5.2) with μ_0(t) ∈ M_b(y_0(t)) being an appropriately chosen limit occupational measure, chosen so as to minimize the cost. Notice that the control variable is embedded in the limit occupational measure, namely, the latter, while composed by choosing the control function appropriately, serves as a control variable in the optimization problem (5.2).

We state now two additional assumptions under which the main result of this section follows. One is a controllability property of (5.1) for fixed y, namely, that any z_1 and z_2 bounded by b can be steered to each other within a prescribed time interval; the second is that the set-valued map M_b(y) is Lipschitz with respect to the Hausdorff distance on the space of probability measures with the weak convergence.

Observation 5.3. Under Assumption 5.1 and the additional controllability and Lipschitz continuity assumptions, on a short time interval past the initial time the fast state-control trajectory converges to the vicinity of the support of an invariant measure in M_b(y_0); furthermore, for small ε the near optimal solutions (y_ε(·), z_ε(·), u_ε(·)) on [0, T] of (3.1) are close, uniformly in the y-component and in the Young measure sense in the (z, u)-components, to a solution of Limit Optimal Control Problem 5.2; in particular, for every sequence ε_j → 0 there exists a subsequence ε_i such that the corresponding trajectories converge to a solution of (5.2); if, in turn, (y_0(t), μ_0(t)) is a solution of (5.2), then for small ε there exists a trajectory (y_ε(·), z_ε(·), u_ε(·)) that approximates the optimal trajectory and the optimal cost.

The previous observation has been stated and verified in a rigorous manner under several forms. Figure 3 depicts the structure alluded to in the observation when the z-space is replaced by the (z, u)-space.

The need to address extensions of the Kokotovic method in order to cover general optimal control problems, namely, the observation that optimal solutions may not converge, on the fast time scale, to an equilibrium, was made long ago; see Dontchev and Veliov [25], [26], Gaitsgory [29]. Leizarowitz [40], [41] has in fact shown that Assumption 3.4 is not valid for a substantial number of optimal control problems. The use of probability measures as a control variable, in particular the limit occupational measures, stemmed from the author's work [3] and has been developed in Artstein and Gaitsgory [13], [14], Vigodner [60]; see also [6]. The limit occupational measures have been studied by Gaitsgory and Leizarowitz [31], see also [16]; they are indeed invariant measures in a lifted space, see [9]. Characterizations of the limit occupational measures have been offered by Gaitsgory [30]. See [15] and a correction in [11] for a rigorous statement and proof of Observation 5.3; a counterexample for the possibility to drop the Lipschitz continuity of M_b(y) was given by Alvarez and Bardi [1] (see also [11]). Further applications have been developed in [8], [12]. A specific application whose solution was found in [8] is
\[
\text{maximize } \int_0^1 |z_1(t) - 2 z_2(t)|\,dt
\quad\text{subject to}\quad
\varepsilon\frac{dz_1}{dt} = -z_1 + u, \quad \varepsilon\frac{dz_2}{dt} = -2 z_2 + u
\tag{5.3}
\]
(the occupational measure structure of the solution occurs even without the slow state variable). The method has also been utilized in numerical considerations; see Finlay et al. [28] and Gaitsgory and Rossomakhine [32].

6. ODES WITHOUT SPLIT SLOW AND FAST STATES

In this section we consider the coupled slow and fast dynamics when the split x = (y, z) of the state x into slow and fast state variables may not be available. Lack of availability of this split may be due to the difficulty of identifying the slow and fast variables, although they exist (in this respect see Roberts [49]). But it may well happen that the state space cannot altogether be split into slow and fast components. The material of this section follows the theoretical contribution in Artstein, Kevrekidis, Slemrod and Titi [18]; an application is displayed, along with numerical results, in Artstein, Gear, Kevrekidis, Slemrod and Titi [17]. See also Slemrod [50] for a brief overview of these two papers. The equation we treat has the following structure.

System 6.1. Consider
\[
\frac{dx}{dt} = \frac{1}{\varepsilon}\, G(x) + F(x), \qquad x(0) = x_0.
\tag{6.1}
\]
Notice that there are no slow and fast state variables, yet there are fast and slow contributions to the dynamics, namely, the G(x) and F(x) components, respectively.
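A minimal planar illustration of the structure (6.1) (hypothetical data, not an example from [17] or [18]): take G(x) = (−x_2, x_1), a fast rotation, and F(x) = −x, a slow contraction. The trajectories x_ε(·) oscillate ever faster as ε → 0 and have no pointwise limit, yet the quantity v(x) = x_1² + x_2², which is preserved by the fast part G, evolves on the slow time scale (here exactly via dv/dt = −2v); this anticipates the slow observables introduced below.

```python
# A planar sketch (hypothetical data) of System 6.1: fast rotation
# G(x) = (-x2, x1) plus slow contraction F(x) = -x.  The state has no
# pointwise limit as eps -> 0, but v(x) = x1^2 + x2^2 is preserved by G and
# obeys dv/dt = -2 v along (6.1).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x, eps):
    x1, x2 = x
    return [-x2 / eps - x1, x1 / eps - x2]

t_eval = np.linspace(0.0, 2.0, 2001)
x0 = [1.0, 0.0]
for eps in (1e-2, 1e-3):
    sol = solve_ivp(rhs, (0.0, 2.0), x0, args=(eps,), t_eval=t_eval,
                    rtol=1e-8, atol=1e-10)
    v = sol.y[0]**2 + sol.y[1]**2
    v_exact = (x0[0]**2 + x0[1]**2) * np.exp(-2.0 * t_eval)
    print(f"eps={eps:6.0e}  x(2.0) = ({sol.y[0][-1]:+.3f}, {sol.y[1][-1]:+.3f})"
          f"   max|v - v0*e^(-2t)| = {np.max(np.abs(v - v_exact)):.2e}")
```

The endpoint x(2.0) keeps changing with ε (the rotation phase never settles), while the observable v follows the same slow curve for every small ε.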


The technical conditions we place on the right hand side of (6.1) are that both G(·) and F(·) are Lipschitz functions. The underlying assumption we employ is also minimal.

Assumption 6.2. The family of solutions x_ε(·) to (6.1), emanating from initial conditions x_0 in a bounded set, is uniformly bounded on [0, T], say by b.

Already under the above assumption the limit as ε → 0 of solutions of (6.1) has some structure. Recall the discussion about Young measures in Section 4.

Observation 6.3. Under Assumption 6.2 every sequence ε_j → 0 has a subsequence ε_i such that the solutions x_{ε_i}(·) converge in the sense of Young measures to a Young measure, say μ_0(·), and the values μ_0(t) of the limit are all invariant measures of
\[
\frac{dx}{dt} = G(x).
\tag{6.2}
\]

The previous observation carries some information about the location of the limits of solutions x_ε(·) to (6.1), but it does not reveal how the values of the limit measure-valued map μ_0(·) evolve. Recall that when slow and fast state variables are available, the limit measure was defined on the fast state, parameterized by the slow state, and a differential equation for the evolution of the latter was available; see Limit System 4.2. Here we suggest that the values of μ_0(·) be parameterized by slow observables. These are first integrals of the differential equation (6.2), namely, real-valued functions preserved by the dynamics (6.2). Such slow observables are, in particular, constant on the support of each invariant measure μ_0(t), and the evolution of the slow observable may reveal information about the evolution of the invariant measure (exactly as the evolution of the slow state, say the I element in (4.2), reveals information about the invariant measure, say the limit torus in the φ-space in (4.2)). The relevant observation is as follows.

Observation 6.4. Under Assumption 6.2 let v(·) be a continuously differentiable slow observable (namely, a first integral) of (6.2). Then v(·) satisfies the differential equation
\[
\frac{dv}{dt} = \int_{R^n} \nabla v(x)\cdot F(x)\,\mu_0(t)(dx), \qquad v(0) = v(x_0).
\tag{6.3}
\]

Notice that before μ_0(t) becomes known, the relation (6.3) is not an ordinary differential equation, as it may not determine the evolution of μ_0(t) itself. If a system of slow observables determines the invariant measure, then the collection of equations (6.3) for the ensemble of the observables is a closed system. The abstract developments above were applied in [17] to the concrete equation
\[
\frac{dU_k}{d\tau} = -\,\frac{U_k\,(U_{k+1} - U_{k-1})}{2h} + \delta\,\frac{U_{k+1} - 2U_k + U_{k-1}}{h^2},
\tag{6.4}
\]
with a small diffusion coefficient δ (see [17] for the precise scaling of the two terms), which forms a special discretization of a KdV-Burgers type equation with fast dispersion and slow diffusion. The fast system possesses polynomial slow observables, as had been established by Goodman and Lax [34]. The above structure has been applied in [17] to efficiently produce numerical results for the evolution of both the slow observables and the limit invariant measure.

7. OPTIMAL CONTROL WITHOUT SPLIT SLOW AND FAST STATES

In this section we report telegraphically on research in progress that examines optimal control problems where the split into slow and fast state variables is not available. The outline and reasoning of the discussion follow the considerations in the previous sections.

Optimal Control Problem 7.1. Consider
\[
\text{minimize } \int_0^T c(x(t), u(t))\,dt
\quad\text{subject to}\quad
\frac{dx}{dt} = \frac{1}{\varepsilon}\, G(x, u) + F(x, u), \qquad x(0) = x_0.
\tag{7.1}
\]

Assumption 7.2. A family of near optimal solutions (x_ε(·), u_ε(·)) to (7.1) exists, where the trajectories are uniformly bounded (in R^n × U), say by b, on [0, T].

Observation 7.3. Under Assumption 7.2 every sequence ε_j → 0 has a subsequence ε_i such that the trajectories (x_{ε_i}(·), u_{ε_i}(·)) converge in the sense of Young measures to, say, μ_0(·), and the values μ_0(t) of the limit are all limit occupational measures of
\[
\frac{dx}{dt} = G(x, u)
\tag{7.2}
\]
with supports in R^n × U bounded by b.

With the machinery developed so far the previous observation is straightforward. This suggests the following limit problem.

Limit Optimal Control Problem 7.4. Consider
\[
\text{minimize } \int_0^T\!\!\int_{R^n\times U} c(x, u)\,\mu(dx\,du)\,dt
\quad\text{subject to}\quad
\mu(t) \in M_b, \qquad x(0) = x_0,
\tag{7.3}
\]
where M_b is the space of limit occupational measures with support (in R^n × U) bounded by b.

The solution to Problem 7.4 is a Young measure μ_0(·) that minimizes the cost functional among the Young measures whose values are limit occupational measures. Under a strong controllability assumption the optimal control problem (7.3) captures the limit behavior of near optimal solutions, and its solution gives rise to near optimal solutions in the same manner as is done in Section 5. An example of such a problem is (5.3), worked out in detail in [8].
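A rough numerical companion to (5.3) (a sketch only; the control constraint u(t) ∈ [0, 1] and the particular switching policy are illustrative assumptions, and the sketch does not reproduce the optimal solution constructed in [8]): any constant control drives z_1 − 2z_2 to zero after the boundary layer and therefore earns only an O(ε) payoff, whereas a control switching between 0 and 1 on the fast time scale keeps |z_1 − 2z_2| oscillating and earns a payoff bounded away from zero, exhibiting the occupational-measure structure of the solution.

```python
# A numerical companion to (5.3) -- a sketch only.  The constraint u in [0, 1]
# and the switching policy are illustrative assumptions, not the optimal
# solution of [8].  Constant controls earn O(eps); fast switching does not.
import numpy as np

def payoff(u_of_t, eps, n_per_eps=200):
    # explicit Euler on [0, 1] with a step well below the fast time scale
    h = eps / n_per_eps
    n = int(round(1.0 / h))
    z1 = z2 = 0.0
    total = 0.0
    for i in range(n):
        t = i * h
        u = u_of_t(t)
        total += abs(z1 - 2.0 * z2) * h      # running integral of the payoff
        z1 += h * (-z1 + u) / eps            # eps dz1/dt = -z1 + u
        z2 += h * (-2.0 * z2 + u) / eps      # eps dz2/dt = -2 z2 + u
    return total

eps = 1e-3
constant = lambda t: 0.5
switching = lambda t: 1.0 if (t / eps) % 2.0 < 1.0 else 0.0   # period 2*eps
print("payoff, constant control u = 0.5 :", round(payoff(constant, eps), 4))
print("payoff, fast switching control  :", round(payoff(switching, eps), 4))
```

The limit of such rapidly switching controls is captured precisely by a limit occupational measure on the (z, u)-space, which is the object the limit problems (5.2) and (7.3) optimize over.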


With lack of full controllability, one has to divide the space into controllable subsets, that in turn depend on the initial condition, and examine the possibility to move from one such set to another via the slow drift. When such a condition on the evolution μ_0(t) is added to (7.3), one gets an appropriate limit control problem.

REFERENCES

[1] O. Alvarez and M. Bardi, Singular perturbations of nonlinear degenerate parabolic PDEs: a general convergence result. Arch. Ration. Mech. Anal. 170 (2003), 17-61.
[2] V.I. Arnold, Geometrical Methods in the Theory of Ordinary Differential Equations. Springer-Verlag, New York, 1983.
[3] Z. Artstein, Chattering variational limits of control systems. Forum Mathematicum 5 (1993), 369-403.
[4] Z. Artstein, Stability in the presence of singular perturbations. Nonlinear Analysis 34 (1998), 817-827.
[5] Z. Artstein, Singularly perturbed ordinary differential equations with nonautonomous fast dynamics. J. Dynamics and Diff. Equa. 11 (1999), 297-318.
[6] Z. Artstein, The chattering limit of singularly perturbed optimal control problems. Proceedings of the 39th IEEE Conference on Decision and Control, Sydney, Australia, December 2000, pp. 564-569.
[7] Z. Artstein, On singularly perturbed ordinary differential equations with measure-valued limits. Mathematica Bohemica 127 (2002), 139-152.
[8] Z. Artstein, An occupational measure solution to a singularly perturbed optimal control problem. Control and Cybernetics 31 (2002), 623-642.
[9] Z. Artstein, Invariant measures and their projections in nonautonomous dynamical systems. Stochastics and Dynamics 4 (2004), 439-459.
[10] Z. Artstein, Distributional convergence in planar dynamics and singular perturbations. J. Differential Equations 201 (2004), 250-286.
[11] Z. Artstein, On the value function of singularly perturbed optimal control systems. Proceedings of the 43rd IEEE Conference on Decision and Control, Paradise Island, Bahamas, 2004, pp. 432-437.
[12] Z. Artstein, Bang-Bang controls in the singular perturbations limit. Control and Cybernetics 34 (2005), 645-663.
[13] Z. Artstein and V. Gaitsgory, Tracking fast trajectories along a slow dynamics: A singular perturbations approach. SIAM J. Control and Opt. 35 (1997), 1487-1507.

[14] Z. Artstein and V. Gaitsgory, Linear-quadratic tracking of coupled slow and fast targets. Math. Cont. Sign. Syst. 10 (1997), 1-30.
[15] Z. Artstein and V. Gaitsgory, The value function of singularly perturbed control systems. Applied Mathematics and Optimization 41 (2000), 425-445.
[16] Z. Artstein and V. Gaitsgory, Convergence to compact convex sets in infinite dimensions. J. Mathematical Analysis and Applications 284 (2003), 471-480.
[17] Z. Artstein, C.W. Gear, I.G. Kevrekidis, M. Slemrod and E.S. Titi, Analysis and computation of a discrete KdV-Burgers type equation with fast dispersion and slow diffusion. Article available at arXiv:0908.2752v1.
[18] Z. Artstein, I.G. Kevrekidis, M. Slemrod and E.S. Titi, Slow observables of singularly perturbed differential equations. Nonlinearity 20 (2007), 2463-2481; online at http://stacks.iop.org/0951-7715/20/2463.
[19] Z. Artstein, J. Linshiz and E.S. Titi, Young measure approach to computing slowly advancing fast oscillations. Multiscale Modeling and Simulation 6 (2007), 1085-1097.
[20] Z. Artstein and M. Slemrod, On singularly perturbed retarded functional differential equations. J. Differential Equations 171 (2001), 88-109.
[21] Z. Artstein and M. Slemrod, The singular perturbation limit of an elastic structure in a rapidly flowing inviscid fluid. Quarterly of Applied Mathematics 59 (2000), 543-555.
[22] Z. Artstein and A. Vigodner, Singularly perturbed ordinary differential equations with dynamic limits. Proceedings of the Royal Society of Edinburgh 126A (1996), 541-569.
[23] E.J. Balder, Lectures on Young measure theory and its applications to economics. Rend. Istit. Mat. Univ. Trieste 31 (2000), supplemento 1, pp. 1-69.
[24] N.N. Bogoliubov and Y.A. Mitropolsky, Asymptotic Methods in the Theory of Non-Linear Oscillations. Hindustan Publishing Co., Delhi, 1961.
[25] A.L. Dontchev and V.M. Veliov, Singular perturbations in linear control systems with weakly coupled stable and unstable fast subsystems. J. Math. Anal. Appl. 110 (1985), 1-30.
[26] A.L. Dontchev and V.M. Veliov, On the order reduction of linear optimal control systems in critical cases. In: Systems and Optimization, A. Bagchi and H.Th. Jongen, Eds., Lecture Notes in Control and Inform. Sci. 66, Springer, Berlin, 1985, pp. 61-73.
[27] W. E and B. Engquist, The heterogeneous multiscale methods. Commun. Math. Sci. 1 (2003), 87-132.


[28] L. Finlay, V. Gaitsgory and I. Lebedev, Duality in linear programming problems related to deterministic long run average problems of optimal control. SIAM J. Control Optim. 47 (2008), 1667-1700.
[29] V. Gaitsgory, On the use of the averaging method in control problems. Differential Equations 22, 1290-1299.
[30] V. Gaitsgory, On a representation of the limit occupational measures set of a control system with applications to singularly perturbed control systems. SIAM J. Control Optim. 43 (2004), 325-340.
[31] V. Gaitsgory and A. Leizarowitz, Limit occupational measures set for a control system and averaging of singularly perturbed control systems. J. Mathematical Analysis and Applications 233 (1999), 461-475.
[32] V. Gaitsgory and S. Rossomakhine, Linear programming approach to deterministic long run average problems of optimal control. SIAM J. Control Optim. 44 (2006), 2006-2037.
[33] D. Givon, R. Kupferman and A. Stuart, Extracting macroscopic dynamics: model problems and algorithms. Nonlinearity 17 (2004), R55-R127.
[34] J. Goodman and P.D. Lax, On dispersive difference schemes. I. Comm. Pure Appl. Math. 41 (1988), 591-613.
[35] P.V. Kokotovic, Applications of singular perturbations techniques to control problems. SIAM Rev. 26 (1984), 501-550.
[36] P.V. Kokotovic and H.K. Khalil, Singular Perturbations in Systems and Control. IEEE Selected Reprint Series, IEEE Press, New York, 1986.
[37] P.V. Kokotovic, H.K. Khalil and J. O'Reilly, Singular Perturbation Methods in Control: Analysis and Design. Academic Press, London, 1986; reprinted as Classics in Applied Mathematics 25, SIAM, Philadelphia, 1999.
[38] P.V. Kokotovic, R.E. O'Malley and P. Sannuti, Singular perturbations and order reduction in control theory - An overview. Automatica 12 (1976), 123-132.
[39] N. Kryloff and N. Bogoliuboff, La théorie générale de la mesure dans son application à l'étude des systèmes dynamiques de la mécanique non linéaire. Ann. of Math. 38 (1937), 65-113.
[40] A. Leizarowitz, Order reduction is invalid for singularly perturbed control problems with vector fast variable. Math. Control Signals and Systems 15 (2002), 101-119.
[41] A. Leizarowitz, On the order reduction approach for singularly perturbed optimal control systems. Set-Valued Analysis 10 (2002), 185-207.

[42] N. Levinson, Perturbations of discontinuous solutions of non-linear systems of differential equations. Acta Math. 82 (1950), 71-106.
[43] D.S. Naidu, Singular perturbations and time scales in control theory: An overview. Dynam. Cont. Discrete Impul. Systems Ser. B 9 (2002), 233-278.
[44] D.S. Naidu and A.J. Calise, Singular perturbations and time scales in guidance and control of aerospace systems: A survey. J. Guid. Control Dynam. 24 (2001), 1057-1078.
[45] R.E. O'Malley, Jr., Singular perturbations and optimal control. In Mathematical Control Theory, W.A. Coppel, ed., Lecture Notes in Mathematics 680, Springer-Verlag, Berlin, 1978, pp. 170-218.
[46] R.E. O'Malley, Jr., Singular Perturbation Methods for Ordinary Differential Equations. Springer-Verlag, New York, 1991.
[47] P. Pedregal, Parameterized Measures and Variational Principles. Birkhäuser Verlag, Basel, 1997.
[48] L.S. Pontryagin and L.V. Rodygin, Approximate solution of a system of ordinary differential equations involving a small parameter in the derivatives. Dokl. Akad. Nauk SSSR 131, 255-258 (Russian); translated as Soviet Math. Dokl. 1 (1960), 237-240.
[49] A.J. Roberts, Create useful low-dimensional models of complex dynamical systems. Technical report, http://www.sci.usq.edu.au/staff/robertsa/Modelling/, 2005.
[50] M. Slemrod, Averaging of fast-slow systems. Proceedings of the 6th Conference on Algorithms for Approximation, A. Gorban, Ed., Ambleside, UK, Aug. 31 - Sept. 4, 2009.
[51] D.R. Smith, Singular-Perturbation Theory. Cambridge University Press, Cambridge, 1985.
[52] L. Tartar, Compensated compactness and applications in partial differential equations. Nonlinear Analysis and Mechanics, R.J. Knops, ed., Pitman, London, 1979, pp. 136-211.
[53] A.N. Tikhonov, On the dependence of the solutions of differential equations on a small parameter. (Russian) Mat. Sbornik N.S. 22(64) (1948), 193-204.
[54] A.N. Tikhonov, Systems of differential equations containing small parameters in the derivatives. Mat. Sbornik N.S. 31 (1952), 575-586.
[55] A.N. Tikhonov, A.B. Vasil'eva and A.G. Sveshnikov, Differential Equations. Springer-Verlag, Berlin, 1985.
[56] M. Valadier, A course on Young measures. Rend. Istit. Mat. Univ. Trieste 26 (1994), suppl., 349-394.
[57] E. Vanden-Eijnden, Numerical techniques for multiscale dynamical systems with stochastic effects. Communications Mathematical Sciences 1 (2003), 385-391.


[58] A.B. Vasil'eva, Asymptotic behavior of solutions to certain problems involving nonlinear differential equations containing a small parameter multiplying the highest derivative. Russian Math. Surveys 18 (3) (1963), 15-86.
[59] F. Verhulst, Methods and Applications of Singular Perturbations. Texts in Applied Mathematics 50, Springer, New York, 2005.
[60] A. Vigodner, Limits of singularly perturbed control problems with statistical dynamics of fast motions. SIAM J. Control Optim. 35 (1997), 1-28.
[61] V.M. Volosov, Averaging in systems of ordinary differential equations. Russian Math. Surveys 17 (6) (1962), 1-126.
[62] J. Warga, Optimal Control of Differential and Functional Equations. Academic Press, New York, 1972.
[63] W. Wasow, Asymptotic Expansions for Ordinary Differential Equations. Wiley Interscience, New York, 1965.
[64] L.C. Young, Lectures on the Calculus of Variations and Optimal Control Theory. Saunders, New York, 1969.

