
5

Cost-Cumulants and Risk-Sensitive Control

Chang-Hee Won
Department of Electrical Engineering, University of North Dakota, Grand Forks, North Dakota, USA

5.1 Introduction
5.2 Linear-Quadratic-Gaussian Control
5.3 Cost-Cumulant Control
    5.3.1 Minimal Cost Variance Control
5.4 Risk-Sensitive Control
5.5 Relationship Between Risk-Sensitive and Cost-Cumulant Control
5.6 Applications
    5.6.1 Risk-Sensitive Control Applied to Satellite Attitude Maneuver
    5.6.2 MCV Control Applied to Seismic Protection of Structures
5.7 Conclusions
References

5.1 Introduction

Cost-cumulant control, also known as statistical control, is an optimal control method that minimizes a linear combination of quadratic cost cumulants. Risk-sensitive control is an optimal control method that minimizes the exponential of the quadratic cost criterion. This is equivalent to optimizing a denumerable sum of all the cost cumulants.

Optimal control theory deals with the optimization, either minimization or maximization, of a given cost criterion. Linear-quadratic-Gaussian control, minimum cost-variance control, and risk-sensitive control are discussed here in terms of cost cumulants. Figure 5.1 presents an overview of optimal control and the relationships among different optimal control methods.

5.2 Linear-Quadratic-Gaussian Control

The linear-quadratic-Gaussian (LQG) control method optimizes the mean, which is the first cumulant, of a quadratic cost criterion (Anderson and Moore, 1989; Davis, 1977; Kwakernaak and Sivan, 1972).

Typical system dynamics for LQG control are given by the stochastic equations:

    dx(t) = Ax(t)dt + Bk(t, x)dt + E(t)dw(t),
    y(t)dt = Cx(t)dt + dv(t).     (5.1)

Here w(t) and v(t) are vector Brownian motions. The meaning of a Brownian motion, such as w(t), can be given directly or in terms of its differential dw(t). In the latter case, dw(t) is a Gaussian random process with zero mean, covariance matrix W dt, and independent increments. A similar description applies to dv(t), with covariance V dt. It is assumed that dw(t) and dv(t) are independent. The matrices A, B, C, and E are of compatible size. It should be remarked that the formalism of equation 5.1 is that of a stochastic differential equation. Intuitively, one thinks of dividing both sides of this equation by dt to obtain the more colloquial form. But the formal derivative of a Brownian motion, which is known as white noise, is not a well-defined random process, and this motivates an alternate way of thinking.

The quadratic cost criterion is given by:

    J(k) = ∫ (x'Qx + k'Rk)dt.     (5.2)

The weighting matrix Q is a symmetric and positive semidefinite matrix, and R is a symmetric and positive definite matrix.
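To make the preceding remarks concrete, equation 5.1 can be simulated by drawing each increment dw(t) as a zero-mean Gaussian vector with covariance W dt, which avoids any reference to the ill-defined white-noise derivative. The following sketch in Python is illustrative only: the matrices, control law, and step size are placeholders rather than values from this chapter.

import numpy as np

def simulate_sde(A, B, E, W, k, x0, dt=1e-3, T=10.0, seed=0):
    """Euler-Maruyama integration of dx = (A x + B k(t, x)) dt + E dw,
    with dw drawn as N(0, W dt) at each step (cf. equation 5.1)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    L = np.linalg.cholesky(W)                 # W = L L'
    for i in range(int(T / dt)):
        t = i * dt
        dw = L @ rng.standard_normal(W.shape[0]) * np.sqrt(dt)
        x = x + (A @ x + B @ k(t, x)) * dt + E @ dw
        traj.append(x.copy())
    return np.array(traj)

# Illustrative two-state system with the control held at zero.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
E, W = np.eye(2), 0.01 * np.eye(2)
traj = simulate_sde(A, B, E, W, k=lambda t, x: np.zeros(1), x0=[1.0, 0.0])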

[Figure 5.1: block diagram with labeled blocks for stochastic control, cost-cumulant control, LQG, minimal cost variance control, risk-sensitive control, H-infinity control, game theory, and deterministic control.]
FIGURE 5.1 Relationship Between Various Optimal and Robust Control Methods. (Sain et al., 2000; Jacobson, 1973; Won and Sain, 1995; Glover and Doyle, 1988; Whittle, 1990; Rhee and Speyer, 1992; Runolfsson, 1994; Uchida, 1989; Basar and Bernhard, 1991.)

The LQG control problem then becomes a minimization of the mean quadratic cost over the feedback controller k:

    J* = min_k E{J(k)}.     (5.3)

The full-state feedback control problem is to choose the control k as a function of the state x so that the cost criterion of equation 5.3 is minimized. The general partial observation or output feedback control problem is to choose the control k as a function of the observation y so that the cost of equation 5.3 is minimized.

Assume now that the problem has a solution of the quadratic form ½x'Πx. The matrix Π can be found from the Riccati equation:

    0 = Π̇(t) + Q + A'Π(t) + Π(t)A − Π(t)BR⁻¹B'Π(t),     (5.4)

where Π(tF) = 0. Then the full-state feedback optimal controller is given by (Davis, 1977):

    k(t, x) = −R⁻¹B'Π(t)x(t).     (5.5)

The solution of the output feedback LQG problem is found using the certainty equivalence principle. The optimal control is found using a Kalman filter, where an optimal estimate x̂ is obtained such that E{(x − x̂)'(x − x̂)} is minimum. Then this estimate is used as if it were an exact measurement of the state to solve the deterministic LQG control.

For the output feedback case, the estimated states are given by:

    dx̂/dt = Ax̂ + Bk + PC'V⁻¹(y − Cx̂),     (5.6)

where P satisfies the forward Riccati equation:

    Ṗ(t) = W + AP(t) + P(t)A' − P(t)C'V⁻¹CP(t).     (5.7)

In equation 5.7, the initial condition is P(0) = cov(x0). Finally, the optimal output feedback controller is given as:

    k(t, x) = −R⁻¹B'Π(t)x̂(t).     (5.8)
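For a time-invariant problem, the gains in equations 5.5, 5.6, and 5.8 are often evaluated in steady state, in which case the two Riccati differential equations reduce to algebraic Riccati equations. The sketch below is a minimal illustration of that computation using scipy's algebraic Riccati solver; the system matrices are placeholders, and the result is the constant-gain approximation of the finite-horizon controller above.

import numpy as np
from scipy.linalg import solve_continuous_are

def lqg_gains(A, B, C, Q, R, W, V):
    """Steady-state versions of the control Riccati equation (5.4) and the
    filter Riccati equation (5.7); returns the feedback gain of eqs. 5.5/5.8
    and the Kalman gain used in eq. 5.6."""
    Pi = solve_continuous_are(A, B, Q, R)          # 0 = A'Pi + Pi A - Pi B R^-1 B' Pi + Q
    K = np.linalg.solve(R, B.T @ Pi)               # k(t, x) = -K x (or -K x_hat)
    P = solve_continuous_are(A.T, C.T, W, V)       # 0 = A P + P A' - P C' V^-1 C P + W
    L = P @ C.T @ np.linalg.inv(V)                 # filter gain P C' V^-1
    return K, L

# Illustrative double-integrator example.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.array([[1.0]])
W, V = 0.1 * np.eye(2), np.array([[0.01]])
K, L = lqg_gains(A, B, C, Q, R, W, V)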
5.3 Cost-Cumulant Control

5.3.1 Minimal Cost Variance Control

Minimum cost variance (MCV) control is a special case of cost-cumulant or statistical control in which the second cumulant, the variance, is minimized while the first cumulant, the mean, is kept at a prespecified level.

Here, open-loop MCV and full-state feedback MCV control laws are discussed. An open-loop control law is a function u: [0, tF] → U, where U is some specified allowable set of control values. A closed-loop or feedback control law is a function that depends on time and the past evolution of the process [i.e., u(t, x(s); 0 ≤ s ≤ t)].

Open-Loop MCV Control
Consider a linear system (Sain and Liberty, 1971):

    dx(t) = A(t)x(t)dt + B(t)u(t)dt + E(t)dw(t),     (5.9)

and the performance measure:

    J = ∫₀^tF [x'(t)Qx(t) + u'(t)Ru(t)]dt + x'(tF)QF x(tF),     (5.10)

where w(t) is zero mean with white characteristics relative to the system, tF is the fixed final time, x(t) ∈ Rⁿ is the state of the system, and u(t) ∈ Rᵐ is the control action. Note that:

    E{dw(t)dw'(t)} = W dt.     (5.11)

The fundamental idea behind minimal cost-variance control is to minimize the variance of the cost criterion J:

    J_MV = VAR_k{J},     (5.12)


while satisfying a constraint:

    E_k{J} = M,     (5.13)

where J is the cost criterion and where the subscript k on E denotes the expectation based on a control law k generating the control action u(t) from the state x(t) or from a measurement history arising from that state. By means of a Lagrange multiplier μ corresponding to the constraint of equation 5.13, one can form the function:

    J_MV = μ(E_k{J} − M) + VAR_k{J},     (5.14)

which is equivalent to minimizing:

    J_MV = μE_k{J} + VAR_k{J}.     (5.15)

A Riccati solution to J_MV minimization is developed for the open-loop case:

    u(t) = k(t, x(0)).     (5.16)

The solution is based on the differential equations:

    ż(t) = A(t)z(t) − ½B(t)R⁻¹B'(t)ρ(t),     (5.17)
    ρ̇(t) = −A'(t)ρ(t) − 2Qz(t) − 8μQv(t),     (5.18)
    v̇(t) = A(t)v(t) + E(t)WE'(t)γ(t),     (5.19)
    γ̇(t) = −A'(t)γ(t) − Qz(t).     (5.20)

These equations have the boundary conditions:

    z(0) = x(0),     (5.21)
    ρ(tF) = 2QF z(tF) + 8μQF v(tF),     (5.22)
    v(0) = 0,     (5.23)
    γ(tF) = QF z(tF).     (5.24)

The equations also have the control action relationship:

    u(t) = −½R⁻¹B'(t)ρ(t).     (5.25)

The variable z(t) is the mathematical expectation of x(t). The variable ρ(t) corresponds to the costate variable of optimal control theory because it is the variable that enforces the differential equation constraint between z(t) and v(t). The variables v(t) and γ(t) are introduced to reduce the integro-differential equation.

Full-State Feedback Minimal Cost-Variance Control
Consider the Ito-sense stochastic differential equation (SDE) with control (Sain et al., 2000):

    dx(t) = [A(t)x(t) + B(t)k(t, x)]dt + E(t)dw(t),

with the cost criterion:

    J(t, x(t), k) = ∫ₜ^tF [x'(s)Qx(s) + k'(s, x)R(s)k(s, x)]ds + x'(tF)QF x(tF).     (5.26)

In MCV control, we define a class of admissible controllers, and then the cost variance is minimized within that class of controllers. Define V₁(t, x; k) = E{J(t, x(t), k) | x(t) = x} and V₂(t, x; k) = E{J²(t, x(t), k) | x(t) = x}. A function M is an admissible mean cost criterion if there exists an admissible control law k such that:

    V₁(t, x; k) = M(t, x),     (5.27)

for all t ∈ [0, tF] and x ∈ Rⁿ.

A minimal mean cost control law k*_M satisfies V₁(t, x; k*_M) = V₁*(t, x) ≤ V₁(t, x; k) for t ∈ T, x ∈ Rⁿ, and for any admissible control law k. An MCV control law k_V|M satisfies V₂(t, x; k_V|M) = V₂*(t, x) ≤ V₂(t, x; k) for t ∈ T, x ∈ Rⁿ, whenever k is admissible. The corresponding minimal cost variance is given by V*(t, x) = V₂*(t, x) − M²(t, x) for t ∈ T, x ∈ Rⁿ. Here the full-state feedback solution of the MCV control problem is presented for a linear system and a quadratic cost criterion.

The linear optimal MCV controller is then given by (Sain et al., 2000):

    k_V|M(t, x) = −R⁻¹(t)B'(t)[M(t) + γV(t)]x,

where M and V are the solutions of the coupled Riccati-type equations (suppressing the time argument):

    0 = Ṁ + A'M + MA + Q − MBR⁻¹B'M + γ²VBR⁻¹B'V,     (5.28)
    0 = V̇ + 4MEWE'M + A'V + VA − MBR⁻¹B'V − VBR⁻¹B'M − 2γVBR⁻¹B'V,     (5.29)

with boundary conditions M(tF) = QF and V(tF) = 0. Once again, if γ approaches zero, classic LQG results are obtained.

This MCV idea can be generalized to minimize any cost cumulant. Viewing the cost function as a random variable and optimizing any cost cumulant is called cost-cumulant or statistical control.
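Equations 5.28 and 5.29 can be integrated backward in time from the terminal conditions M(tF) = QF and V(tF) = 0 to obtain the MCV gain. The sketch below is one way to do this with a fixed-step Euler scheme; the matrices, horizon, and γ are illustrative assumptions, not values from this chapter. Setting γ = 0 should reproduce the LQG Riccati solution, as noted above.

import numpy as np

def mcv_riccati(A, B, E, Q, R, W, QF, gamma, tF, n_steps=2000):
    """Backward Euler integration of the coupled MCV equations (5.28)-(5.29)
    from M(tF) = QF, V(tF) = 0; returns M(0), V(0), and the gain in
    k(t, x) = -R^-1 B' (M + gamma V) x."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    S = B @ Rinv @ B.T                       # shorthand for B R^-1 B'
    M, V = QF.astype(float), np.zeros((n, n))
    dt = tF / n_steps
    for _ in range(n_steps):
        Mdot = -(A.T @ M + M @ A + Q - M @ S @ M + gamma**2 * V @ S @ V)
        Vdot = -(4.0 * M @ E @ W @ E.T @ M + A.T @ V + V @ A
                 - M @ S @ V - V @ S @ M - 2.0 * gamma * V @ S @ V)
        M, V = M - Mdot * dt, V - Vdot * dt  # step from tF backward toward t = 0
    K = Rinv @ B.T @ (M + gamma * V)
    return M, V, K

# Illustrative example; gamma = 0 reduces to the LQG Riccati solution.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
E, Q, R, W = np.eye(2), np.eye(2), np.array([[1.0]]), 0.1 * np.eye(2)
M0, V0, K = mcv_riccati(A, B, E, Q, R, W, QF=np.eye(2), gamma=0.5, tF=5.0)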
5.4 Risk-Sensitive Control

A large class of control systems can be described in state-variable form by the stochastic equations (Anderson and Moore, 1989; Whittle, 1996):

    dx(t) = Ax(t)dt + Bk(t, x)dt + dw(t),
    y(t)dt = Cx(t)dt + dv(t).     (5.30)

Here, x(t) is an n-dimensional state vector, k(t, x) is an m-dimensional input vector, w(t) is a q-dimensional disturbance vector of Brownian motions, y(t) is a p-dimensional vector of output measurements, and v(t) is an r-dimensional output noise vector of Brownian motions that affect the measurements being taken.

The risk-sensitive cost criterion is given by:

    J_RS(θ) = −θ⁻¹ log E_k{e^(−θJ)},     (5.31)

where J is the classical quadratic cost criterion:

    J = ∫ (x'Qx + k'Rk)dt.     (5.32)

The RS control problem then becomes a minimization of the cost J_RS(θ) over the feedback controller k:

    J*_RS(θ) = min_k J_RS(θ).     (5.33)

Assume a solution of the quadratic form ½x'Πx − σ'x + (terms independent of x). The matrix Π can be found from the Riccati-type equation:

    0 = Π̇(t) + Q + A'Π(t) + Π(t)A − Π(t)(BR⁻¹B' + θW)Π(t),     (5.34)

where Π(tF) = 0. Then, the full-state feedback optimal controller is given by (Whittle, 1996):

    k(t, x) = −R⁻¹B'Π(t)x(t) + R⁻¹B'σ(t),     (5.35)

where σ̇(t) + (A − Π(BR⁻¹B' + θW))'σ(t) = 0 is a backward linear equation. The matrix P satisfies the forward Riccati-type equation:

    Ṗ(t) = W + AP(t) + P(t)A' − P(t)(C'V⁻¹C + θQ)P(t),     (5.36)

where P(0) = cov(x0). The updating equation for the risk-sensitive Kalman filter is given by:

    dx̃/dt = Ax̃ + Bk + PC'V⁻¹(y − Cx̃) − θPQx̃,     (5.37)

where x̃(0) = 0. Here x̃ denotes the mean of x conditional on the initial information, current observation history, and previous control history. Finally, the optimal output feedback controller is given as (Whittle, 1996):

    k(t, x) = −R⁻¹B'Π(t)x̌(t) + R⁻¹B'σ(t),     (5.38)

where x̌ is the minimal-stress estimate of x, given by:

    x̌(t) = (I + θP(t)Π(t))⁻¹(x̃(t) + θP(t)σ(t)).     (5.39)

As θ approaches zero, the cost criterion of equation 5.31 becomes E_k{J}, and the matrices Π and P are obtained from the Riccati equations:

    0 = Π̇(t) + Q + A'Π(t) + Π(t)A − Π(t)BR⁻¹B'Π(t),     (5.40)
    Ṗ(t) = W + AP(t) + P(t)A' − P(t)C'V⁻¹CP(t).     (5.41)

Thus, the classic LQG result is obtained as θ approaches zero.
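Numerically, equation 5.34 differs from the LQG Riccati equation 5.4 only through the θW term, so the same backward integration applies. A minimal sketch under illustrative assumptions (the matrices, horizon, and θ below are placeholders, not values from this chapter):

import numpy as np

def rs_riccati(A, B, Q, R, W, theta, tF, n_steps=2000):
    """Backward Euler integration of the risk-sensitive Riccati equation (5.34),
    0 = Pidot + Q + A'Pi + Pi A - Pi (B R^-1 B' + theta W) Pi, with Pi(tF) = 0."""
    S = B @ np.linalg.solve(R, B.T) + theta * W   # theta = 0 recovers equation 5.4
    Pi = np.zeros_like(A, dtype=float)
    dt = tF / n_steps
    for _ in range(n_steps):
        Pidot = -(Q + A.T @ Pi + Pi @ A - Pi @ S @ Pi)
        Pi = Pi - Pidot * dt                      # step from tF backward toward t = 0
    K = np.linalg.solve(R, B.T @ Pi)              # state-feedback part of equation 5.35
    return Pi, K

# Illustrative system; as theta -> 0 the result approaches the LQG solution (eq. 5.40).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R, W = np.eye(2), np.array([[1.0]]), 0.1 * np.eye(2)
Pi, K = rs_riccati(A, B, Q, R, W, theta=0.05, tF=5.0)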
5.5 Relationship Between Risk-Sensitive and Cost-Cumulant Control

To see the relationship between RS and cost-cumulant control, consider a cost criterion:

    J = ∫₀^tF [x'(t)Qx(t) + k'(t, x)R(t)k(t, x)]dt + x'(tF)QF x(tF).     (5.42)

Classical LQG control minimizes the first cumulant, or the mean, of the cost criterion of equation 5.42. In MCV control, the second cumulant of equation 5.42 is minimized while the mean is kept at a prespecified level. Furthermore, RS control minimizes an infinite linear combination of the cost cumulants. To see this, consider an RS cost criterion:

    J_RS = −θ⁻¹ log(E{exp(−θJ)}),     (5.43)

where θ is a real parameter and E denotes expectation. Then, the moment-generating function, or the first characteristic function, is given by:

    φ(s) = E{exp(−sJ)}.     (5.44)

The cumulant-generating function ψ(s) is defined by:

    ψ(s) = log φ(s) = Σ_{i=1}^∞ ((−1)^i / i!) β_i s^i,     (5.45)

in which the {β_i} are known as the cumulants, or sometimes the semi-invariants, of J. Now, by comparing equations 5.43, 5.44, and 5.45, it is noted that:

    J_RS = Σ_{i=1}^∞ ((−θ)^(i−1) / i!) β_i(J),     (5.46)

where β_i(J) denotes the ith cumulant of J with respect to the control law k. Thus, it is important to note that the RS cost criterion is an infinite linear combination of the cost cumulants. Moreover, approximating to the second order:

    J_RS = β₁(J) − (θ/2)β₂(J) + O(θ²)
         = E{J} − (θ/2)VAR{J} + O(θ²).     (5.47)

Therefore, the minimal cost mean and minimal cost variance problems can be viewed as first- and second-order approximations of the RS control problem, respectively. Minimizing VAR{J} under the restriction that the first cumulant E{J} exists is called the minimal cost variance (MCV) problem. Moreover, minimizing any linear combination of cost cumulants under certain restrictions would be called cost-cumulant or statistical control. Thus, classical LQG control (optimization of the first cumulant), MCV control (optimization of the second cumulant), and RS control (optimization of an infinite number of cumulants) are all special cases of the cost-cumulant control problem.
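The expansion in equations 5.46 and 5.47 can be checked numerically for any scalar random cost with light tails: for small θ, −θ⁻¹ log E{exp(−θJ)} should be close to E{J} − (θ/2)VAR{J}. The sketch below uses a synthetic quadratic cost rather than any system from this chapter.

import numpy as np

rng = np.random.default_rng(0)
# Synthetic quadratic cost J = x'x with x ~ N(0, I); any light-tailed cost works.
n, n_samples = 4, 200_000
x = rng.standard_normal((n_samples, n))
J = np.sum(x**2, axis=1)

theta = 0.05
risk_sensitive = -np.log(np.mean(np.exp(-theta * J))) / theta   # equation 5.43
second_order = J.mean() - 0.5 * theta * J.var()                 # equation 5.47
print(risk_sensitive, second_order)    # the two values agree to O(theta^2)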
5.6 Applications

An application of risk-sensitive control to satellite attitude maneuver is given in this section. An application of minimal cost variance control to earthquake structure control is also given here. For linear-quadratic-Gaussian applications, see Anderson and Moore (1989), Fleming and Rishel (1975), and Kwakernaak and Sivan (1972). For more risk-sensitive control examples, refer to Bensoussan (1992) and Whittle (1996).

5.6.1 Risk-Sensitive Control Applied to Satellite Attitude Maneuver

This subsection shows the simulation results associated with the model of a geostationary satellite equipped with a bias momentum wheel on the third axis of the body frame. This model assumes that the disturbance torque is Gaussian white noise. A stochastic RS controller is then applied. For this model, small attitude angles are assumed, and the roll/yaw dynamics are assumed to be decoupled from the pitch dynamics.

A roll/yaw attitude model of the geostationary satellite is simplified, under the assumption hw >> max{Iii ωc}, to the linear stochastic differential equations:

    dx(t) = As x(t)dt + Bs m(t)dt + dw(t),     (5.48)
    dy(t) = I4×4 x(t)dt + dv(t),     (5.49)

where the entries of As are built from the wheel momentum terms hwωc/Iii and hw/Iii, and the input column Bs contains the terms cos(α) and sin(α) scaled by the inertias. The dw/dt is Gaussian white noise representing the disturbance torque, dv/dt is Gaussian white noise representing the measurement noise, hw is the wheel momentum, α is the angle that the positive roll axis makes with the magnetic torquer, ωc is the orbital rate, Iii is the moment of inertia of the ith axis, x = [γ, r, γ̇, ṙ]' is the state with yaw (γ), roll (r), and their rates, m is the dipole moment of the magnetic torquer (the control), Be = 1.07 × 10⁻⁷ tesla is the nominal magnetic field strength, and I4×4 is an identity matrix of dimension four. The expected value of dw/dt is zero with E{dw/dt × dw/dt'} = 0.7Be, and the expected value of dv/dt is zero with E{dv/dt × dv/dt'} = 1 × 10⁻⁷. Here θ = 5 × 10⁻² was chosen for demonstration purposes, but this risk-sensitivity parameter θ should be viewed as another design parameter, just like the weighting matrices Q and R. By varying θ, different performance and stability results can be obtained. Theoretically, all θ that give a solution to the Riccati equation 5.36 are possible. The next example shows how to choose this risk-sensitivity parameter to obtain a larger stability margin. The constants for the operational mode are given as I11 = 1988 kg·m², I22 = 1876 kg·m², I12 = I21 = 0, hw = 55 kg·m²/s, ωc = 0.00418 deg/s, and α = 60 deg. These values are actual parameters of the geostationary satellite. The initial condition is [0.5 deg, 0, 0, 0.007 deg/sec]. Finally, the weighting matrices are chosen to be Q = I4×4 and R = 1 × 10⁻¹⁰.

In this model, the states are measured with the sensor noise dv/dt. A Kalman filter is then used to estimate the states. The following simulations are performed using MATLAB, a software package. The RS controller is found using equation 5.35. Note that both yaw and roll angles reduce to a value close to the origin. Figure 5.2 shows the roll and yaw angles with respect to time. After about 3 hours, both roll and yaw angles stay below 0.1 degree. Initially, a large control action is needed, but after 3 hours or so, less than 300 A·m² of magnetic torque is required.

FIGURE 5.2 Roll (Dark) and Yaw (Light) Versus Time [hours], RS Control
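The remark above that any θ admitting a Riccati solution is a candidate design value suggests a simple sweep: for each θ, solve the θ-modified Riccati equation in steady state, form the state-feedback gain of equation 5.35, and record a stability-margin proxy for the closed loop. The sketch below does this under illustrative assumptions; the satellite model matrices are not reproduced here, so generic placeholders are used.

import numpy as np
from scipy.linalg import solve_continuous_are

def rs_margin(A, B, Q, R, W, theta):
    """Solve the steady-state form of equation 5.34 for one value of theta,
    form K = R^-1 B' Pi, and return the distance of the closed-loop
    eigenvalues of A - B K from the imaginary axis."""
    S = B @ np.linalg.solve(R, B.T) + theta * W
    evals, U = np.linalg.eigh(S)                       # factor S = Bt Bt' (S is PSD)
    Bt = U @ np.diag(np.sqrt(np.clip(evals, 0.0, None)))
    Pi = solve_continuous_are(A, Bt, Q, np.eye(Bt.shape[1]))
    K = np.linalg.solve(R, B.T @ Pi)
    return -np.max(np.linalg.eigvals(A - B @ K).real)  # > 0 means the loop is stable

# Illustrative sweep; replace A, B, Q, R, W with the model of interest.
A = np.array([[0.0, 1.0], [3.0, -0.2]])
B = np.array([[0.0], [1.0]])
Q, R, W = np.eye(2), np.array([[1.0]]), np.eye(2)
for theta in (0.0, 0.05, 0.5, 1.0):
    print(theta, rs_margin(A, B, Q, R, W, theta))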
It is important to note that despite the external disturbances, the RS control law produces good performance.

To compare the results with the well-known LQG controller, the system was simulated with an LQG controller, obtained by letting θ approach zero in equation 5.36; note that equation 5.36 becomes a classical Riccati equation as θ goes to zero. The result is shown in Figure 5.3. Note that in the LQG case, it takes longer for the yaw and roll angles to fall below 0.1 degree, and the variation in the angles is larger than in the RS case. Thus, in this sense, the RS controller outperforms the LQG controller.

FIGURE 5.3 Roll (Dark) and Yaw (Light) Versus Time [hours], LQG Control

5.6.2 MCV Control Applied to Seismic Protection of Structures

A 3DOF single-bay structure with an active tendon controller, as shown in Figure 5.4, is considered here. The structure is subject to a one-dimensional earthquake excitation. If a simple shear frame model for the structure is assumed, the governing equations of motion in state-space form can be written as:

    dx(t) = [    0           I    ] x(t)dt + [   0    ] u(t)dt + [  0  ] dw(t),
            [ −Ms⁻¹Ks   −Ms⁻¹Cs ]           [ Ms⁻¹Bs ]           [ −Γs ]

where the following apply:

    Ms = [ m1   0    0  ]        Bs = [ −4kc cos α ]
         [ 0    m2   0  ],            [      0     ]
         [ 0    0    m3 ]             [      0     ]

    Cs = [ c1+c2   −c2     0  ]    Ks = [ k1+k2   −k2     0  ]
         [ −c2    c2+c3   −c3 ],        [ −k2    k2+k3   −k3 ]
         [ 0       −c3     c3 ]         [ 0       −k3     k3 ]

The mi, ci, ki are the mass, damping, and stiffness, respectively, associated with the ith floor of the building. The kc is the stiffness of the tendon. The Brownian motion w(t) has E{dw(t)} = 0 and E{dw(t)dw'(t)} = W dt; in this example, W = 1.00 × 2π in²/sec³.
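The block matrices above can be assembled directly into the state-space pair used by the controller design. A minimal sketch, assuming placeholder numerical values for the masses, dampings, stiffnesses, and tendon angle (the chapter states only that the parameters were matched to an experimental structure):

import numpy as np

def tendon_structure_state_space(m, c, k, kc, alpha):
    """Assemble Ms, Cs, Ks, Bs for the three-story shear frame with an active
    tendon and return the (A, B) pair of the displayed state-space equation."""
    m1, m2, m3 = m
    c1, c2, c3 = c
    k1, k2, k3 = k
    Ms = np.diag([m1, m2, m3])
    Cs = np.array([[c1 + c2, -c2, 0.0],
                   [-c2, c2 + c3, -c3],
                   [0.0, -c3, c3]])
    Ks = np.array([[k1 + k2, -k2, 0.0],
                   [-k2, k2 + k3, -k3],
                   [0.0, -k3, k3]])
    Bs = np.array([[-4.0 * kc * np.cos(alpha)], [0.0], [0.0]])
    Minv = np.linalg.inv(Ms)
    A = np.block([[np.zeros((3, 3)), np.eye(3)],
                  [-Minv @ Ks, -Minv @ Cs]])
    B = np.vstack([np.zeros((3, 1)), Minv @ Bs])
    return A, B

# Placeholder numbers purely for illustration (not the experimental values).
A, B = tendon_structure_state_space(m=(1.0, 1.0, 1.0),
                                    c=(0.1, 0.1, 0.1),
                                    k=(100.0, 100.0, 100.0),
                                    kc=10.0, alpha=np.deg2rad(36.0))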

FIGURE 5.4 Schematic Diagram for Three-Degree-of-Freedom Structure (floor displacements x1(t), x2(t), x3(t); masses m1, m2, m3; stiffness/damping pairs ki, ci; tendon at the base)

The parameters were chosen to match the modal frequencies and dampings of an experimental structure. The cost criterion is given by:

    J = ∫ (z'(t)Ks z(t) + kc u²(t))dt,

together with R = kc, where z is the vector of floor displacements and x = [z' ż']'.

Figure 5.5 shows that the variance of the cost criterion decreases as γ increases. Note that the γ = 0 point corresponds to the classical LQG case.

FIGURE 5.5 Optimal Variance Versus Gamma: Full-State Feedback, MCV, 3DOF

Figure 5.6 shows the RMS displacement responses of the first (σx1), second (σx2), and third (σx3) floors and the RMS velocity responses of the first (σx4), second (σx5), and third (σx6) floors, respectively, versus the MCV parameter γ. It is important to note that both the third-floor RMS displacement and velocity responses can be decreased by choosing a large γ.

FIGURE 5.6 Displacements and Velocities Versus Gamma: Full-State Feedback, MCV, 3DOF
5.7 Conclusions

This chapter describes linear-quadratic-Gaussian (LQG), minimal cost variance (MCV), and risk-sensitive (RS) control in terms of cost cumulants. Cost-cumulant control, which is also called statistical control, views the optimization criterion as a random variable and minimizes any cumulant of the optimization criterion. Then LQG, MCV, and RS are all special cases of cost-cumulant control, where in LQG the mean, in MCV the variance, and in RS all cumulants of the cost function are optimized. This chapter provides the optimal controllers for the LQG, MCV, and RS methods. Finally, a satellite attitude control application using the RS controller and a building control application using the MCV controller are described.

References

Anderson, B.D.O., and Moore, J.B. (1989). Optimal control: Linear quadratic methods. Englewood Cliffs, NJ: Prentice Hall.
Basar, T., and Bernhard, P. (1991). H∞-optimal control and related minimax design problems. Boston: Birkhauser.
Bensoussan, A. (1992). Stochastic control of partially observable systems. Cambridge: Cambridge University Press.
Davis, M.H.A. (1977). Linear estimation and stochastic control. London: Halsted Press.
Fleming, W.H., and Rishel, R.W. (1975). Deterministic and stochastic optimal control. New York: Springer-Verlag.
Glover, K., and Doyle, J.C. (1988). State-space formulae for all stabilizing controllers that satisfy an H∞-norm bound and relations to risk sensitivity. Systems and Control Letters 11, 167-172.
Jacobson, D.H. (1973). Optimal stochastic linear systems with exponential performance criteria and their relationship to deterministic differential games. IEEE Transactions on Automatic Control AC-18, 124-131.
Kwakernaak, H., and Sivan, R. (1972). Linear optimal control systems. New York: John Wiley & Sons.
Rhee, I., and Speyer, J. (1992). Application of a game theoretic controller to a benchmark problem. Journal of Guidance, Control, and Dynamics 15(5), 1076-1081.
Runolfsson, T. (1994). The equivalence between infinite-horizon optimal control of stochastic systems with exponential-of-integral performance index and stochastic differential games. IEEE Transactions on Automatic Control 39(8), 1551-1563.
Sain, M.K., and Liberty, S.R. (1971). Performance measure densities for a class of LQG control systems. IEEE Transactions on Automatic Control AC-16(5), 431-439.
Sain, M.K., Won, C.H., and Spencer, Jr., B.F. (1992). Cumulant minimization and robust control. In Duncan, T.E., and Pasik-Duncan, B. (Eds.), Stochastic theory and adaptive control, Lecture Notes in Control and Information Sciences 184. Berlin: Springer-Verlag, pp. 411-425.
Sain, M.K., Won, C.H., Spencer, Jr., B.F., and Liberty, S.R. (2000). Cumulants and risk-sensitive control: A cost mean and variance theory with application to seismic protection of structures. In Filar, J.A., Gaitsgory, V., and Mizukami, K. (Eds.), Advances in dynamic games and applications, Annals of the International Society of Dynamic Games, Vol. 5. Boston: Birkhauser.
Uchida, K., and Fujita, M. (1989). On the central controller: Characterizations via differential games and LEQG control problems. Systems and Control Letters 15(1), 9-13.
Whittle, P. (1996). Optimal control: Basics and beyond. New York: John Wiley & Sons.
Won, C.H. (1995). Cost cumulants in risk-sensitive and minimal cost variance control. Ph.D. dissertation, University of Notre Dame.
