Hyo-Sung Ahn
Propagation:
$$\bar{P}_k = A_{k-1} P_{k-1} A_{k-1}^{T} + GQG^{T} \tag{5}$$
Correction:
$$K_k = \bar{P}_k C_k^{T}\,(C_k \bar{P}_k C_k^{T} + R)^{-1}$$
$$\hat{x}_k = \bar{x}_k + K_k (z_k - C_k \bar{x}_k)$$
$$P_k = (I - K_k C_k)\,\bar{P}_k \tag{6}$$
where $P_k$ is the covariance of the estimated state error, $K_k$ is the Kalman gain matrix, $\bar{(\cdot)}$ stands for the propagated state, and $\hat{(\cdot)}$ stands for the corrected state (corrected by the Kalman gain matrix).
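The propagation-correction cycle (5)-(6) can be sketched numerically. A minimal Python/NumPy sketch, assuming an illustrative two-state system (the matrices A, C, G, Q, R below are invented for the sketch, not taken from the paper):

```python
import numpy as np

# Illustrative two-state system; all matrix values here are invented.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
G = np.eye(2)
Q = 1e-3 * np.eye(2)    # process-noise covariance
R = np.array([[1e-2]])  # measurement-noise covariance

def kf_step(x_hat, P, z):
    """One propagation (5) / correction (6) cycle of the discrete Kalman filter."""
    # Propagation: x_bar = A x_hat,  P_bar = A P A^T + G Q G^T
    x_bar = A @ x_hat
    P_bar = A @ P @ A.T + G @ Q @ G.T
    # Correction: K = P_bar C^T (C P_bar C^T + R)^{-1}
    K = P_bar @ C.T @ np.linalg.inv(C @ P_bar @ C.T + R)
    x_hat = x_bar + K @ (z - C @ x_bar)
    P = (np.eye(2) - K @ C) @ P_bar
    return x_hat, P

x_hat, P = np.zeros((2, 1)), np.eye(2)
x_hat, P = kf_step(x_hat, P, z=np.array([[0.5]]))
```

One correction step sharply reduces the covariance of the measured state component, as expected from (6).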
C. Stochastic SVILC Frameworks
Now, we show that SVILC can be designed based on the above discrete Kalman filter structure along the iteration axis. It is well known that SVILC is updated by the following mechanism [3]:
$$U_{k+1} = U_k + \Gamma_k E_k \tag{7}$$
$$Y_k = H U_k \tag{8}$$
where $U_k \in \mathbb{R}^{n}$ is the control input at the $k$th iteration; $\Gamma_k \in \mathbb{R}^{n\times n}$ is the learning gain matrix; $E_k \in \mathbb{R}^{n}$ is the error vector at the $k$th iteration, which is calculated by $E_k = Y_d - Y_k$; $H \in \mathbb{R}^{n\times n}$ is the Markov matrix, which is Toeplitz; and $Y_k \in \mathbb{R}^{n}$ is the output. Notice that in (4)-(6), $k$ represents the time point, but in (7)-(8), $k$ represents the iteration axis. It is a standard lifting technique in ILC to transform a system operated repetitively as per the ILC paradigm into a multivariable system of the form (7)-(8), where the iteration index can be viewed as a discrete-time index. We now discuss extending that representation into the form of (4)-(6).
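The lifting step can be made concrete with a small sketch, assuming a hypothetical first-order plant (the values of a, b, c and the trial length n are invented for illustration): the lifted Markov matrix $H$ is lower-triangular Toeplitz with entries $c\,a^{i-j}b$, and applying it to the lifted input reproduces a time-domain simulation of one trial.

```python
import numpy as np

# Hypothetical first-order plant x_{t+1} = a x_t + b u_t, y_t = c x_t
# (values invented for illustration only).
a, b, c = 0.8, 1.0, 1.0
n = 5  # number of time points in one trial

# Lifted Markov (Toeplitz) matrix: H[i, j] = c * a**(i-j) * b for j <= i
H = np.array([[c * a**(i - j) * b if j <= i else 0.0 for j in range(n)]
              for i in range(n)])

# Verify Y = H U against a direct time-domain simulation of one trial
rng = np.random.default_rng(0)
U = rng.standard_normal(n)          # lifted input [u_0, ..., u_{n-1}]
x, Y = 0.0, np.zeros(n)
for t in range(n):
    x = a * x + b * U[t]
    Y[t] = c * x                    # lifted output [y_1, ..., y_n]
Y_lifted = H @ U
```

The simulated output and the lifted product agree exactly, which is the content of (8).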
To continue, noting that $Y_{k+1} = HU_{k+1}$ and defining $\delta u_k \triangleq U_{k+1} - U_k$, we obtain the following relationship:
$$E_{k+1} = E_k - H\,\delta u_k \tag{9}$$
Let us suppose that the measured state at the $k$th iteration is $Y_k$ and during this measurement there exists measurement noise $w_k$ with $w_k \sim N(0, R)$. Now introducing the measurement noise $w_k$, the measured error is
$$\tilde{E}_k = Y_d - Y_k - w_k = E_k - w_k. \tag{10}$$
In (9), $\delta u_k = U_{k+1} - U_k$ and, from (7), since the measured error is used for the control update, we obtain $U_{k+1} - U_k = \Gamma_k \tilde{E}_k$. Let us suppose that during the control law update there is process noise $v_k$, which is zero-mean Gaussian process noise with covariance $Q'$, i.e., $v_k \sim N(0, Q')$. Then, by writing $U_{k+1} = U_k + \Gamma_k \tilde{E}_k + v_k$, the control update law is found as
$$U_{k+1} = U_k + \Gamma_k (E_k - w_k) + v_k = U_k + \Gamma_k E_k - \Gamma_k w_k + v_k \tag{11}$$
Now, denoting $-\Gamma_k w_k + v_k \triangleq \tilde{v}_k$, we have $U_{k+1} = U_k + \Gamma_k E_k + \tilde{v}_k$, which yields $\delta u_k = \Gamma_k E_k + \tilde{v}_k$. Now, from
$$E\!\left[(-\Gamma_k w_k + v_k)(-\Gamma_k w_k + v_k)^{T}\right] = \Gamma_k E(w_k w_k^{T})\Gamma_k^{T} + E(v_k v_k^{T}) = \Gamma_k R \Gamma_k^{T} + Q' = Q,$$
we simply consider $\tilde{v}_k$ as zero-mean Gaussian process noise with covariance $Q$, i.e., $\tilde{v}_k \sim N(0, Q)$. Next, denoting $\Gamma_k E_k \triangleq V_k$, we have
$$E_{k+1} = E_k - HV_k - H\tilde{v}_k. \tag{12}$$
Hence, from (10) and (12), the noise-driven ILC system can be formulated in state-space form along the iteration axis as:
$$E_{k+1} = AE_k + BV_k + G\tilde{v}_k$$
$$\tilde{E}_k = CE_k - w_k. \tag{13}$$
The following technical assumption is used throughout the
paper.
Assumption 2.1: In (10)-(13), it is assumed that the process and measurement noises $\tilde{v}_k$ and $w_k$ are zero-mean white Gaussian noises on the iteration axis.²
Remark 2.1: It is worthwhile to note that the measurement noise $w_k$ may come from the initial state reset error. When $x_k(0)$ is not fixed,³ (8) should be changed as:
$$Y_k = HU_k + Wx_k(0) \tag{14}$$
where $x_k(0)$ is the initial state and $W = [v_1^{T}, v_2^{T}, \ldots, v_{n+1}^{T}]^{T}$ with $v_i = CA^{i}$. Then, we can write $x_k(0) = \bar{x}_k(0) + \delta x_k(0)$, where $\bar{x}_k(0)$ is the nominal initial state and $\delta x_k(0)$ is the initial state error. Then, as done in [22], assuming zero-mean Gaussian $\delta x_k(0)$, we can simply define $W\delta x_k(0) = w_k$. Thus, our stochastic SVILC frame can effectively handle the initial state error problem.
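Remark 2.1's decomposition can be checked on a toy plant. A sketch assuming the same hypothetical scalar plant used earlier (all values invented): the rows of $W$ are $CA^{i}$, and a nonzero initial state shows up exactly as the additive term $Wx_k(0)$ in (14).

```python
import numpy as np

# Hypothetical scalar plant (values invented for illustration).
a, b, c, n = 0.8, 1.0, 1.0, 5
H = np.array([[c * a**(i - j) * b if j <= i else 0.0 for j in range(n)]
              for i in range(n)])
W = np.array([c * a**(i + 1) for i in range(n)])  # rows v_i = C A^i

rng = np.random.default_rng(1)
U = rng.standard_normal(n)
x0 = 0.3                                          # nonzero initial state

x, Y = x0, np.zeros(n)
for t in range(n):
    x = a * x + b * U[t]
    Y[t] = c * x

Y_lifted = H @ U + W * x0                         # (14): Y_k = H U_k + W x_k(0)
```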
Next, by comparing (13) to (4) and matching $A = I$, $B = -H$, $x_k = E_k$, $u_k = V_k$, $G = -H$, $z_k = \tilde{E}_k$, and $C = I$, we can derive a standard Kalman filter in the iteration domain given by:
Propagation:
$$\bar{E}_k = A\hat{E}_{k-1} + BV_{k-1}$$
$$\bar{P}_k = A P_{k-1} A^{T} + GQG^{T} \tag{15}$$
Correction:
$$K_k = \bar{P}_k C^{T}(C\bar{P}_k C^{T} + R)^{-1}$$
$$\hat{E}_k = \bar{E}_k + K_k(z_k - C\bar{E}_k)$$
$$P_k = (I - K_k C)\bar{P}_k \tag{16}$$
Finally, with $A = C = I$, $B = -H$, $V_k = \Gamma\hat{E}_k$, $G = -H$, we have the following recursive update formula for stochastic SVILC:
$$\bar{E}_k = \hat{E}_{k-1} - H\Gamma\hat{E}_{k-1} \tag{17}$$
$$\bar{P}_k = P_{k-1} + HQH^{T} \tag{18}$$
$$K_k = \bar{P}_k(\bar{P}_k + R)^{-1} \tag{19}$$
$$\hat{E}_k = \bar{E}_k + K_k(z_k - \bar{E}_k) \tag{20}$$
$$P_k = (I - K_k)\bar{P}_k \tag{21}$$
where $z_k = \tilde{E}_k$. Observe that in the algorithm (17)-(21), the inputs are $\hat{E}_{k-1}$ and $z_k$, and the output is $\hat{E}_k$.
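The recursion (17)-(21) can be exercised end-to-end on a toy lifted plant. A sketch, assuming the hypothetical first-order plant used earlier together with illustrative $\Gamma$, $Q$, $R$ (none of these values are from the paper); with $\Gamma = H^{-1}$, the term $I - H\Gamma$ vanishes and the filter simply smooths the measured error:

```python
import numpy as np

# Hypothetical lifted plant and illustrative noise levels (not from the paper).
rng = np.random.default_rng(0)
a, b, c, n = 0.8, 1.0, 1.0, 5
H = np.array([[c * a**(i - j) * b if j <= i else 0.0 for j in range(n)]
              for i in range(n)])
Gamma = np.linalg.inv(H)              # learning gain; here ||I - H Gamma|| = 0
Q = 1e-4 * np.eye(n)                  # covariance of the process noise (illustrative)
R = 1e-2 * np.eye(n)                  # covariance of the measurement noise (illustrative)

Yd = 5 * np.sin(np.linspace(0, 2 * np.pi, n))   # reference trajectory
U = np.zeros(n)
E_hat = Yd - H @ U                    # initial corrected error estimate
P = np.eye(n)
I = np.eye(n)

for k in range(50):
    # Control update (7) with process noise, then the true and measured errors (10)
    U = U + Gamma @ E_hat + rng.multivariate_normal(np.zeros(n), Q)
    E_true = Yd - H @ U
    z = E_true + rng.multivariate_normal(np.zeros(n), R)
    # Propagation (17)-(18)
    E_bar = E_hat - H @ Gamma @ E_hat
    P_bar = P + H @ Q @ H.T
    # Correction (19)-(21)
    K = P_bar @ np.linalg.inv(P_bar + R)
    E_hat = E_bar + K @ (z - E_bar)
    P = (I - K) @ P_bar
```

After a few trials the true tracking error settles near the noise floor, far below its initial value.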
III. MAIN RESULTS
In this section, it will be shown that the base-line error of the SVILC system along the iteration axis can be estimated from the developed algorithm. To obtain this result we need to show that $\bar{P}_k$ converges as $k \to \infty$. For this purpose, the following lemmas are developed first.
²Observe that, in our paper, it is assumed that zero-mean Gaussian noise is on the iteration axis. So, in SVILC, assuming zero-mean Gaussian noise on the time axis is a redundancy.
³In ILC, as written in (3), it is generally required that the initial state is fixed at the same place. However, in practice, it could be different at each iteration.
Lemma 3.1: If the covariance matrix of the process noise vector is given as $Q_{ii} \neq 0$ if $i = j$, $Q_{ij} = 0$ if $i \neq j$, and $CB$ is full rank, then $HQH^{T}$ is nonsingular.
Proof: If $CB$ is full rank, $H$ is nonsingular and $Q$ is positive definite. So, from $\det(HQH^{T}) = \det H \det Q \det H^{T}$, since $\det H \neq 0$ and $\det Q \neq 0$, we have $\det(HQH^{T}) \neq 0$. Therefore, $HQH^{T}$ is nonsingular.
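Lemma 3.1 can be checked numerically on the toy lifted plant used earlier (illustrative values; a sanity check, not part of the paper): with $CB \neq 0$ the lower-triangular Toeplitz $H$ is nonsingular, and a diagonal $Q$ with nonzero diagonal is positive definite.

```python
import numpy as np

# Toy lifted plant (illustrative values).
a, b, c, n = 0.8, 1.0, 1.0, 5
H = np.array([[c * a**(i - j) * b if j <= i else 0.0 for j in range(n)]
              for i in range(n)])
Q = np.diag([1e-4] * n)

M = H @ Q @ H.T
# det(H Q H^T) = det(H) det(Q) det(H^T) != 0, so M is nonsingular
rank_M = np.linalg.matrix_rank(M)
```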
Lemma 3.2: Under the same conditions as Lemma 3.1, if $A$ is any covariance matrix, then $A + HQH^{T}$ is positive definite (p.d.), i.e., $A + HQH^{T} > 0$.
Proof: Since $A + HQH^{T}$ is symmetric, by the definition of positive definiteness it suffices to show that $x^{T}(A + HQH^{T})x > 0$ for every nonzero vector $x$. Since any covariance matrix is positive definite or at least positive semi-definite, $x^{T}Ax \geq 0$. Therefore, if $x^{T}HQH^{T}x > 0$ for all nonzero $x$, then $A + HQH^{T}$ is positive definite. Let us write the following relationship:
$$x^{T}(A + HQH^{T})x = x^{T}Ax + x^{T}HQH^{T}x = x^{T}Ax + y^{T}Qy$$
where we used $y = H^{T}x$. Here, noticing $Q > 0$, because it is a diagonal matrix with nonzero diagonal and zero off-diagonal terms, $y^{T}Qy > 0$ for all nonzero $y$. It thus remains to show that $y \neq 0$ is enforced, i.e., that $y = 0$ if and only if $x = 0$. The column vectors and row vectors of $H$ are linearly independent because $H$ is nonsingular. Hence, from
$$H^{T}x = p_1 x_1 + p_2 x_2 + \cdots + p_n x_n$$
where $[p_1, p_2, \ldots, p_n] = H^{T}$ and $[x_1, x_2, \ldots, x_n]^{T} = x$, we know that only the trivial solution $x_1 = x_2 = \cdots = x_n = 0$ makes $H^{T}x = 0$. Therefore, for all nonzero $y$, $y^{T}Qy > 0$. This completes the proof.
Now, using the lemmas given above, let us develop the following theorem.
Theorem 3.1: If there exists a solution $X$ for the following algebraic Riccati equation (ARE):
$$AX + XA - XBX + C = 0 \tag{22}$$
with $A = -\frac{1}{2}I$, $B = (HQH^{T})^{-1}$, and $C = R$, then
$$\lim_{k\to\infty}\bar{P}_k = \bar{P}^{*} \triangleq X + HQH^{T}.$$
Proof: The proof can be performed using Lemma 3.1 and Lemma 3.2. Due to page limitations, we omit a detailed proof. For more detail, refer to [23].
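Theorem 3.1's steady-state claim can be probed numerically by iterating the covariance recursion (18), (19), (21) in the scalar case, where $HQH^{T}$ reduces to a scalar $m$. A sketch with invented values; combining (18) and (21) at a fixed point gives $K\bar{P} = m$, equivalently $\bar{P}^2 - m\bar{P} - mR = 0$.

```python
# Scalar covariance recursion (18), (19), (21); m stands in for H Q H^T and all
# numeric values are illustrative.
m, R = 1e-4, 1e-2
P = 1.0                      # corrected covariance, arbitrary start
for k in range(200):
    P_bar = P + m            # (18)
    K = P_bar / (P_bar + R)  # (19)
    P = (1 - K) * P_bar      # (21)

# At the fixed point, K * P_bar = m, i.e. P_bar**2 - m*P_bar - m*R = 0
residual = P_bar**2 - m * P_bar - m * R
```

The recursion converges quickly, and the limit satisfies the quadratic fixed-point equation to machine precision.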
In the next theorem, we show that there always exists a unique solution of the ARE of Theorem 3.1. For this proof, we use a well-known existence condition for the continuous ARE, which is summarized in the following lemma.
Lemma 3.3: [24] For the following ARE
$$A^{T}K + KA - KBB^{T}K + Q = 0$$
with $Q = C^{T}C$, if $(A,B)$ is stabilizable and $(A,C)$ is observable, then there exists a unique $K = K^{T} > 0$.
Theorem 3.2: In stochastic SVILC, if $CB$ is full rank, then there always exists a unique p.d. solution $X$ of (22).
Proof: Let us define matrices
$$V_{ii} = \frac{1}{\sqrt{Q_{ii}}}, \quad V_{ij} = 0 \ \text{when} \ i \neq j$$
$$U_{ii} = \sqrt{Q_{ii}}, \quad U_{ij} = 0 \ \text{when} \ i \neq j$$
$$A := -\tfrac{1}{2}I; \quad B := (H^{-1})^{T}V; \quad C := U$$
Then, we know that there always exists a matrix $K$ such that $\lambda(A + BK) < 0$, because $B$ is invertible. Also, since $A$ and $C$ are nonzero diagonal matrices, $(A,C)$ is obviously observable. Therefore, by Lemma 3.3, there exists a positive definite solution $X$.
The following remark is provided regarding future research w.r.t. the ARE of (22).
Remark 3.1: From [25], in addition to the conditions of Lemma 3.3, if $(A,B)$ is controllable, then we always have $\lambda(A - BB^{T}K) < 0$. Since $(A,B)$ is also controllable, we have
$$\lambda\!\left(-\tfrac{1}{2}I - (HQH^{T})^{-1}X\right) < 0 \tag{23}$$
Then, using (23), we can find an analytical condition for $X - HQH^{T} > 0$. But this remains a topic for future work.
The condition for convergence is always an important consideration in ILC. The convergence condition of the stochastic SVILC system can be established and, based on this convergence condition, the base-line error of the stochastic SVILC can be subsequently estimated. Furthermore, under more restrictive conditions it can be shown that, as a special case, the ILC convergence can be monotonic. In the remainder of this section we first give a theorem that provides an upper bound for the base-line error, and we then give a result that establishes conditions for monotonic convergence. Before proceeding further, for the analytical solution, we need a definition regarding the norm of a noise vector.
Definition 3.1: The stochastic norm of the measurement noise $w_k$ ($w_k \sim N(0, R)$), which is a zero-mean white Gaussian process, is defined as $\|w_k\| = \sqrt{\sum_{i=1}^{n} R_{ii}}$, where $R_{ii}$ represents the $i$th diagonal term of the noise covariance matrix.⁴
In the following theorems, without notational confusion, when we use terminology in a stochastic sense, it implies that the stochastic norm is used. Also, note that the symbol $\|\cdot\|$ is used to denote the $L_2$ norm.
Theorem 3.3: Defining $S_{\bar{P}^{*}} \triangleq \sqrt{\sum_{i=1}^{n}(\bar{P}^{*})_{ii}}$, if $\|I - H\Gamma\| < 1$, the estimated base-line error converges, in a stochastic sense, to within an upper bound given by:
$$\frac{1}{1 - \|I - H\Gamma\|}\left\|(I - K^{*})^{-1}K^{*}\right\|\left(S_{\bar{P}^{*}} + \|w_k\|\right)$$
where $K^{*} \triangleq \bar{P}^{*}(\bar{P}^{*} + R)^{-1}$ and $\bar{P}^{*} = P^{*} + HQH^{T}$.
Proof: Due to page limitations, we omit the proof. For more detail, refer to [23].
⁴In fact, it is not possible to define a norm for Gaussian noise processes, because the $L_2$ or $L_\infty$ norms could be unbounded. But using the expected values of the measurement noise enables us to bound the norm of the noise vector stochastically. This assumption is practically acceptable due to Remark 2.1.
Remark 3.2: Theorem 3.3 shows that as $K^{*} \to 0$, the upper bound becomes smaller. Since $\bar{P}^{*} = P^{*} + HQH^{T}$ and $K^{*} = \bar{P}^{*}(\bar{P}^{*} + R)^{-1}$, and since $\bar{P}^{*}$ is known and $P_k$ bounds $E_k - \hat{E}_k$, the upper bound of the actual error can be estimated by: $\|E_k\| \leq \|\hat{E}_k\| + S_{\bar{P}^{*}}$.
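The quantities in Definition 3.1, Theorem 3.3, and Remark 3.2 can be evaluated arithmetically. A scalar-style sketch with invented numbers (the scalar reduction and every value below are assumptions for illustration only):

```python
import numpy as np

# Illustrative, scalar-style evaluation of the Theorem 3.3 bound.
n = 5
R = 1e-2 * np.eye(n)
w_norm = np.sqrt(np.sum(np.diag(R)))   # stochastic norm of Definition 3.1
P_bar_star = 1.05e-3                   # steady-state propagated covariance (stand-in)
K_star = P_bar_star / (P_bar_star + 1e-2)        # scalar version of (19)
S = np.sqrt(n * P_bar_star)            # S term from the diagonal of P_bar_star * I
contraction = 0.5                      # stands in for ||I - H Gamma|| < 1

bound = (1.0 / (1.0 - contraction)) * (K_star / (1.0 - K_star)) * (S + w_norm)
```

With a small $K^{*}$ the resulting bound is small, illustrating the tendency noted in Remark 3.2.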
The following is for monotonic convergence.
Theorem 3.4: Let us suppose that the error covariance matrix is in steady-state, and the iteration is at the $(k-1)$th trial. If $S_{\bar{P}^{*}} + \|w_k\| < \|\hat{E}_{k-1}\|$ and
$$\|I - H\Gamma\| + \left\|(I - K^{*})^{-1}K^{*}\right\| < 1,$$
then the estimated error vector can, in a stochastic sense, be monotonically convergent, i.e., $\|\hat{E}_k\| < \|\hat{E}_{k-1}\|$.
Proof: Due to page limitations, we omit the proof. For more detail, refer to [23].
Remark 3.4: Theorem 3.4 also shows that as $K^{*} \to 0$, the monotonic convergence condition can be easily satisfied. Further, we see that the learning gain matrix $\Gamma = H^{-1}$ is best for monotonic convergence of the stochastic SVILC system, as is the case in all first-order ILC algorithms.
Theorem 3.4 provides an important design strategy. Let us suppose that we have designed an ILC learning gain $\Gamma$ such that
$$\|I - H\Gamma\| + \left\|(I - K^{*})^{-1}K^{*}\right\| < 1$$
and further suppose that the system is in steady-state. Then, at the $(k-1)$th trial, even if the estimation error $\|\hat{E}_{k-1}\|$ is very big due to unexpected noises (more than $S_{\bar{P}^{*}} + \|w_k\| = S_{\bar{P}^{*}} + \sqrt{\sum_{i=1}^{n} R_{ii}}$), the estimated error is enforced to decrease at the $k$th iteration trial, which is also an expected result from Theorem 3.3 in a stochastic sense. Thus, from these observations, we make a final remark:
Remark 3.5: From Theorem 3.3, it was shown that $\|\hat{E}_k\|$ and $\|E_k\|$ can be upper-bounded, and from Theorem 3.4, when $\|\hat{E}_k\|$ is bigger than a specified value (but after the first convergence to the specified value is achieved), the monotonic convergence condition is enforced. But both theorems are associated with the learning gain matrix $\Gamma$. So, by properly designing $\Gamma$ off-line, a specified design requirement can be achieved.
IV. EXAMPLE
In this section, an example is provided to illustrate the validity of the suggested Kalman filter augmented SVILC scheme. The following discrete system is used, which was given in [26]:
$$x_{t+1} = \begin{bmatrix} 0.50 & 0.00 & 0.00 \\ 1.00 & 1.24 & 0.87 \\ 0.00 & -0.87 & 0.00 \end{bmatrix} x_t + \begin{bmatrix} 1.0 \\ 0.0 \\ 0.0 \end{bmatrix} u_t \tag{24}$$
$$y_t = \begin{bmatrix} 2.0 & 2.6 & 2.8 \end{bmatrix} x_t, \tag{25}$$
with poles at $[\,0.62 + j0.62,\ 0.62 - j0.62,\ 0.50\,]$ and zeros at $[\,0.65,\ 0.71\,]$. In this test, we assume a zero initial condition and 10 discrete time points. The desired repetitive reference trajectory is $Y_d(j) = 5\sin(8.0(j-1)/10)$, $j = 1, \ldots, 10$. As explained in [26], (24)-(25) can be changed into the SVILC form (for a detailed explanation, refer to [26]). The same learning gain matrix $\Gamma$ as used in [26] is used, which was determined by an optimization method based on Lyapunov stability analysis. For random Gaussian noise, the Matlab command randn is used, with the covariance of $v_k$ taken as $1.0\times 10^{-4}$ and the covariance of $w_k$ taken as 1. That is, $\mathrm{diag}(Q')$ is very small. Fig. 1-Fig. 3 show the test results.
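The lifted Markov matrix for the example (24)-(25) can be reconstructed directly. A sketch (note: the sign of the 0.87 entry in the third row of the system matrix is reconstructed here from the stated complex poles and should be checked against [26]):

```python
import numpy as np

# Example system (24)-(25); the -0.87 sign is a reconstruction, not verbatim.
A = np.array([[0.50,  0.00, 0.00],
              [1.00,  1.24, 0.87],
              [0.00, -0.87, 0.00]])
B = np.array([[1.0], [0.0], [0.0]])
C = np.array([[2.0, 2.6, 2.8]])

n = 10  # discrete time points per trial
markov = [(C @ np.linalg.matrix_power(A, i) @ B).item() for i in range(n)]  # CB, CAB, ...
H = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1):
        H[i, j] = markov[i - j]

poles = np.linalg.eigvals(A)  # one real pole at 0.50 plus a complex pair
```

Here $CB = 2.0 \neq 0$ is full rank, so by Lemma 3.1 the Toeplitz matrix $H$ is nonsingular.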
In Fig. 1, the left sub-figure shows the norms of the tracking error $E_k = Y_d - Y_k$ w.r.t. iteration number for the cases with and without a Kalman filter along the iteration axis. The solid line is the calculated upper boundary of $\|E_k\|$. The right sub-figure shows the norms of $\hat{E}_k$ and the calculated upper boundary of $\|\hat{E}_k\|$ w.r.t. iteration number. The left figure includes the test results both with the Kalman filter and without the Kalman filter. As shown in this figure, with the Kalman filter, the base-line error is significantly reduced. The right sub-figure also shows the calculated upper bound of the estimated base-line error (dash-dot line, 0.9088) from Theorem 3.3. It can be observed that the actual base-line error is well bounded by the previously-calculated upper-boundary values.
Fig. 2 shows the covariance matrix of the estimated error vector. The left-top is the 3-D representation of the actual $P_k$ ($P_k$ is a $10\times 10$ matrix). The right-top is the 3-D representation of the estimated $P_k$ ($\hat{P}_k$). The left-bottom is the 3-D representation of $P_k - \hat{P}_k$. The right-bottom (top) is the norm of the covariance of the estimated error $\hat{P}_k$ w.r.t. iteration number; the right-bottom (bottom) is $\|P_k - \hat{P}_k\|$. As shown in these figures, $\hat{P}_k$ converges accurately to the previously calculated $P_k$. In particular, the right-bottom figures show $\hat{P}_k$ quantitatively, because we used $\|\hat{P}_k\|$. As shown in these figures, as the iteration number increases, the error of the estimated state (in this paper, $E_k - \hat{E}_k$) decreases, and after the 15th iteration there is a steady-state base-line error. As shown in the bottom figure, after the 15th iteration, $\|P_k - \hat{P}_k\|$ is almost zero. This means that the previously-estimated $\hat{P}_k$ is very accurate and reliable.
Fig. 3 shows the estimated error $\hat{E}_k$ and the actual error $E_k$ w.r.t. time and iteration. The left-top is the 3-D representation of the actual $E_k$. The right-top is the 3-D representation of the estimated $\hat{E}_k$. The bottom is the 3-D representation of $E_k - \hat{E}_k$. As shown in these figures, the suggested algorithm estimates $E_k$ reliably even though it is not perfect. The bottom difference corresponds to the difference between the left figure of Fig. 1 and the right figure of Fig. 1.
Fig. 1. Left: Norms of tracking error $E_k = Y_d - Y_k$ w.r.t. iteration number for cases with and without the Kalman filter in the iteration axis ("Without Kalman Filter", "With Kalman Filter"); the solid line is the calculated upper boundary of $\|E_k\|$ (1.4927). Right: Norms of $\hat{E}_k$ and the calculated upper boundary of $\|\hat{E}_k\|$ (0.9088) w.r.t. iteration number.
Fig. 2. Left-top: 3-D representation of the actual $P_k$ ($P_k$ is a $10\times 10$ matrix). Right-top: 3-D representation of the estimated $P_k$ ($\hat{P}_k$). Left-bottom: 3-D representation of $P_k - \hat{P}_k$. Right-bottom: (top) norm of the covariance of the estimated error $\hat{P}_k$ w.r.t. iteration number; (bottom) $\|P_k - \hat{P}_k\|$.
V. CONCLUSIONS AND FINAL REMARKS
In this paper, a new Kalman filter scheme for the SVILC system was developed. Our new method is expected to provide an effective ILC design scheme for systems with measurement noise at little extra implementation cost. Through a numerical example, the validity of our method was illustrated. The key idea of the suggested method is that the learning gain matrix can be determined a priori, while the output error is estimated on-line. As the main contributions of this paper, it was shown that the base-line error of the uncertain ILC system can be significantly reduced by the suggested stochastic ILC scheme, and that the upper bound of the estimated error (and the actual error) can be calculated a priori, given the input and measurement noise covariances. Because our algorithm uses a learning gain matrix that is computed off-line, our results are significantly different from the existing stochastic ILC algorithms. Furthermore, in Remark 2.1, we discussed the
Fig. 3. Left-top: 3-D representation of the actual $E_k$. Right-top: 3-D representation of the estimated $\hat{E}_k$. Bottom: 3-D representation of $E_k - \hat{E}_k$.
possibility of using the suggested scheme to handle the initial reset problem. To relax the assumption requiring knowledge of the $H$ matrix, two final remarks are provided:
Remark 5.1: Even though in Theorem 3.3 we required the condition $\|I - H\Gamma\| < 1$, this condition can be relaxed to $\rho(I - H\Gamma) < 1$, where $\rho(\cdot)$ is the spectral radius. In such a case, the system will be bounded-input bounded-output (BIBO) stable. Note again, if we only require $\rho(I - H\Gamma) < 1$, then we do not need to know the $A$ matrix. That is, as proved in Theorem 3.2, if $CB$ is full rank, there exists a steady-state $\bar{P}_k$ as $k \to \infty$, and if $CB$ is full rank, we can then satisfy the condition $\rho(I - H\Gamma) < 1$. Therefore, without knowledge of $A$, the suggested stochastic ILC scheme guarantees the steady-state error and BIBO stability.
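The gap between the two conditions in Remark 5.1 can be seen on a toy matrix (invented values): the spectral radius can be below one while the induced 2-norm is far above one, so error norms may transiently grow yet still converge.

```python
import numpy as np

# Invented matrix standing in for I - H*Gamma: rho < 1 but 2-norm > 1.
T = np.array([[0.5, 10.0],
              [0.0,  0.5]])
rho = max(abs(np.linalg.eigvals(T)))
norm2 = np.linalg.norm(T, 2)

E = np.array([1.0, 1.0])
norms = [np.linalg.norm(E)]
for _ in range(60):
    E = T @ E
    norms.append(np.linalg.norm(E))   # transient growth, eventual decay
```

Here the error norm first grows by roughly an order of magnitude before decaying geometrically, which is exactly the BIBO-type behavior the spectral-radius condition allows.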
Remark 5.2: In this paper, we assumed that the Markov matrix $H$ is known, and based on this assumption we have developed the algorithm. Although this approach has been widely used in ILC [10], [12], [13], [14] with some exceptions (e.g., [8], [11]) and can be relaxed as commented in Remark 5.1, in order to conform to the main purpose of the ILC algorithm, we may have to consider an uncertain (or unknown) $H$ matrix. In this case, as commented in [8], we need to identify the model. For this purpose, various methods could be considered, as done in [8], [14]. One easily-implemented solution is to use a Wiener filter or a least-squares method, as suggested in our earlier paper [27].
REFERENCES
[1] M. Uchiyama, "Formulation of high-speed motion pattern of a mechanical arm by trial," Trans. SICE (Soc. Instrum. Contr. Eng.), vol. 14, no. 6, pp. 706-712 (in Japanese), 1978.
[2] S. Arimoto, S. Kawamura, and F. Miyazaki, "Bettering operation of robots by learning," J. of Robotic Systems, vol. 1, no. 2, pp. 123-140, 1984.
[3] K. L. Moore, Iterative Learning Control for Deterministic Systems, Advances in Industrial Control. Springer-Verlag, 1993.
[4] Zeungnam Bien and Jian-Xin Xu, Iterative Learning Control - Analysis, Design, Integration and Applications, Kluwer Academic Publishers, 1998.
[5] YangQuan Chen and Changyun Wen, Iterative Learning Control: Convergence, Robustness and Applications, vol. LNCIS-248 of Lecture Notes series on Control and Information Science, Springer-Verlag, London, 1999.
[6] Y. Q. Chen and K. L. Moore, "Iterative learning control with iteration-domain adaptive feedforward compensation," in Proceedings of the 42nd IEEE Conference on Decision and Control, Maui, Hawaii USA, Dec. 2003, pp. 4416-4421.
[7] Kevin L. Moore, YangQuan Chen, and Hyo-Sung Ahn, "Algebraic H-infinity design of higher-order iterative learning controllers," in Proceedings of the 2005 IEEE International Symposium on Intelligent Control, Limassol, Cyprus, 2005, pp. 1207-1212.
[8] S. S. Saab, "On a discrete-time stochastic learning control algorithm," IEEE Trans. on Automatic Control, vol. 46, no. 8, pp. 1333-1336, 2001.
[9] S. S. Saab, "A discrete-time stochastic learning control algorithm," IEEE Trans. on Automatic Control, vol. 46, no. 6, pp. 877-887, 2001.
[10] K. L. Moore, "An iterative learning control algorithm for systems with measurement noise," in Proceedings of the 38th IEEE Conference on Decision and Control, Phoenix, AZ USA, Dec. 1999, pp. 270-275.
[11] H. F. Chen and H. T. Fang, "Output tracking for nonlinear stochastic systems by iterative learning control," IEEE Trans. on Automatic Control, vol. 49, no. 4, pp. 583-588, 2004.
[12] Kwang Soon Lee and Jay H. Lee, "Constrained model-based predictive control combined with iterative learning for batch or repetitive processes," in Proceedings of the 2nd Asian Control Conference, Seoul, Korea, July 22-25 1997, ASCC.
[13] K. S. Lee and J. H. Lee, "Convergence of constrained model-based predictive control for batch processes," IEEE Trans. on Automatic Control, vol. 45, no. 10, pp. 1928-1932, 2000.
[14] M. Norrlöf, "An adaptive iterative learning control algorithm with experiments on an industrial robot," IEEE Trans. on Robotics and Automation, vol. 18, no. 2, pp. 245-251, 2002.
[15] Kwang-Hyun Park and Zeungnam Bien, "Intervalized iterative learning control for monotone convergence in the sense of sup-norm," in Proceedings of the 3rd Asian Control Conference, Shanghai, China, 2000, ASCC, pp. 2899-2903.
[16] Kwang-Hyun Park and Zeungnam Bien, "A study on iterative learning control with adjustment of learning interval for monotone convergence in the sense of sup-norm," Asian Journal of Control, vol. 4, no. 1, pp. 111-118, 2002.
[17] M. Norrlöf and S. Gunnarsson, "Time and frequency domain convergence properties in iterative learning control," Int. J. of Control, vol. 75, no. 14, pp. 1114-1126, 2002.
[18] Kevin L. Moore, YangQuan Chen, and Vikas Bahl, "Monotonically convergent iterative learning control for linear discrete-time systems," to appear in Automatica, 2005.
[19] Richard W. Longman, "Iterative learning control and repetitive control for engineering practice," Int. J. of Control, vol. 73, no. 10, pp. 930-954, 2000.
[20] D. H. Owens, E. Rogers, and K. L. Moore, "Analysis of linear iterative learning control schemes using repetitive process theory," Asian Journal of Control, vol. 4, no. 1, pp. 68-89, 2002.
[21] M. Q. Phan, R. W. Longman, and K. L. Moore, "Unified formulation of linear iterative learning control," in AAS/AIAA Space Flight Mechanics Meeting, Clearwater, Florida, Jan. 2000, Paper AAS 00-106.
[22] S. S. Saab, "Stochastic P-type/D-type iterative learning control algorithms," Int. J. of Control, vol. 76, no. 2, pp. 139-148, 2003.
[23] Hyo-Sung Ahn, Robust and Adaptive Learning Control Design in the Iteration Domain, Ph.D. thesis, Utah State University, Logan, Utah, USA, May 2006.
[24] David F. Delchamps, "Analytic feedback control and the algebraic Riccati equation," IEEE Trans. on Automatic Control, vol. 29, no. 11, pp. 1031-1033, 1984.
[25] Jan C. Willems, "Least squares stationary optimal control and the algebraic Riccati equation," IEEE Trans. on Automatic Control, vol. 16, no. 6, pp. 621-634, 1971.
[26] Hyo-Sung Ahn, Kevin L. Moore, and YangQuan Chen, "Schur stability radius bounds for robust iterative learning controller design," in Proceedings of the 2005 American Control Conference, Portland, OR, 2005, pp. 178-183.
[27] Hyo-Sung Ahn, Kevin L. Moore, and YangQuan Chen, "Monotonic convergent iterative learning controller design with iteration varying model uncertainty," in Proceedings of the 2005 IEEE International Conference on Mechatronics and Automation, Niagara Falls, Canada, 2005, pp. 572-577.