
Searching the Optimal Order for High Order Models - SISO Case

Vitor Alex Oliveira Alves, Rodrigo Juliani Correa de Godoy and Claudio Garcia
Abstract— The goal of System Identification is to obtain a reduced order model capable of reproducing the key features of the true system behavior. Some system identification methods require, as a starting point, a high order linear model of the system. Such a model is then adopted as the most reliable description of the system and, based on it, the desired reduced order model is derived. In this scenario, one could ask: "what is the best order of the high order model?". This question is not addressed in the specialized literature. This paper presents an optimization strategy that maximizes the fit index to deal with this subject.
I. INTRODUCTION
The identification methods that employ high order models use them in different ways. For instance, the ASYM method [1] considers a high order ARX (Auto-Regressive with eXogenous inputs) model as the true description of the system. Afterwards, using a frequency domain criterion, this method reduces the model order to obtain a low order model suitable for control applications. The two-step joint input-output identification methods, such as the u-filtering [2], y-filtering [3] and Double Filtering [4] methods, employ high order FIR (Finite Impulse Response) models to describe the input and output sensitivity functions [5] - or the closed-loop transfer function (exclusively for the Double Filtering method [4]). There is an MRI (MPC Relevant Identification) method, proposed in [6] and [7], that uses a high order FIR model as the true process description. Another MRI method, proposed in [8], also employs a high order ARX model as the most faithful description of the true process. The preference for a high order ARX structure is due to the fact that such models are practically unbiased [9] and their parameter estimation has an analytical solution.
None of these studies were concerned with the determination of the best high order to be used. Instead, they rely on fixed values, such as 15 or 20 [2], 24 [9], 20 or 25 [10], 30 or 40 [11]. Moreover, the number of zeros associated with the models is not mentioned. As a rule of thumb, it is considered equal to the number of poles minus one (see Example 1). It is common that the obtained high order models are unstable (especially when the identification method applied is nonlinear in the parameters to be estimated, such as the Prediction Error Method (PEM) with Output-Error (OE), Box-Jenkins (BJ) or Auto-Regressive Moving Average with eXogenous inputs (ARMAX) model structures) or present near pole-zero cancellations. Besides, in practical situations - given a (relatively small) experimental dataset - several identification algorithms exhibit convergence problems when the adopted model order is too high (or, equivalently, when there is an elevated number of parameters to be estimated). This can result in poor models. A frequency domain interpretation of this fact can be found in [12].

Research project sponsored and financially supported by Petrobras SA under process 0050.0058968.10.9/2010.
V. A. O. Alves is with the Department of Mechanical Engineering, Mauá Institute of Technology, Praça Mauá 01, São Caetano do Sul, SP, Brazil - CEP 09580-900. vitoralex.alves@maua.br
R. Juliani C. G. and C. Garcia are with the Department of Telecommunications and Control Engineering, Polytechnic School of the University of São Paulo, Av. Prof. Luciano Gualberto, 158, trav. 3, Butantã, São Paulo, SP, Brazil - CEP 05508-970. rodrigo.juliani@usp.br, clgarcia@lac.usp.br
In this paper, an optimization approach is proposed to find the best high order for an ARX model (adjusted for a given input-output dataset), based on the maximization of the fit index [13], defined as:

fit = 100 \left( 1 - \frac{\| y(t) - \hat{y}(t) \|}{\| y(t) - \mathrm{mean}(y(t)) \|} \right) \%, \quad t = [1, \ldots, N],   (1)

in which y(t) and ŷ(t) are the process and the model output, respectively, and N is the dataset length.
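For concreteness, (1) is the normalized-error fit measure defined in the System Identification Toolbox manual [13]. A minimal sketch of its computation is given below in Python with NumPy (an assumption made here for illustration; the original results were presumably produced with MATLAB).

import numpy as np

def fit_index(y, y_hat):
    # fit index (1): 100 * (1 - ||y - y_hat|| / ||y - mean(y)||), in percent
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return 100.0 * (1.0 - np.linalg.norm(y - y_hat) / np.linalg.norm(y - np.mean(y)))

A fit of 100% means a perfect reproduction of the chosen dataset, whereas negative values indicate a model that performs worse than the mean of the measured output.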
The remainder of this paper is organized as follows.
In Section II the problems of using indiscriminate high
orders are illustrated through examples. Section III presents
the proposed optimization strategy to solve the high order
determination problem. In Section IV, a simulated benchmark
is employed to test the optimization strategy, applying Monte
Carlo simulations. The obtained results are compared against
those derived by exhaustive search. The conclusions are
drawn in Section V.
II. PROBLEMS OF USING INDISCRIMINATE HIGH ORDER
As previously mentioned, the adoption of an indiscriminate high order in the identification of a system can result, in the worst case, in unstable models [11] - when OE, BJ or ARMAX model structures are employed - even though the plant is stable. However, with the use of high order ARX model structures, the most common situation is the presence of near pole-zero cancellations. This is illustrated in the following example.
Example 1: Consider the system

y(t) = G(q^{-1}) u(t) + H(q^{-1}) e(t),   (2)

in which G(q^{-1}) is a third-order process [14]

G(q^{-1}) = \frac{0.00768 q^{-1} + 0.02123 q^{-2} + 0.00357 q^{-3}}{1 - 1.9031 q^{-1} + 1.1514 q^{-2} - 0.2158 q^{-3}},   (3)

with a disturbance transfer function

H(q^{-1}) = (1 - 0.9 q^{-1})^{-1}.   (4)
The model described in (2) was simulated with a unit amplitude GBN (Generalized Binary Noise) [15] sequence of N = 15 t_s = 15 × 26 = 390 points applied to its input u(t); t_s is the plant settling time and the dataset length N is in accordance with [9]. The signal e(t) is white noise with zero mean and variance adjusted in order to provide an SNR (Signal to Noise Ratio) of 10 (in variance). An ARX model with n_a = n_b = 40 and n_k = 1 was estimated - n_a is the number of poles, n_b is the number of zeros plus one and n_k is the dead time adopted in the model. The pole-zero mapping of this model is shown in Fig. 1.
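As an illustration of this identification experiment, the following sketch (Python with NumPy/SciPy, an assumption; the paper presumably used MATLAB) generates a unit-amplitude GBN-like input with a hypothetical switching probability, simulates (2)-(4) and scales the disturbance so that the SNR is 10 in variance.

import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
N = 390

# GBN-like input: unit amplitude, sign switching with an assumed probability.
p_switch = 0.1
u = np.empty(N)
u[0] = 1.0
for t in range(1, N):
    u[t] = -u[t - 1] if rng.random() < p_switch else u[t - 1]

# Plant (3), disturbance filter (4) and white noise scaled for SNR = 10 (in variance).
B = [0.0, 0.00768, 0.02123, 0.00357]             # numerator of G(q^-1)
A = [1.0, -1.9031, 1.1514, -0.2158]              # denominator of G(q^-1)
y_det = lfilter(B, A, u)                          # noise-free output
e = rng.standard_normal(N)
v = lfilter([1.0], [1.0, -0.9], e)                # disturbance H(q^-1) e(t)
v *= np.sqrt(np.var(y_det) / (10 * np.var(v)))    # variance ratio set to 10
y = y_det + v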
Fig. 1. Pole-zero mapping for a 40th order ARX model.
It is possible to note that all the poles are inside the unit circle (i.e., the model is stable) and that there are many near pole-zero cancellations, confirming the previous statement. One can also observe the occurrence of non-minimum phase zeros, not present in (3).
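The high order ARX model above can be estimated analytically by ordinary least squares, which is the property that makes the ARX structure attractive (Section I). The sketch below is written under the same assumptions as the previous one; the helper name estimate_arx is ours, not from the paper.

import numpy as np

def estimate_arx(y, u, na, nb, nk=1):
    # Least-squares ARX estimation of
    # y(t) + a1*y(t-1) + ... + a_na*y(t-na) = b1*u(t-nk) + ... + b_nb*u(t-nk-nb+1) + e(t).
    y, u = np.asarray(y, float), np.asarray(u, float)
    start = max(na, nb + nk - 1)
    rows = []
    for t in range(start, len(y)):
        past_y = [-y[t - i] for i in range(1, na + 1)]
        past_u = [u[t - nk - j] for j in range(nb)]
        rows.append(past_y + past_u)
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[start:], rcond=None)
    return theta[:na], theta[na:]    # (a, b) coefficient vectors

With the simulated data of the previous sketch, a call such as estimate_arx(y, u, 40, 40) would reproduce the n_a = n_b = 40, n_k = 1 experiment of Example 1.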
Another issue concerns convergence problems, related to the model parameter estimation, that may occur in the identification process when indiscriminate high orders are used. A model with many parameters may have greater modeling errors and, consequently, smaller fits than a model with fewer parameters, since it may be more difficult to find the optimal parameters of a larger model, mainly when the number of available input and output points is small. Thus, in practical situations, in which the available experimental dataset is relatively small, the convergence problems are aggravated. An example of such phenomenon is presented next.
Example 2: The model (2) and the same dataset of Example 1 were employed to derive 1600 ARX models, with n_a = 1, ..., 40, n_b = 1, ..., 40 and n_k = 1. Another 200 point GBN sequence, independent of the one employed in the identification procedure, was applied to the plant input - in the absence of disturbance - and also to each one of the obtained models. Thus, the quality of each model was measured in a cross-validation strategy. The fit index (1) was calculated in the generated disturbance-free scenario. A graph showing the 1600 fit values is presented in Fig. 2, in which it is possible to detect, in this case, the slight decay in this index associated with the increase in the number of adjusted parameters in the estimated model (order increase).

An exhaustive search reveals that the maximum fit value in Fig. 2 is 96.17%, obtained with [n_a, n_b] = [7, 5]. Table I shows the fit values observed with the high orders adopted in the references cited in Section I. Its inspection reveals that the adoption of an indiscriminate high order can result in a sub-optimal ARX model, in terms of the fit value.
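The exhaustive search of Example 2 can be sketched as follows, reusing fit_index and estimate_arx from the earlier sketches; simulate_arx and the dataset names (y_ident, u_ident, u_val, y_val_free) are illustrative assumptions, chosen because the example evaluates each model on a disturbance-free validation response.

import numpy as np

def simulate_arx(a, b, u, nk=1):
    # Noise-free simulation of the deterministic part G = B/A of an estimated ARX model.
    na, nb = len(a), len(b)
    y = np.zeros(len(u))
    for t in range(len(u)):
        acc = 0.0
        for i in range(1, na + 1):
            if t - i >= 0:
                acc -= a[i - 1] * y[t - i]
        for j in range(nb):
            if t - nk - j >= 0:
                acc += b[j] * u[t - nk - j]
        y[t] = acc
    return y

# Exhaustive search over the 1600 candidate orders of Example 2.
best_fit, best_order = -np.inf, None
for na in range(1, 41):
    for nb in range(1, 41):
        a, b = estimate_arx(y_ident, u_ident, na, nb)   # identification dataset
        y_sim = simulate_arx(a, b, u_val)               # fresh GBN validation input
        f = fit_index(y_val_free, y_sim)                # disturbance-free validation output
        if f > best_fit:
            best_fit, best_order = f, (na, nb)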
Fig. 2. Fit index for each one of the 1600 ARX models in Example 2.

TABLE I
OBSERVED FIT VALUES.

ARX structure [n_a, n_b]         fit (%)
[7,5] by exhaustive search       96.17
[15,15] as in [2]                91.28
[20,20] as in [2], [10]          83.66
[24,24] as in [9]                77.65
[25,25] as in [10]               76.61
[30,30] as in [11]               84.18
[40,40] as in [11]               78.32

Different realizations of e(t) result in different optimal orders and, consequently, different maximum fit values. This fact shows that it is relevant to search for the best high order for an ARX model for a given input-output dataset.
Example 3: In order to illustrate an extreme case, in which an elevated number of parameters is not well estimated when the available input-output dataset is too small, Fig. 3 shows the effect of reducing the dataset length to 300 points in the identification procedure. Again, 1600 ARX models were identified. The abrupt decay in the fit value for higher orders is noticeable in Fig. 3. In such a situation, the adoption of an ARX model structure of [15,15] or higher would be disastrous. One can also note the occurrence of negative fit values, indicating very poor models.
Fig. 3. Fit index for each one of the 1600 ARX models in Example 3.
III. PROPOSED OPTIMIZATION STRATEGY
The main objective of this study is to determine the best high order (in the fit sense) for an ARX model structure. In this section, the optimization problem that deals with such objective is formally postulated in subsection III-A. Then, a solution to such problem is proposed in a two-step strategy shown in subsection III-B.
A. Optimization Problem Postulation
The search for an adequate high order can be formally stated as an optimization problem expressed by

\max_{n = [n_a, n_b]} \; fit(\hat{y}(t), y_{val}(t)),   (5)

subject to:

model = ARX(y_{ident}(t), u_{ident}(t), n)   (6)

\hat{y}(t) = predict(model, y_{val}(t), u_{val}(t))   (7)

n \in \mathcal{N}.   (8)
In (5), n = [n_a, n_b] is the optimization variable composed of the number of parameters in the ARX model structure. The objective function fit(ŷ(t), y_val(t)) is defined as in (1), replacing y(t) by y_val(t), in which the subscript val denotes the use of a cross-validation dataset. Equations (6)-(8) constitute the set of constraints to which the optimization problem is subject. Constraint (6) represents an ARX model derived from the identification dataset {y_ident(t), u_ident(t)}, with the number of parameters defined by n. Constraint (7) indicates the determination of the predicted output for the ARX model (6) and the cross-validation dataset {y_val(t), u_val(t)}, as defined in [12]:

\hat{y}(t) = \hat{H}^{-1} \hat{G} \, u_{val}(t) + [1 - \hat{H}^{-1}] \, y_{val}(t).   (9)

In (9), Ĝ = Ĝ(q^{-1}) and Ĥ = Ĥ(q^{-1}) are the process and disturbance descriptions associated with the ARX model (6). Finally, constraint (8) indicates the set \mathcal{N} of admissible values for the orders n = [n_a, n_b].
It is important to stress that the obtained order n = [n_a, n_b] must provide a fit index close to that of the global optimal order n* = [n_a*, n_b*], but it is completely irrelevant whether the order itself is close to the global optimum n*. It is also relevant to note that the cross-validation strategy was applied in (5) because "it is not so surprising that a model will be able to reproduce the estimation data, [...] but the real test is whether it will be capable of also describing fresh data-sets from the process" [12].
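For an ARX model, Ĝ = B/A and Ĥ = 1/A, so the predictor (9) reduces to the one-step-ahead regression form ŷ(t) = B(q^{-1}) u_val(t) + [1 - A(q^{-1})] y_val(t). A sketch of the resulting objective function (5) is given below (Python/NumPy, an assumption; estimate_arx and fit_index are the helpers sketched earlier).

import numpy as np

def predict_arx(a, b, y, u, nk=1):
    # One-step-ahead ARX predictor: (9) with G = B/A and H = 1/A,
    # i.e. y_hat(t) = B(q^-1) u(t) + [1 - A(q^-1)] y(t).
    na, nb = len(a), len(b)
    y_hat = np.zeros(len(y))
    for t in range(len(y)):
        acc = 0.0
        for i in range(1, na + 1):
            if t - i >= 0:
                acc -= a[i - 1] * y[t - i]
        for j in range(nb):
            if t - nk - j >= 0:
                acc += b[j] * u[t - nk - j]
        y_hat[t] = acc
    return y_hat

def objective(n, y_id, u_id, y_val, u_val):
    # fit(y_hat, y_val) for an ARX model of order n = [na, nb]: the quantity maximized in (5).
    na, nb = int(round(n[0])), int(round(n[1]))
    a, b = estimate_arx(y_id, u_id, na, nb)   # constraint (6)
    y_hat = predict_arx(a, b, y_val, u_val)   # constraint (7)
    return fit_index(y_val, y_hat)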
B. Optimization Problem Solution
To solve (5), subject to (6)-(8), a two-step optimization strategy is proposed. In the first step, the problem is solved with a global optimization algorithm; then, in the second step, the solution is refined with a direct search algorithm. Both steps are described next.

1) Global Optimization - Simulated Annealing: The optimization problem (5) requires a large computational effort when solved by exhaustive search. Furthermore, it is a well-known fact that the search domain grows exponentially with the increase in the dimension (number of inputs and outputs) of the system to be identified [16]. Thus, it is desirable to find a solution that presents a fit close to the optimum in a feasible computational time. The Simulated Annealing algorithm [17], [18] is employed in this task.
2) Direct Search Optimization - Pattern Search: The Simulated Annealing algorithm finds a solution that presents a performance near the optimum but, in most cases, is not exactly the optimum. Hence, to refine such a solution, a direct search algorithm is employed: the Pattern Search [19], [20]. In conclusion, this second step moves the previous solution closer to the optimum.

These algorithms were selected due to their wide application to optimization problems and for being available in computational tools employed in simulation, identification and optimization, such as MATLAB. This makes them simple to use without a deep understanding of their internal mechanisms. Any other optimization algorithm can be employed in the solution of (5)-(8); in fact, other algorithms were tested, and Simulated Annealing and Direct Search were chosen for presenting a good performance for this problem.
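The two steps are presumably the MATLAB routines for simulated annealing and pattern search; as a hedged substitute, the sketch below uses SciPy's dual_annealing for the global step and a hand-rolled compass (pattern-type) search on the integer order grid for the refinement, with objective taken from the previous sketch.

import numpy as np
from scipy.optimize import dual_annealing

def neg_fit(x, data):
    # dual_annealing minimizes, so the fit objective is negated.
    y_id, u_id, y_val, u_val = data
    return -objective(x, y_id, u_id, y_val, u_val)

def optimal_order(data, n_max=40, seed=0):
    # Step 1: global search (simulated annealing) over [1, n_max] x [1, n_max].
    res = dual_annealing(neg_fit, bounds=[(1, n_max), (1, n_max)],
                         args=(data,), maxiter=200, seed=seed)
    n = np.clip(np.round(res.x).astype(int), 1, n_max)
    best = -neg_fit(n, data)
    # Step 2: refinement with a simple compass search on the integer grid.
    step = 4
    while step >= 1:
        improved = False
        for d in [(step, 0), (-step, 0), (0, step), (0, -step)]:
            cand = np.clip(n + np.array(d), 1, n_max)
            f = -neg_fit(cand, data)
            if f > best:
                n, best, improved = cand, f, True
        if not improved:
            step //= 2
    return n, best

Rounding inside the objective keeps the orders integer-valued, and the bound n_max = 40 mirrors the search range used in the examples.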
IV. APPLICATION EXAMPLE
The optimization strategy described in the previous section is applied to the SISO system of Example 1. Monte Carlo tests are employed to assess performance and robustness, comparing the optimization results to those derived by exhaustive search. Each of these tests comprised 150 repetitions, with different realizations of e(t). The variance of e(t) was set to provide four different SNRs (1, 3, 10 and 30 in variance). The input u(t) is a GBN sequence with 1000 points, of which 700 points were employed in the identification procedure. The fit values presented in this section were determined over all 1000 points. The results include two types of fit indexes: the first one, the noisy fit, is calculated over an input-output dataset corrupted by disturbances; the second one, the noise-free fit, is calculated over the same dataset, but not affected by disturbances. The noisy fit represents the index that may be estimated with a real input-output dataset, whereas the noise-free fit, an ideal index unattainable in practice, evaluates just the quality of the process model.
In Fig. 4 the fit distributions over all 150 experiments are presented. For each iteration, the fit values - noisy and noise-free - correspond to the best high order ARX model found by exhaustive search and by the optimization strategy. The optimization error, defined by the difference between the exhaustive search and optimized fits, is also shown.

First of all, it can be noticed in Fig. 4 that the optimization finds a high order n = [n_a, n_b] with a fit index very close to the global optimum. Evidently, the fit values decay for lower SNR scenarios. However, the optimization succeeds in all SNR cases. It is important to notice that, as the noisy fit
is optimized, the noise-free fit for the optimized high order can turn out better than the same index calculated for the high order model found by exhaustive search. This happens since the two indexes are not equivalent and the optimization of the noisy fit index can lead to a loss in the noise-free fit index due to the overfit phenomenon [12]. In fact, an increase in the number of parameters to be estimated can make the obtained ARX model describe dataset features caused by disturbances rather than by the plant dynamics. Thus, there can be a large improvement in the noisy fit index whereas the noise-free fit starts decreasing with the increase in the number of adjustable parameters.

Fig. 4. Fit distribution (x marker: exhaustive search, continuous line: optimization; darker: noisy fit, lighter: noise-free fit).
Tables II and III show the accuracy of the optimization method in terms of average, standard deviation and maximum error associated with the noisy and noise-free fit indexes observed in each of the SNR scenarios. They also show the percentage of experiments in which the optimization converged to the global optimum, exactly and within a 1% tolerance. As can be seen in Table II, the proposed optimization method is highly reliable. Convergence to orders with performance very close (1%) to the global optimum is obtained in 89.33% of the cases for the SNR 1 scenario and in 100% of the cases for the better scenarios. It is also possible to notice that the noisy and noise-free fit indexes indicate similar, but not exactly the same, results, as mentioned before.
In Fig. 5, the resemblance between the histograms associated with exhaustive search and optimization is noticeable, which synthesizes the conclusions obtained from Fig. 4. It is also clear that the improvement in the SNR results in better fit values and lower standard deviations.
TABLE II
OPTIMIZATION ACCURACY INDICATORS (%) - NOISY FIT CASE.

                            SNR 1    SNR 3    SNR 10   SNR 30
Average error               0.3399   0.0044   0.0010   0.0007
Error standard deviation    0.7520   0.0164   0.0049   0.0037
Maximum error               5.7899   0.1124   0.0387   0.0294
Global optimum achieved     52.67    86.67    93.33    92.00
Near global optimum         89.33    100.0    100.0    100.0

TABLE III
OPTIMIZATION ACCURACY INDICATORS (%) - NOISE-FREE FIT CASE.

                            SNR 1    SNR 3    SNR 10   SNR 30
Average error               -0.3458  0.0359   -0.0065  0.0036
Error standard deviation    2.5723   0.8012   0.1976   0.0754
Maximum error               10.3381  5.1736   0.7766   0.8529
Global optimum achieved     52.67    86.00    89.33    91.33
Near global optimum         67.33    96.67    99.33    100.0

So far, the discussion has established the validity of the optimization strategy proposed here. Now the high order associated with the best fit is investigated. Fig. 6 shows the distributions of the orders n = [n_a, n_b] associated with the best (noisy) fit values obtained by exhaustive search and optimization (see Figs. 4 and 5).
It shows that the best high order depends highly on the input-output dataset employed in the identification procedure, with a maximum percentage occurrence of 28% for a particular order. Also, [40,40] is the optimal order (in the fit sense) in only 7 of the 150 cases (4.67%). This result corroborates the statements made in Example 2. Hence, the use of arbitrary values for the order of an ARX model, as in [2], [9], [10] and [11], does not guarantee that the obtained high order ARX model is, in fact, the best description of the system (in terms of the fit index). Furthermore, as stated in this study, such high order is dataset-dependent.

Finally, Fig. 7 shows the dispersion maps for the optimal orders found by exhaustive search and optimization. Once again, it can be seen that there is a large dispersion of the optimal orders. Besides, the optimization converges more often to the real global optimum order as the SNR improves, although this is of little relevance, since it is the fit index that is optimized (see Fig. 4).
Fig. 5. Histograms for the fit indexes - darker: noisy fit; lighter: noise-free fit.
Fig. 6. Histograms for the orders n = [n_a, n_b].
V. CONCLUSIONS
There is a need for high order ARX models that can reproduce the dynamic behavior of a system with a high degree of fidelity. However, as shown in this study, such a high order cannot be arbitrarily adopted. Examples 1, 2 and 3 reveal that the use of an indiscriminate high order can result in very poor models. To address the problem of determining a suitable high order for an ARX model, an optimization strategy was proposed. As previously stated, the best high order n = [n_a, n_b] is the one that generates a model that provides the maximum fit index value. The search for this best high order is performed by varying the parameters n_a and n_b and calculating the fit index associated with the ARX model identified with order [n_a, n_b].
Fig. 7. Dispersion maps for optimal orders (markers distinguish exhaustive search and optimization).

The optimization strategy is performed in two steps: first, a global optimization method (Simulated Annealing) is applied
to find a solution close to the global optimum (best fit value); afterwards, this solution is refined, i.e., moved closer to the global optimum, with a direct search method (Pattern Search). To validate the optimization strategy applied in the determination of the best high order, a SISO plant was employed in Monte Carlo simulations under various SNR scenarios. For each one of them, results were presented and analyses were drawn. It was shown that the optimization procedure finds solutions that are very close to the global optimum, while inspecting much fewer candidates than exhaustive search. A related paper [16] applies the proposed optimization procedure to a MIMO plant. Future work involves the application of the (optimized) high order ARX model and the proposed approach to the model order reduction problem.
VI. ACKNOWLEDGMENTS
The authors thank the support provided by the Center of Excellence for Industrial Automation Technology (CETAI) and the Research Center (CENPES) of Petrobras SA. Author V. A. O. Alves also thanks the Mauá Institute of Technology - IMT.
REFERENCES
[1] Y. C. Zhu, "Multivariable process identification for MPC: the asymptotic method and its applications," Journal of Process Control, vol. 8, n. 2, 1998, pp. 101-115.
[2] P. M. J. Van den Hof and R. J. P. Schrama, "An indirect method for transfer function estimation from closed-loop data," Automatica, vol. 29, n. 6, 1993, pp. 1523-1527.
[3] B. Huang and S. L. Shah, "Closed-loop identification: a two step approach," Journal of Process Control, vol. 7, n. 6, 1997, pp. 425-438.
[4] V. A. O. Alves and C. Garcia, "Extensions and innovations in two-step closed-loop system identification," in 9th Portuguese Conference on Automatic Control - CONTROLO, Coimbra, Portugal, 2010, pp. 362-367.
[5] U. Forssell and L. Ljung, "Closed-loop identification revisited," Automatica, vol. 35, n. 7, 1999, pp. 1215-1241.
[6] R. B. Gopaluni, R. S. Patwardhan and S. L. Shah, "The nature of data pre-filters in MPC relevant identification - open and closed-loop issues," Automatica, vol. 39, 2003, pp. 1617-1626.
[7] R. B. Gopaluni, R. S. Patwardhan and S. L. Shah, "MPC relevant identification - tuning the noise model," Journal of Process Control, vol. 14, n. 6, 2004, pp. 699-714.
[8] A. S. Potts, R. A. Romano and C. Garcia, "Enhancement in performance and stability of MRI methods," in 16th IFAC Symposium on System Identification, Brussels, Belgium, 2012 - accepted paper.
[9] Y. C. Zhu, Multivariable System Identification for Process Control, Pergamon Press, Amsterdam, 2001, 349 p.
[10] A. S. Badwe, S. C. Patwardhan and R. D. Gudi, "Closed-loop identification using direct approach and high order ARX/GOBF-ARX models," Journal of Process Control, vol. 21, n. 7, 2011, pp. 1056-1071.
[11] V. A. O. Alves, Comparison between Direct and Two-Step Closed-Loop Identification Methods, PhD Thesis, Polytechnic School of the University of São Paulo, SP, Brazil, 2011, 216 p. (in Portuguese).
[12] L. Ljung, System Identification: Theory for the User, Prentice Hall PTR, Upper Saddle River, NJ, 1999, 609 p.
[13] L. Ljung, The System Identification Toolbox: The Manual, 7th edition, The MathWorks, Inc., Natick, MA, USA, 2007.
[14] D. W. Clarke, C. Mohtadi and P. S. Tuffs, "Generalized predictive control - parts 1 and 2," Automatica, vol. 23, n. 2, 1987, pp. 137-160.
[15] H. J. A. F. Tulleken, "Generalized binary noise test-signal concept for improved identification-experiment design," Automatica, vol. 26, n. 1, 1990, pp. 1523-1527.
[16] V. A. O. Alves, R. Juliani C. G. and C. Garcia, "Searching the Optimal Order for High Order Models - MIMO Case," in IEEE Multi-Conference on Systems and Control, Dubrovnik, Croatia, 2012 - accepted paper.
[17] S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi, "Optimization by Simulated Annealing," Science, vol. 220, n. 4598, 1983, pp. 671-680.
[18] V. Cerny, "Thermodynamical approach to the traveling salesman problem: An efficient simulation algorithm," Journal of Optimization Theory and Applications, vol. 45, 1985, pp. 41-51.
[19] R. Hooke and T. A. Jeeves, "Direct search solution of numerical and statistical problems," Journal of the Association for Computing Machinery (ACM), vol. 8, n. 2, 1961, pp. 212-229.
[20] W. C. Davidon, "Variable metric method for minimization," SIAM Journal on Optimization, vol. 1, n. 1, 1991, pp. 1-17.
