
Simpl_eTS: A Simplified Method for Learning

Evolving Takagi-Sugeno Fuzzy Models


Plamen Angelov¹, Dimitar Filev²
Abstract: This paper presents a simplified version of the Evolving Takagi-Sugeno (eTS) learning algorithm, a computationally efficient procedure for on-line learning of TS-type fuzzy models. It combines the concept of the scatter as a measure of data density and of the summarization ability of the TS rules, the use of Cauchy-type antecedent membership functions, an aging indicator characterizing the stationarity of the rules, and a recursive least squares algorithm to dynamically learn the structure and parameters of the eTS model.
I. INTRODUCTION
Takagi-Sugeno (TS) fuzzy models are a well-established tool for dealing with complex systems experiencing multiple operating modes. Most of the research in the area of learning TS models has focused on off-line identification methods. Among these are methods based on back-propagation and multi-layer perceptrons [7], learning vector quantization and radial-basis functions [10], genetic algorithms [1, 12], and combined clustering of the input data space with consequent parameter estimation [7, 15]. Recently, the problem of on-line learning of TS models has also been addressed in conjunction with model identification [2, 9], control [1, 8], fault detection [11], and signal processing [9]. The problems of structure identification (adaptation of the number of rules) and parameter identification (adaptation of antecedent and consequent model parameters) have been introduced in the framework of evolving connectionist models [9] and evolving TS (eTS) models [1, 2], and further extended to
the MIMO case in [4]. The eTS model is a TS fuzzy
model whose rule-base and parameters continually
evolve by adding new rules with more summarization
power, and by modifying existing rules and parameters.
The eTS learning algorithm [4] is based on a recursive
evaluation of the information potential of the new data
points and the focal points of the rules. The algorithm
continuously evaluates the relationship between those
potentials and dynamically updates the number of rules
and their antecedent parameters. This process is
combined with recursive updating of the consequent
parameters.
¹ Department of Communication Systems, Lancaster University, Lancaster, LA1 4WA, UK; e-mail: p.angelov@lancaster.ac.uk
² Ford Motor Co., Glendale Ave 24500, Detroit, USA; e-mail: dfilev@ford.com
In this paper we propose a simplified version of the
eTS learning algorithm, called the simpl_eTS.
The complexity of the original eTS learning method [2] is significantly reduced by replacing the notion of the information potential with the concept of the scatter, which provides a similar but computationally more effective characterization of the density of the input/output data. We further replace the Gaussian exponential membership functions of the rule antecedents with a Cauchy-type function. We also introduce the concept of rule/cluster age, determined by the relative time of appearance of the data samples that belong to a particular rule/cluster, to improve the ability of the algorithm to deal with non-stationary models. Finally, we present results of testing the simpl_eTS algorithm on the Box-Jenkins gas-furnace data [6].
II. SIMPLIFIED EVOLVING TS FUZZY MODEL
The simpl_eTS algorithm deals with TS models [13]:
\Re^i: IF (x_1 is A_1^i) AND ... AND (x_n is A_n^i) THEN (y^i = x_e^T \pi^i),  i = 1, 2, ..., R    (1)

where \Re^i denotes the i-th fuzzy rule; R is the number of fuzzy rules; x_e is the extended input vector, x_e = [1, x^T]^T, formed by appending the input vector x = [x_1, x_2, ..., x_n]^T with 1 (allowing a free parameter); A_j^i, j = 1, 2, ..., n, denotes the antecedent fuzzy sets; y^i = [y_1^i, y_2^i, ..., y_m^i] denotes the multidimensional vector output of the i-th linear sub-system; and

\pi^i = \begin{bmatrix} a_{01}^i & a_{02}^i & \cdots & a_{0m}^i \\ a_{11}^i & a_{12}^i & \cdots & a_{1m}^i \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1}^i & a_{n2}^i & \cdots & a_{nm}^i \end{bmatrix}

are its parameters.
At the heart of the TS method for fuzzy modeling is the
segmentation of the data space into fuzzily defined
regions. The fuzzy regions are parameterized and each
region is associated with a linear sub-system. As a result,
the nonlinear system forms a collection of loosely
(fuzzily) coupled (blended) multiple linear models. The
degree of firing of each rule is proportional to the level
of contribution of the corresponding linear model to the
overall output of the TS model.
Instead of the conventionally used Gaussian
membership functions we consider the so called Cauchy
type membership functions to represent rule antecedents
\mu^i(x) = \frac{1}{1 + \sum_{j=1}^{n} \left( \frac{2 (x_j - x_j^{i*})}{r} \right)^2},  i = 1, 2, ..., R    (2)
where r is a positive constant defining the zone of influence of the i-th fuzzy rule antecedent, and x_j^{i*} is the j-th coordinate of the focal point of the i-th rule.
The radius r is one of the few parameters of the algorithm that need to be pre-defined. Its value is an important lever for the trade-off between model complexity and precision. As a general guideline, excessively large values of r lead to averaging, excessively small values to over-fitting. Our preference for the Cauchy-type function, which is a first-order Taylor series approximation of the Gaussian function, is based on its simpler calculation, especially in real-time implementations.
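For illustration, Eq. (2) amounts to the following computation (a minimal sketch in Python/NumPy; the function name and array conventions are ours):

import numpy as np

def cauchy_firing(x, focal, r):
    """Cauchy-type membership degree of input x for one rule antecedent,
    Eq. (2): mu = 1 / (1 + sum_j (2 (x_j - x*_j) / r)^2)."""
    return 1.0 / (1.0 + np.sum((2.0 * (x - focal) / r) ** 2))

Unlike the Gaussian function, no exponential needs to be evaluated, which is the simplification exploited in real-time implementations.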
The TS model output is calculated by weighted
averaging of individual rules' contributions:
y = \sum_{i=1}^{R} \lambda^i y^i = \sum_{i=1}^{R} \lambda^i x_e^T \pi^i    (3)

where \lambda^i = \mu^i(x) / \sum_{j=1}^{R} \mu^j(x) is the normalized firing level of the i-th rule.
The output can be represented in a vector form as:
y = \psi^T \theta    (4)

where \theta = [(\pi^1)^T, (\pi^2)^T, ..., (\pi^R)^T]^T is a vector composed of the linear model parameters, and \psi = [\lambda^1 x_e^T, \lambda^2 x_e^T, ..., \lambda^R x_e^T]^T is a vector of the inputs weighted by the normalized firing levels of the rules.
As we mentioned above, the procedure for clustering
the input/output data into fuzzy regions, each of which is
represented by a linear model, is the kernel of the eTS
learning algorithm. The on-line clustering procedure of
the simpl_eTS models is based on the notion of the
scatter of the data points rather than the potential. We
define global scatter as the average distance from a data
sample to all other data samples:
S_k^G(z(k)) = \frac{1}{N(n+m)} \sum_{l=1}^{N} \sum_{j=1}^{n+m} (z_j(l) - z_j(k))^2    (5)

where S_k^G(z(k)) denotes the global scatter at the data sample z(k) = [x(k), y(k)]^T; z(k) \in R^{n+m}, at time k = 2, 3, ..., N.
The scatter is as intuitive as the potential, but is
computationally more efficient. Note that the division is
over integer numbers (N, n and m), which can be
realized effectively on hardware using shifting registers,
while the expression for the potential involves a division
by a real number.
The range of possible values of the scatter measured at a certain sample is obviously [0, 1], with 0 meaning that all data samples coincide (which is extremely improbable) and 1 meaning that all data points lie on the vertices of the hypercube formed as a result of the normalization of the data.
In a similar way as the potential is used in the off-line subtractive clustering approach [7, 15] and in on-line evolving clustering [2], the scatter can be used to select the focal points of the fuzzy rules (the centers of the clusters) at the points where the scatter is minimal.
We call population the number of data samples assigned to a cluster/rule, and denote the population of the i-th rule/cluster at time instant k by N^i(k). Once the focal points of the rules are determined, each data sample is assigned to the nearest of the existing focal points unless it forms a new rule/cluster:

N^i(k) = N^i(k) + 1;  i = \arg\min_{i=1,...,R} \| x(k) - x^{i*}(k) \|^2    (6)

In the latter case the population of the newly formed rule is set to 1: N^R(k) = 1. It should be noted that each data sample is assigned to one cluster/rule only, and thus the following equation holds:

\sum_{i=1}^{R} N^i(k) = k    (7)
We introduce the notion of the local scatter as the average distance from a data sample to the data samples that belong to the same i-th cluster, i = 1, 2, ..., R:

S_k^L(z(k)) = \frac{1}{N^i(k)(n+m)} \sum_{l=1}^{N^i(k)} \sum_{j=1}^{n+m} (z_j(l) - z_j(k))^2    (8)

where S_k^L(z(k)) denotes the local scatter at the data sample z(k) calculated at time k = 2, 3, ..., N.
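For illustration, the direct (non-recursive) forms (5) and (8) can be computed as follows (a sketch under our own array conventions; the on-line algorithm of Section III never forms these sums explicitly):

import numpy as np

def global_scatter(Z, k):
    """Global scatter of sample z(k) over all N samples, Eq. (5);
    Z is an (N, n+m) array of standardized input/output samples."""
    N, dim = Z.shape                                  # dim = n + m
    return float(np.sum((Z - Z[k]) ** 2)) / (N * dim)

def local_scatter(Z, k, members):
    """Local scatter of z(k) over the N_i(k) samples of one cluster,
    Eq. (8); `members` indexes the samples assigned to that cluster."""
    dim = Z.shape[1]
    return float(np.sum((Z[members] - Z[k]) ** 2)) / (len(members) * dim)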
III. ON-LINE LEARNING OF THE SIMPL_ETS MODEL
In on-line mode, the training data are collected
continuously, rather than being a fixed set. On-line
learning of the simpl_eTS models includes recursive clustering, under the assumption of a gradual change of the rule-base, combined with the RLS method [5]. The RLS method can be used for estimation of the consequent parameters because, once the antecedent parameters are determined and fixed, the model is linear in its parameters [2].
An important step in the practical implementation of
the algorithm is the on-line standardization of the
input/output data. While this task is an ordinary step in a
batch type learning process (i.e. subtraction of the mean
and normalization over the standard deviation), its
practical realization in an on-line environment requires some additional effort.
In order to standardize the data we recursively estimate the mean \bar{z}_j(k) and the variance \sigma_j^2(k) of each element z_j of the input/output vector z:

\bar{z}_j(k) = \frac{k-1}{k} \bar{z}_j(k-1) + \frac{1}{k} z_j(k)    (11)

\sigma_j^2(k) = \frac{k-1}{k} \sigma_j^2(k-1) + \frac{1}{k-1} (z_j(k) - \bar{z}_j(k))^2    (12)

with initial values \bar{z}_j(1) = 0 and \sigma_j^2(1) = 0.
Then the standardized input/output value is:

z_{st,j}(k) = (z_j(k) - \bar{z}_j(k)) / \sigma_j(k)    (13)
Since the standard deviation is zero for k = 1, the recursive normalization process starts at the second step; we therefore miss the first observation if we apply this standardization procedure, and the first standardized input/output datum is calculated from the second actual sample. To simplify the notation, we assume in the remaining steps of simpl_eTS that the input/output vector z has already been standardized.
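The following sketch implements (11)-(13) (class and variable names are ours; the paper's zero initial values are used, so the first sample yields no standardized output, and no guard against a zero variance is included):

import numpy as np

class OnlineStandardizer:
    """Recursive mean/variance and standardization, Eqs. (11)-(13)."""
    def __init__(self, dim):
        self.k = 0
        self.mean = np.zeros(dim)     # z_bar_j(1) = 0
        self.var = np.zeros(dim)      # sigma_j^2(1) = 0

    def update(self, z):
        self.k += 1
        k = self.k
        self.mean = (k - 1) / k * self.mean + z / k                         # Eq. (11)
        if k == 1:
            return None  # sigma is zero at k = 1; the first sample is skipped
        self.var = (k - 1) / k * self.var + (z - self.mean) ** 2 / (k - 1)  # Eq. (12)
        return (z - self.mean) / np.sqrt(self.var)                          # Eq. (13)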
The actual learning algorithm starts (stage 1) with the first standardized data sample, which establishes the focal point of the first cluster (i = 1). Its coordinates are used to form the antecedent part of the fuzzy rule (1) using, for example, the Cauchy membership functions (2). Its scatter S is assumed equal to 0:

k := 1;  R := 1;  x^{1*} := x(k);  S_1(z^{1*}) := 0    (14)

where z^{1*} is the first cluster center and x^{1*} is the focal point of the first rule, being the projection of z^{1*} onto the axis x.
At this stage we also initialize the parameters of the RLS algorithm, which is used for on-line estimation of the consequent parameters of the simpl_eTS model:

\hat{\theta}(1) = 0;  C(1) = \Omega I    (15)
A loop is then started, which continues as long as data are available (k > 1). At stage 2 the next input data sample x(k+1) is read. At stage 3 the output at the next time step is estimated by:

\hat{y}(k+1) = \psi^T(k+1) \hat{\theta}(k)    (16)

At the next time step (k := k+1) we can read the true value of the output of the new data point, y(k) (stage 4). At stage 5 the scatter of the new data point z(k) is calculated recursively. Note that the summation is over (k-1), that is, over all data samples up to the previous one:
S_k^G(z(k)) = \frac{1}{(k-1)(n+m)} \sum_{l=1}^{k-1} \sum_{j=1}^{n+m} (z_j(l) - z_j(k))^2    (17)

where S_k^G(z(k)) denotes the global scatter at a data sample z(k) calculated at time instant k = 2, 3, ...
The expression for the local scatter is similar; in what follows we consider the global scatter only, without loss of generality. Instead of using (17) directly, we introduce its recursive version. To arrive at it we expand the squared difference in (17):

\sum_{l=1}^{k-1} \sum_{j=1}^{n+m} (z_j(l) - z_j(k))^2 = (k-1) \sum_{j=1}^{n+m} z_j^2(k) - 2 \sum_{j=1}^{n+m} z_j(k) \sum_{l=1}^{k-1} z_j(l) + \sum_{l=1}^{k-1} \sum_{j=1}^{n+m} z_j^2(l)    (18)

The scatter at the moment of time k can then be expressed recursively as:

S_k^G(z(k)) = \frac{1}{(k-1)(n+m)} \left( (k-1) \sum_{j=1}^{n+m} z_j^2(k) - 2 \sum_{j=1}^{n+m} z_j(k) \beta_j(k) + \gamma(k) \right)    (19)
where \beta_j(k) = \sum_{l=1}^{k-1} z_j(l) and \gamma(k) = \sum_{l=1}^{k-1} \sum_{j=1}^{n+m} z_j^2(l).
It should also be noted that the division is by integer numbers (k, n, and m are integers), while the respective recursive formula in the original eTS algorithm involves division by a real number. The parameters \beta_j(k) and \gamma(k) are recursively updated as follows:

\beta_j(k) = \beta_j(k-1) + z_j(k-1);  \gamma(k) = \gamma(k-1) + \sum_{j=1}^{n+m} z_j^2(k-1)    (20)
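A sketch of this recursion follows (the accumulators mirror Eq. (20); the class name is ours). Each new sample costs O(n+m) operations instead of a full pass over the history:

import numpy as np

class RecursiveGlobalScatter:
    """Global scatter of a new sample via Eqs. (19)-(20)."""
    def __init__(self, dim):
        self.dim = dim                 # n + m
        self.count = 0                 # number of samples folded in (= k-1)
        self.beta = np.zeros(dim)      # beta_j(k) = sum_{l<k} z_j(l)
        self.gamma = 0.0               # gamma(k) = sum_{l<k} sum_j z_j(l)^2

    def scatter(self, z):
        """Scatter S_k(z(k)) of the new sample w.r.t. samples 1..k-1, Eq. (19)."""
        if self.count == 0:
            return 0.0                 # first sample: scatter set to 0, Eq. (14)
        return ((self.count * float(np.sum(z ** 2))
                 - 2.0 * float(np.dot(z, self.beta))
                 + self.gamma)
                / (self.count * self.dim))

    def add(self, z):
        """Fold the sample into the accumulators, Eq. (20)."""
        self.beta += z
        self.gamma += float(np.sum(z ** 2))
        self.count += 1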
After new data become available in on-line mode, they influence the scatter at the centers of the clusters (z^{i*}, i = 1, 2, ..., R), which correspond to the focal points of the existing rules (x^{i*}, i = 1, 2, ..., R). The reason is that, by definition, the global scatter depends on the distance to all data points, including the new ones.
At stage 6 the scatter at the focal points of the existing clusters is updated recursively. To find the recursive expression for the update of the scatter at a focal point z^{i*}, we compare the expressions for the scatter at that focal point calculated at two consecutive steps, (k-1) and k. The scatter calculated at time steps k and k-1 at this centre depends on the averaged accumulated sum of distances between this centre/focal point and all previous points (from 1 to k-1 and to k-2, respectively), according to the definition of the scatter given in (17):
S_{k-1}(z^{i*}) = \frac{1}{(k-2)(n+m)} \sum_{l=1}^{k-2} \sum_{j=1}^{n+m} (z_j(l) - z_j^{i*})^2    (21)

S_k(z^{i*}) = \frac{1}{(k-1)(n+m)} \sum_{l=1}^{k-1} \sum_{j=1}^{n+m} (z_j(l) - z_j^{i*})^2    (22)
From (21) we have:
\sum_{l=1}^{k-2} \sum_{j=1}^{n+m} (z_j(l) - z_j^{i*})^2 = (k-2)(n+m) S_{k-1}(z^{i*})    (21a)
From (22) we have:
S_k(z^{i*}) = \frac{1}{(k-1)(n+m)} \sum_{l=1}^{k-2} \sum_{j=1}^{n+m} (z_j(l) - z_j^{i*})^2 + \frac{1}{(k-1)(n+m)} \sum_{j=1}^{n+m} (z_j(k-1) - z_j^{i*})^2    (22a)
Substituting (21a) into (22a) we obtain the following recursive expression:

S_k(z^{i*}) = \frac{1}{(k-1)(n+m)} \left( (k-2)(n+m) S_{k-1}(z^{i*}) + \sum_{j=1}^{n+m} (z_j(k-1) - z_j^{i*})^2 \right)    (23)

This can be re-written as:

S_k(z^{i*}) = \frac{k-2}{k-1} S_{k-1}(z^{i*}) + \frac{1}{(k-1)(n+m)} \sum_{j=1}^{n+m} (z_j(k-1) - z_j^{i*})^2    (23a)
where S_k(z^{i*}) is the scatter at the i-th center z^{i*}, which is a prototype of the i-th rule at time k.
It should be noted that in this recursive update of the scatter at the focal points there is practically no division except for the (k-2)/(k-1) factor, which leads to computational simplifications compared with the original eTS algorithm.
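A one-line sketch of the update (23a) (the function name and argument conventions are ours; z_new plays the role of z(k-1)):

import numpy as np

def update_focal_scatter(S_prev, z_focal, z_new, k, dim):
    """Scatter at a focal point z^{i*} updated by one new sample, Eq. (23a);
    dim = n + m and S_prev = S_{k-1}(z^{i*})."""
    d2 = float(np.sum((np.asarray(z_focal) - np.asarray(z_new)) ** 2))
    return (k - 2) / (k - 1) * S_prev + d2 / ((k - 1) * dim)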
At stage 7 the scatter of the new data sample is compared to the updated scatter of the centers of the existing clusters. The decision whether to modify or upgrade the rule-base is taken based on the following rule:

IF the scatter at the new data sample z(k) is lower than the scatter at all of the existing rule centers or higher than the scatter at all of the existing centers [3]:

S_k(z(k)) < \min_{i=1,...,R} S_k(z^{i*})  OR  S_k(z(k)) > \max_{i=1,...,R} S_k(z^{i*})    (24)

AND z(k) is close to an existing rule center, given by:

\delta_{min}(k) < 0.5 r    (25)

where \delta_{min}(k) = \min_{i=1,...,R} \| x(k) - x^{i*} \|^2 denotes the distance from the new data point to the closest of the existing rule centers (let us suppose that it has index l),

THEN the new data point z(k) replaces this center:

x^{l*} := x(k);  S_k(z^{l*}) := S_k(z(k));  ASI^l(k) := ASI^l(k) + k;  N^l(k) := N^l(k) + 1    (26)

ELSEIF only (24) is satisfied but not (25), THEN the new data sample is added to the rule-base as a new center, and a new rule is formed with a focal point based on the projection of this center onto the axis x:

R := R + 1;  x^{R*} := x(k);  S_k(z^{R*}) := S_k(z(k));  ASI^R(k) := k;  N^R(k) := 1    (27)

ELSEIF (24) is not satisfied, THEN the data sample is assigned to the nearest existing cluster centre/focal point of a rule:

ASI^i(k) := ASI^i(k) + k;  N^i(k) := N^i(k) + 1;  i = \arg\min_{i=1,...,R} \| x(k) - x^{i*} \|^2    (28)

END
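The decision logic (24)-(28) can be sketched as follows (the container names are ours; S_new is the scatter of the new sample from (19), and the lists hold the per-rule focal points x^{i*}, centers z^{i*}, scatters, ASI values, and populations):

import numpy as np

def update_rule_base(z, x, k, S_new, centers, focals, scatters, ASI, N, r):
    """One stage-7 decision, Eqs. (24)-(28): replace a center, add a new
    rule, or assign the sample to the nearest cluster. Lists are modified
    in place; x and the focal points are NumPy arrays."""
    d2 = [float(np.sum((x - f) ** 2)) for f in focals]  # squared distances to x^{i*}
    i_min = int(np.argmin(d2))
    extreme = S_new < min(scatters) or S_new > max(scatters)    # Eq. (24)
    if extreme and d2[i_min] < 0.5 * r:                         # Eq. (25)
        # the new point replaces the closest center, Eq. (26)
        centers[i_min], focals[i_min] = z, x
        scatters[i_min] = S_new
        ASI[i_min] += k
        N[i_min] += 1
    elif extreme:
        # the new point starts a new rule/cluster, Eq. (27)
        centers.append(z)
        focals.append(x)
        scatters.append(S_new)
        ASI.append(k)
        N.append(1)
    else:
        # assign the sample to the nearest existing cluster, Eq. (28)
        ASI[i_min] += k
        N[i_min] += 1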
It should be noted that we calculate the distance \delta_{min} over the input data only, which differs from [2], where it was calculated as an overall measure over both the antecedents and the consequents. As a result, we disallow rules with similar antecedents to co-exist. Such rules could have entered the rule-base according to the definition of \delta_{min} used in [2] if their consequents were different. Such rules, however, lead to contradictions in their linguistic interpretation, and we therefore adopted the stronger formulation \delta_{min}(k) = \min_{i=1,...,R} \| x(k) - x^{i*} \|^2. In this way, we achieve the same effect as if we removed at a later stage a rule based on its linguistic similarity to a previously accepted rule, as is done in [14].
Instead of a linguistic-based simplification of the rule-base we perform a population-based one. Because the rule-base evolves incrementally, at the moment of its appearance a rule is judged by the data samples existing at that moment, not by future ones. The data pattern can change, and therefore rules once accepted into the rule-base can go out of favor or become less and less relevant in the future. We monitor the population of each cluster, and if it amounts to less than 1% of the total number of data samples at that moment, the cluster/rule is removed from the rule-base by setting its firing level to 0:

IF (N^i(k)/k < 0.01) THEN (\lambda^i := 0)    (29)

This population-based rule-base reduction ensures that the rules remain representative.
In simpl_eTS the rule-base gradually evolves [2]. Therefore the normalized firing strengths of the rules (\lambda^i) change, which affects all the data (including the data collected before the time of the change). Therefore, a straightforward application of the RLS is not correct [5]. The covariance matrices and parameters of the RLS are reset each time a new (R+1)-th rule is added to the rule-base, estimating them as a weighted average of the respective covariances and parameters of the remaining R rules [2].
In simpl_eTS we assume a globally optimal objective function:

J_G(k) = \sum_{l=1}^{k-1} \left( y(l) - \psi^T(l) \theta \right)^2    (30)
A local optimization, which leads to more interpretable local sub-models, is also possible, as in [2] for the original eTS. For the sake of simplicity, however, we consider global optimality here without loss of generality. It leads to better overall performance of the models, though the local interpretability can suffer [2]. The objective (30) can be minimized by the following RLS procedure (stage 8) [5]:
\hat{\theta}(k) = \hat{\theta}(k-1) + C(k) \psi(k) \left( y(k) - \psi^T(k) \hat{\theta}(k-1) \right)    (31)

C(k) = C(k-1) - \frac{C(k-1) \psi(k) \psi^T(k) C(k-1)}{1 + \psi^T(k) C(k-1) \psi(k)}    (32)

where \hat{\theta}(1) = \left[ \hat{\pi}^1(1)^T, \hat{\pi}^2(1)^T, ..., \hat{\pi}^R(1)^T \right]^T = 0;  C(1) = \Omega I;  k = 2, 3, ...
When a new rule is added to the rule-base, the RLS is reset as shown in [5].
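A compact sketch of (31)-(32) for a scalar output (class and parameter names are ours; Omega = 750 is the initialization used in Section IV):

import numpy as np

class GlobalRLS:
    """Recursive least squares for the consequent parameters, Eqs. (31)-(32);
    psi is the firing-level-weighted extended input of Eq. (4)."""
    def __init__(self, dim, omega=750.0):
        self.theta = np.zeros(dim)        # theta_hat(1) = 0, Eq. (15)
        self.C = omega * np.eye(dim)      # C(1) = Omega * I, Eq. (15)

    def update(self, psi, y):
        Cpsi = self.C @ psi
        self.C -= np.outer(Cpsi, Cpsi) / (1.0 + psi @ Cpsi)      # Eq. (32)
        self.theta += (self.C @ psi) * (y - psi @ self.theta)    # Eq. (31)
        return self.theta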
The recursive procedure for on-line learning of the simpl_eTS models can be represented by the following pseudo-code:

Begin simpl_eTS
  /*stage 1: initialize the rule-base*/
  Read z(1);
  Set S_1 := 0; z^{1*} := z(1); R := 1; k := 1;
  DO for k > 1
    Read x(k+1);                                   /*stage 2*/
    Estimate the output \hat{y}(k+1) by (16);      /*stage 3*/
    k := k+1; Read y(k);                           /*stage 4*/
    Calculate S_k(z(k)) recursively by (19)-(20);  /*stage 5*/
    Update S_k(z^{i*}) at each centre by (23a);    /*stage 6*/
    Compare S_k(z(k)) with S_k(z^{i*});            /*stage 7*/
    IF (24) AND (25) hold THEN
      /*replace a rule/cluster, (26)*/
      x^{l*} := x(k); S_k(z^{l*}) := S_k(z(k));
      ASI^l(k) := ASI^l(k) + k; N^l(k) := N^l(k) + 1
    ELSEIF (24) holds THEN
      /*add a new rule/cluster, (27)*/
      R := R+1; x^{R*} := x(k); ASI^R(k) := k;
      N^R(k) := 1; S_k(z^{R*}) := S_k(z(k))
    ELSE
      /*assign the data sample to the nearest cluster, (28)*/
    END
    IF (29) THEN Reduce the rule-base;
    /*stage 8*/
    Estimate the parameters of the local sub-models by RLS (31)-(32);
  END DO
End (simpl_eTS)
IV. EXPERIMENTAL RESULTS
The proposed approach has been tested on a benchmark problem, the Box-Jenkins gas-furnace data.
The Box-Jenkins data set is one of the well-established benchmark problems. It consists of 290 pairs of input/output data taken from a laboratory furnace [6]. Each data sample consists of the methane flow rate, the process input variable u(k), and the CO2 concentration in the off gas, the process output y(k). Different studies indicate that the best model structure for this system is:

y(k) = f(y(k-1), u(k-4))    (35)

The experiments include: i) applying the eTS model to all 290 data samples (note that the eTS model evolves all the time); ii) applying simpl_eTS1 (using scatter instead of potential) to 200 data samples, then fixing the model and validating it with the remaining 90 samples; iii) doing the same but using the Cauchy function instead of the Gaussian one (simpl_eTS2); iv) doing the same plus population-based reduction of the rule-base (simpl_eTS3).
To evaluate the performance of the models we use the RMSE, the non-dimensional error index (NDEI), and the variance accounted for (VAF). The NDEI is the ratio of the root mean square error to the standard deviation of the target data:

NDEI = RMSE / std(y(t))

The VAF represents the proportion of the variance of the real data explained by the model output:

VAF = 100\% \left( 1 - var(y - \hat{y}) / var(y) \right)

and is 100% when the signals \hat{y} and y are identical.
The values of the performance measures were calculated separately for the training and testing data. The value of the cluster radius was r = 0.4, and the initialization parameter for the RLS algorithm was \Omega = 750. These are the only parameters of the algorithm that need to be pre-specified.
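For completeness, both scale-free measures can be computed as follows (a sketch; y_true and y_pred denote the target and model output sequences):

import numpy as np

def ndei(y_true, y_pred):
    """Non-dimensional error index: RMSE over the std. dev. of the target."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / np.std(y_true)

def vaf(y_true, y_pred):
    """Variance accounted for, in percent; 100% for identical signals."""
    return 100.0 * (1.0 - np.var(y_true - y_pred) / np.var(y_true))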
The results are shown in Figs. 1-4 and Table 1.
Table 1. Results for the Box-Jenkins data set

algorithm     r     R    RMSE (testing)   NDEI (testing)   VAF (%)
eTS           0.4   5    0.04904          0.30571          92.134
simpl_eTS1    0.4   3    0.04850          0.30232          92.225
simpl_eTS2    0.4   3    0.04849          0.30041          92.315
simpl_eTS3    0.4   3    0.04849          0.30041          92.315
Fig. 1. Box-Jenkins data (dots); simpl_eTS model (solid line); data samples that originate a new rule (circles); data samples that replace existing centers (triangles); final position of the focal points (asterisks)
The results illustrate that practically the same or slightly higher performance can be achieved with a smaller number of rules (3 instead of 5). This is directly linked to interpretability and simplicity. In addition, simpl_eTS is computationally simpler and more convenient for practical realization on dedicated hardware.
Fig.2 Scatter evolution in the simpl_eTS
Fig.3 Evolution of the population of rules/clusters
Fig.4 Evolution of the age of rules/clusters
V. CONCLUDING REMARKS
We demonstrated that the advantages of the eTS algorithm as a tool for on-line learning of TS fuzzy models can be preserved while the actual learning algorithm is significantly simplified. The new concepts introduced in this paper (a Cauchy-type membership function instead of a Gaussian one, the scatter as a measure of density and summarization ability instead of the potential, the rule age as a representation of the stationarity of the rules, etc.) did not negatively affect the performance of the algorithm, but contributed substantially towards its simplification.
REFERENCES
[1] Angelov P. P. (2002) Evolving Rule-based Models: A Tool for
Design of Flexible Adaptive Systems, Springer-Verlag,
Heidelberg, New York, ISBN 3-7908-1457-1.
[2] Angelov, P., D. Filev, "An approach to on-line identification of
evolving Takagi-Sugeno models", IEEE Trans. on Systems, Man
and Cybernetics, part B, vol.34, No.1, 2004, 484-498
[3] Angelov P., J. Victor, A. Dourado, D. Filev, On-line evolution of
Takagi-Sugeno Fuzzy Models, 2nd IFAC Workshop on
Advanced Fuzzy/Neural Control, 16-17 September 2004, Oulu,
Finland, pp.67-72.
[4] Angelov P., C. Xydeas, D. Filev, On-line Identification of MIMO
Evolving Takagi-Sugeno Fuzzy Models, Proc. IJCNN-FUZZ-
IEEE, Budapest, Hungary, July 2004, ISBN 0-7803-8354-0
[5] Astrom K., B. Wittenmark (1984) Computer Controlled Systems:
Theory and Design, Prentice Hall, NJ USA.
[6] Box G., G. Jenkins (1976) Time Series Analysis: Forecasting and Control, 2nd edition, Holden-Day, San Francisco, CA, USA.
[7] Chiu, S. L., "Fuzzy model identification based on cluster
estimation", J. of Intell. and Fuzzy Syst.vol.2, 267-278, 1994.
[8] Filev D.P., T. Larsson, L. Ma, Intelligent control for automotive manufacturing: rule-based guided adaptation, in Proc. IEEE Conf. IECON'00, Nagoya, Japan, Oct. 2000, pp. 283-288.
[9] Kasabov N., Q. Song (2002) DENFIS: Dynamic Evolving Neural-
Fuzzy Inference System and Its Application for Time-Series
Prediction, IEEE Trans. on Fuzzy Systems 10(2) 144-154.
[10] Lin, F.-J., C.-H. Lin, P.-H. Shen, Self-constructing fuzzy neural
network speed controller for permanent-magnet synchronous
motor drive, IEEE Trans. on Fuzzy Systems 9 (5) 751-759, 2001.
[11] Lughofer E., E.-P. Klement, J.M. Lujan, C. Guardiola, Model-
Based Fault Detection in Multi-Sensor Measurement Systems,
IEEE Symp. on Intelligent Systems, Varna, Bulgaria, June 2004.
[12] Setnes M., J.A. Roubos (1999) Transparent Fuzzy Modeling using
Clustering and GA's, Proc. of the NAFIPS Conference, New
York, USA, pp.198-202.
[13] Takagi, T., M. Sugeno, "Fuzzy identification of systems and its
application to modeling and control", IEEE Trans. on Syst., Man
& Cybernetics, vol. 15, pp. 116-132, 1985.
[14] Victor Ramos J., A. Dourado (2004) On-Line Interpretability by
Fuzzy Rule-base simplification and Reduction, EUNITE 2004,
Aachen, Germany, pp.430-442.
[15] Yager R.R., D.P. Filev (1993) Learning of Fuzzy Rules by
Mountain Clustering, Proc. of SPIE Conf. on Application of
Fuzzy Logic Technology, Boston, MA, USA, pp.246-254.