
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TIA.2015.2468191, IEEE Transactions on Industry Applications.

Hardware/Software Implementation of Fuzzy-Neural-Network Self-Learning Control Methods for Brushless DC Motor Drives

Ahmed Rubaai, Fellow, IEEE, and Paul Young, Student Member, IEEE
Howard University
Electrical and Computer Engineering Department
Washington, DC 20059

Abstract— This paper presents a study of different Fuzzy Neural Network (FNN) learning control methods for Brushless DC Motor (BLDC) drives. The FNN combines fuzzy logic (FL) with the learning capabilities of an Artificial Neural Network. The study designs a FNN structure and defines four different training algorithms for the FNN: Backpropagation (BP), Extended Kalman Filter (EKF), Genetic (GEN), and Particle Swarm Optimization (PSO). These algorithms are examined in the simple application of pattern matching an input set to an output set, in order to determine the strengths and weaknesses of each algorithm. Tests of each learning algorithm against a pattern matching benchmark are carried out in a dSPACE DSP and MATLAB/Simulink environment, which allows for adaptive self-tuning of the weights and memberships of the input parameters. This adds a self-learning capability to the initial fuzzy design for operational adaptivity, and the solution is implemented on real hardware using a BLDC motor drive system. The success of the adaptive FNN-controlled BLDC motor drive system is verified by experimental results. Testing results show that the EKF method is the superior method of the four for this specific application. The BP method was also somewhat successful, nearly matching the pattern but not to the accuracy of the EKF. The GEN and PSO methods did not demonstrate success. Demonstrating the proposed self-learning FNN control on real hardware realizes the solution.

Index Terms- Learning methods, FNN, real-time control, MATLAB/Simulink, industrial drives

I. INTRODUCTION

The control problem is to actuate system behavior according to a desired command. Automated control systems solve the control problem with minimal human intervention. The PID controller is a common industrial control strategy for electrical feedback systems. Two problems exist with the typical PID control response: (1) a tendency to initially overshoot step reference changes; and (2) a tendency to oscillate towards steady state. Separately, another typical problem with many control solutions is a lack of adaptivity of the controller to system changes over time. Additionally, many control algorithms are interesting in theory but lack a practical implementation example. Fuzzy logic (FL) control has emerged as one of the most effective nonlinear control methods; it includes analytical stability analysis for bounded inputs, which is important for fuzzy systems to gain acceptance among the research and industry communities [1]. Several successful applications of FL control to improve the popular PID controller are evident in the research community today to build advanced and practical control systems [2-5]. Fuzzy logic control is robust and non-combinatorial, but may lead to overly rough granulation and non-adaptation. Nonetheless, FL control has various drawbacks. The difficulties stem from the extraction and adjustment of fuzzy rules from input/output data without the provision of on-line adaptive and learning methods. In order to resolve these difficulties, numerous advanced FL control schemes have been introduced for enhanced PID control of motor drives, including hybrid-fuzzy, genetic-fuzzy and fuzzy-neural-network control strategies [6-11]. With a reinforcement adaptive/learning mechanism, an FL self-learning controller can efficiently learn the control law for complex nonlinear ac machine drives. Treating the controller as a process and deriving appropriate derivatives allows for adaptive update of the FL components. The results are very satisfactory and display a high dynamic performance. The combination of an FL controller with the structure of a neural network makes use of the advantages of both the learning capability of the neural network and the robustness of fuzzy logic control, which ought to compensate for the disadvantages in modifying fuzzy control rules or models. Additionally, FNN-based control is found to be a powerful tool for the solution of nonlinear control problems, and is employed to generate and tune fuzzy rules for AC motor drives [12].

Most of the methods developed so far have some drawbacks with respect to generalization and formulation, so parallelization and hybridization are beneficial for these approaches to improve the learning capability and operation of high performance motor drives. The Fuzzy Neural Network (FNN) has acceptance today for application in adaptive control of motor drive systems. The ability of the FNN to train itself to create a control law or a parameter estimation profile is the promise of the future, so the training algorithms for the FNN are crucially important to its successful application. The objective of this study is to examine four different learning algorithms: (1) Backpropagation (BP); (2) Extended Kalman Filter (EKF); (3) Genetic (GEN); and (4) Particle Swarm Optimization (PSO) in the simple application of pattern matching an input set to an output set, and to determine the strengths and weaknesses of each method.


Tests of each learning algorithm by a pattern matching benchmark occur in the MATLAB/Simulink environment. The test results allow for comparison of each algorithm's performance. The FNN structure is introduced to allow for training algorithms, particularly the incorporation of the most promising method of the four as a learning algorithm for industrial drives. A dSPACE DSP board provides controller I/O interfaces and hosts the control algorithm developed in MATLAB/Simulink and auto-compiled into an embedded control process. To the best knowledge of the authors, the FNN membership parameter vector update via PSO and GEN is not seen in the literature.

II. SELF-LEARNING CONTROL TECHNIQUES-BASED FUZZY NEURAL NETWORK

Fuzzy Logic (FL) is an important area of research for control systems. FL makes rule-based decisions regarding degrees of membership of input parameters. The Fuzzy Neural Network (FNN) is an implementation of FL with trainable parameters similar to an Artificial Neural Network (ANN). The techniques for training FNN structures can differ and many methods exist, primarily the gradient descent Backpropagation (BP) and Extended Kalman Filter (EKF) methods. Adaptive training algorithms are gaining research acceptance, particularly Genetic (GEN) and Particle Swarm Optimization (PSO). Different algorithms are possible for the FNN training process. This section demonstrates a practical FNN structure with four learning methods and tests performance. Fig. 1 displays a FNN learning scheme.

Fig. 1 FNN Training

A. Fuzzy Neural Network

The first step is to specifically design a FNN, which can have various forms beyond how this work defines one. In this case, the FNN has four layers: (1) the input layer; (2) the membership layer; (3) the rule layer; and (4) the output layer. This paper only uses a FNN form with two inputs and one output. The membership layer creates a fuzzy membership set for each input, and the determination of the size of each membership set is customizable. A membership size of three is chosen. With more memberships per input the FNN has a finer granularity to make decisions, but requires more parameters. The rules layer specifies rules about a truth-table-like matching of the membership sets of each input. The output layer performs defuzzification by forming a single signal from the rules.

The FNN consists of a 2-input 1-output structure with three memberships per input, thus called a 2x3x1 FNN. The membership layer consists of Gaussian membership functions parameterized by a mean and a sigma each, and the output layer consists of a parameterized weighted summation. Refer to Fig. 2 for a diagram of the FNN structure.

Fig. 2 2x3x1 FNN

The input layer consists of a capture of the two inputs:

y_i^1 = u_i^1   (1)

where i = 1, 2 and y_i^1 is the output of each input node.

The membership layer consists of a Gaussian membership of the input layer outputs:

y_j^2 = \exp\left( -\frac{(y_i^1 - m_j)^2}{(\sigma_j)^2} \right)   (2)

where j = 1:6, i = 1 if 1 <= j <= 3, i = 2 if 4 <= j <= 6, m_j and \sigma_j are parameters, and y_j^2 is the output of each membership node.

The rule layer consists of a table of products of the membership layer outputs:

y_k^3 = (y_l^2)(y_m^2)   (3)

where k = 1:9 and y_k^3 is the output of each rule node.

The output layer is a weighted summation of the rule layer outputs:

y = y_o^4 = \sum_{k=1}^{9} (w_k)(y_k^3)   (4)

where o = 1, y_o^4 is the output of the output node, and w_k are the weights of the output layer. Note that between the 6 means and sigmas of the membership layer (m_j and \sigma_j, j = 1:6) and the 9 weights of the output layer (w_k, k = 1:9) there are 21 parameters for the 2x3x1 FNN. The training of these parameters is an important aspect of the usefulness of the FNN. For further notation across training algorithms, a state vector x(n) shall consist of the parameters at a given time step n:


x(n) = [w_1(n) \;\cdots\; w_9(n) \;\; m_1(n) \;\cdots\; m_6(n) \;\; \sigma_1(n) \;\cdots\; \sigma_6(n)]^T   (5)

Then, notationally wrap the evaluations of eqs. (1)-(5) for state vector x(n) and input vector u(n) as

y(n) = f(x(n), u(n))   (6)
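For concreteness, the following is a minimal sketch (not the authors' code) of the forward evaluation f(x(n), u(n)) of eqs. (1)-(6). The 21-parameter ordering follows eq. (5); the pairing of u1-memberships with u2-memberships in the rule table and the example initial sigma value are assumptions made for illustration.

```python
import numpy as np

def fnn_forward(x, u):
    """Evaluate y = f(x, u) for the 2x3x1 FNN of eqs. (1)-(6)."""
    w, m, s = x[:9], x[9:15], x[15:21]            # parameter ordering of eq. (5)
    y1 = np.asarray(u, dtype=float)               # input layer, eq. (1)
    idx = np.array([0, 0, 0, 1, 1, 1])            # memberships 1-3 read u1, 4-6 read u2
    y2 = np.exp(-((y1[idx] - m) ** 2) / (s ** 2))  # membership layer, eq. (2)
    y3 = np.outer(y2[:3], y2[3:]).ravel()         # rule layer, eq. (3): 3x3 product table
    return float(np.dot(w, y3))                   # output layer, eq. (4)

# Example initial state: random weights in [0, 1] and a "smart" spread of the
# means over the expected input range; the sigma value 0.5 is an assumption.
x0 = np.concatenate([np.random.rand(9),
                     np.tile([-0.5, 0.0, 0.5], 2),
                     np.full(6, 0.5)])
print(fnn_forward(x0, (-0.5, 0.5)))
```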
B. Learning Methods

B.1 Backpropagation Training Algorithm

The BP method trains the parameters of the FNN to match input sets to an output set using the partial derivatives of an error function with respect to each parameter. Given a desired output and a FNN output, define the following error function for BP training:

e(n) = (0.5)\,(y^d(n) - y(n))^2   (7)

The general idea of BP training is to update each parameter of the FNN with the following rule based upon a learning rate:

x_i(n+1) = x_i(n) - \eta_i \,\frac{\partial e(n)}{\partial x_i(n)}   (8)

Thus, in order to update the FNN, finding the partial derivatives of the error with respect to each weight, mean, and sigma is necessary. The partial of the error function with respect to the weights, means and sigmas is straightforward.

Finally, apply the partials of the error with respect to each parameter to the back-propagation update formula of (8):

w_k(n+1) = w_k(n) - \eta_k^w \,\frac{\partial e(n)}{\partial w_k(n)}   (9)

m_j(n+1) = m_j(n) - \eta_j^m \,\frac{\partial e(n)}{\partial m_j(n)}   (10)

\sigma_j(n+1) = \sigma_j(n) - \eta_j^\sigma \,\frac{\partial e(n)}{\partial \sigma_j(n)}   (11)

where \eta_k^w, \eta_j^m and \eta_j^\sigma are learning rates. The learning rates may be tuned by the user. The application of (9)-(11) over a set of inputs and desired outputs trains the FNN.
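A minimal sketch of one training step of eqs. (7)-(11) follows. The paper derives the partials analytically; here they are approximated by central differences purely for illustration, a single rate eta stands in for the separate rates eta_k^w, eta_j^m and eta_j^sigma, and fnn_forward is the forward-pass sketch given earlier.

```python
import numpy as np

def bp_step(x, u, y_d, fnn_forward, eta=0.05, h=1e-6):
    """One gradient-descent update of eq. (8) applied to all 21 parameters."""
    def e(xv):                                    # squared-error cost, eq. (7)
        return 0.5 * (y_d - fnn_forward(xv, u)) ** 2
    grad = np.zeros_like(x)
    for i in range(x.size):                       # de/dx_i for every weight, mean, sigma
        dx = np.zeros_like(x)
        dx[i] = h
        grad[i] = (e(x + dx) - e(x - dx)) / (2.0 * h)
    return x - eta * grad                         # eqs. (9)-(11) with a common rate eta
```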
B.2 EKF Learning Algorithm

The EKF method trains the parameters of the FNN to match input sets to an output set by treating the FNN as a non-linear stationary process. The first step in the EKF method is to recall the state vector x(n) from eq. (5). This vector serves as the baseline for the recursive formulation of the EKF. The next step is the definition of the output gradient vector with respect to each state:

h(n) = \left[ \frac{\partial y(n)}{\partial w(n)} \;\; \frac{\partial y(n)}{\partial m(n)} \;\; \frac{\partial y(n)}{\partial \sigma(n)} \right]^T   (12)

where y is the output of the FNN. Note that h has length 21, for the 9 weight partials and the 6 partials each of the means and sigmas. Deriving these partials is similar to the derivation found in the back-propagation method but without an error function. The partial of the output with respect to the weights, sigmas, and means is straightforward.

The next step of the EKF is to define an innovations process:

\tilde{z}(n) = y^d(n) - y(n)   (13)

A few more parameters need to be defined.
• Variable P(n) is the state error covariance matrix and is initialized to the identity matrix. Because x(n) is [21x1] in this FNN application, P(n) is [21x21].
• Parameter R is the output noise characteristic. Because the FNN has a single output, this parameter is a scalar. The value may be tuned by the engineer, but is based upon E[v(n)v(n)], where v(n) is the difference between the desired output and the FNN output.
• Parameter Q is the system state noise characteristic. Because the FNN has 21 states, this parameter is a [21x21] matrix. The value may be tuned by the engineer, but is based upon E[e(n)e(n)^T], where e(n) is the difference between the next iteration state vector x(n+1) and the state vector x(n).

Finally, the recursion formulas are the following for the EKF [13]:

The Kalman gain matrix equation:

k(n) = \frac{P(n)\,h(n)}{R + h^T(n)\,P(n)\,h(n)}   (14)

The state equation based on the innovations:

x(n+1) = x(n) + k(n)\,\tilde{z}(n)   (15)

The state error covariance equation:

P(n+1) = P(n) - k(n)\,h^T(n)\,P(n) + Q   (16)

Also note, on a periodic interval the state error covariance matrix may need to be reset to the identity matrix to avoid numerical instability:

P(n+1) = I   (17)
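The recursion of eqs. (12)-(17) can be sketched as below, assuming the fnn_forward helper from the earlier sketch. The gradient h(n) is approximated numerically rather than by the analytic partials described above, and the values of R, Q and the reset interval are illustrative tuning choices, not values from the paper.

```python
import numpy as np

def ekf_step(x, P, u, y_d, fnn_forward, R=0.1, q=1e-4, eps=1e-6):
    """One EKF parameter update, eqs. (12)-(16), for the 21-state FNN."""
    n = x.size
    y = fnn_forward(x, u)
    h = np.zeros(n)                               # output gradient h(n), eq. (12)
    for i in range(n):                            # forward-difference stand-in for the
        dx = np.zeros(n)                          # analytic partials described above
        dx[i] = eps
        h[i] = (fnn_forward(x + dx, u) - y) / eps
    z = y_d - y                                   # innovation, eq. (13)
    k = P @ h / (R + h @ P @ h)                   # Kalman gain, eq. (14)
    x_new = x + k * z                             # state update, eq. (15)
    P_new = P - np.outer(k, h @ P) + q * np.eye(n)  # covariance update, eq. (16)
    return x_new, P_new

# P starts as the identity, P = np.eye(21), and may be reset periodically per eq. (17).
```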
B.3 GEN and PSO Learning Algorithms

The GEN method treats the parameters of the FNN as a chromosome, then initializes a population of these chromosomes, and evaluates each chromosome's performance for a cost. Then, at a reproductive stage, a new population is generated by the selection, crossover, and mutation of chromosomes from the previous population.


The PSO method treats the parameters of the FNN as a particle, then initializes a swarm of particles, and evaluates each particle's performance for a cost. Then, at an update stage, a new swarm is generated by guiding the current swarm towards the best overall particle and the individual best performance of each particle, along with a momentum component.

Both population-based methods follow the same outline (a sketch of the swarm variant follows the list):

1. Define a population X(n) at time step n of size N chromosomes (or particles):

X(n) = [x_1(n) \;\; x_2(n) \;\; \cdots \;\; x_N(n)]   (18)

2. Evaluate a fitness function which quantifies performance.
3. Apply an evolutionary or swarm training method as follows:
   • Genetic: Create a new population by, for each new entry in the new population, selecting two members from the old population (based on fitness), crossing them, and performing mutation.
   • Particle Swarm: Create a new swarm based on attraction to the best overall fitness ever achieved, the best performance of each vector "particle", and a momentum of the rate of change.
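The sketch below illustrates steps 1-3 for the particle-swarm variant, where each row of X is a full 21-element FNN parameter vector and fitness() is assumed to be the accumulated squared error over one epoch of training data; the inertia and attraction coefficients are illustrative, not values from the paper. The genetic variant would replace the velocity update with fitness-based selection, crossover, and mutation.

```python
import numpy as np

def pso_epoch(X, V, pbest, pbest_cost, gbest, fitness,
              w_inertia=0.7, c1=1.5, c2=1.5):
    """One swarm update over a population X of FNN parameter vectors."""
    N, dim = X.shape
    cost = np.array([fitness(X[i]) for i in range(N)])   # step 2: evaluate fitness
    improved = cost < pbest_cost                          # keep each particle's best
    pbest = pbest.copy(); pbest_cost = pbest_cost.copy()
    pbest[improved] = X[improved]
    pbest_cost[improved] = cost[improved]
    gbest = pbest[np.argmin(pbest_cost)]                  # best overall particle
    r1 = np.random.rand(N, dim)
    r2 = np.random.rand(N, dim)
    V = (w_inertia * V                                    # momentum of rate of change
         + c1 * r1 * (pbest - X)                          # pull towards personal bests
         + c2 * r2 * (gbest - X))                         # pull towards the best overall
    return X + V, V, pbest, pbest_cost, gbest
```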
III. TESTS PERFORMANCE

The laboratory setup is similar to that reported by the authors in earlier work [14]. The hardware setup realizes the theoretical design and supports a testing environment for operation. The basic components are the motor drive system, the DSP board, and the PC. The driving board is a Moog T200-410 [15]. The driven motor is a 1-hp 3000 r/min three-phase brushless DC servomotor [16]. A variable auto-transformer provides the driving power to the driving circuit with 230 V AC. A 24 V power supply supports the functional board electronics. The driving board interfaces with the DSP, providing a resolver measurement and accepting a control command. The board also interfaces with the PC, which hosts the WinDrive software package that can turn the motor on/off and provides basic control configurations. The DSP board is the dSPACE DS1104 DSP board [17]. Fig. 3 displays a snapshot of the laboratory setup.

Fig. 3 Photo of the laboratory test bench (variable auto-transformer, driving circuit, BLDC motor, torque transducer, PMDC load generator, power supplies, dSPACE DSP board, oscilloscope, torque readout, and the Simulink/MATLAB ControlDesk PC)

An experiment tests the learning of a 2x3x1 FNN as defined in Section II using the four different learning algorithms. The experiment attempts to pattern match a training set of inputs to outputs through the FNN in software simulation. The results of each algorithm's performance constitute the basis of analysis for comparing the learning methods. The MATLAB/Simulink environment provides the programming for the tests. MATLAB functions and scripts provide the execution of a FNN on a parameter set and inputs, the execution of the training methods, and plotting utilities, enabling the comparison of the training methods.

A benchmark test demonstrates a measure of effectiveness for each training algorithm as a tool for improving the performance of a FNN for pattern matching. An epoch of input training data must be defined first as a standard across all algorithms. Three input values are defined for each input: -0.5, 0, and +0.5. Since there are two inputs, this test shall create a table of input pairs across these three values, creating a set of nine input pairs: {(-0.5,-0.5), (-0.5,+0.0), (-0.5,+0.5), (+0.0,-0.5), (+0.0,+0.0), (+0.0,+0.5), (+0.5,-0.5), (+0.5,+0.0), (+0.5,+0.5)}. An arbitrary non-linear function f(u1,u2) creates a sample desired output y value for each input pairing:

y = -u_1 - u_2 + u_1 u_1 + u_2 u_2 + u_1 u_2   (19)

Recall that at a time step the FNN has a state vector of parameters. The training of these parameters allows for the FNN to pattern match an input pair to an output value. Any comparative analysis of training algorithms depends on the initial state of this vector – or the initial "guess" of the FNN as to how to pattern match. The closer the initial state is to matching the desired pattern, the less training is necessary. The initial state selection for this experiment requires selecting 9 weights (for the output layer), 6 means (for the membership layer), and 6 sigmas (for the membership layer). The selection of the means and sigmas for this test shall be a smart initial guess, in that the initial means spread the memberships across the expected inputs. The selection of the initial weights shall be random between 0 and 1. If the initial points matched the desired points immediately, to within a margin of error, then there would be no further problem to explore. The fundamental aspect of this section is the exploration of training algorithms to match this pattern from this initial condition benchmark. Table II details the initial states. A battery of tests demonstrates the effectiveness of the learning methods. A test of each method on the benchmark case is performed. Parameter update occurs at every step for the BP and EKF methods. The GEN and PSO methods shall update at the end of each epoch of training data. The plots of the initial Gaussian memberships based upon the initial means and sigmas follow in Figs. 4a and 4b.
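A minimal sketch of the benchmark data and of the per-epoch cost bookkeeping described above follows. Here train_step is a stand-in for any per-sample update, such as a wrapped version of the bp_step or ekf_step sketches given earlier (for GEN and PSO the update would instead occur once per epoch).

```python
import itertools

# The nine training pairs over {-0.5, 0, +0.5} and the target of eq. (19).
inputs = list(itertools.product([-0.5, 0.0, 0.5], repeat=2))
targets = [-u1 - u2 + u1 * u1 + u2 * u2 + u1 * u2 for (u1, u2) in inputs]

def run_epochs(x, fnn_forward, train_step, n_epochs=200):
    """Per-sample training (as used for BP/EKF) with an accumulated cost per epoch."""
    history = []
    for _ in range(n_epochs):
        cost = 0.0
        for u, y_d in zip(inputs, targets):
            x = train_step(x, u, y_d)                  # update after each time step
            cost += (y_d - fnn_forward(x, u)) ** 2     # squared tracking error
        history.append(cost)                           # one point per training epoch
    return x, history
```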


For the BP test, cycle 200 training epochs with 9 steps per epoch. A parameter update occurs after each time step. Fig. 5 details the accumulated error cost (squared error between desired output and actual output) from each epoch of training. Notice how the error decreases from the large initial error to a smaller error and stabilizes. Fig. 6 plots the final pattern of the final FNN state for matching the desired pattern, in comparison to the initial pattern of the initial FNN state. Note that the final and desired values are very close to each other, and much improved over the initial. The BP test demonstrates moderate success in achieving a pattern matching FNN for the given input data. Figs. 7a and 7b show the membership functions of the membership sets of the final FNN state, for inputs u1 and u2 respectively.

Fig. 4a FNN initial error signal memberships

Fig. 4b FNN initial delta signal memberships

Fig. 5 BP FNN training performance

Fig. 6 BP FNN initial and final pattern

Fig. 7a BP final membership for input 1

For the EKF test, cycle 200 training epochs with 9 steps per epoch. Each epoch step tests an element of the FNN training data set. A parameter update occurs after each time step. Fig. 8 details the accumulated error cost (squared error between desired output and actual output) from each epoch of training. Notice how the error decreases to a negligible amount close to zero. Fig. 9 plots the final pattern of the final FNN state for matching the desired pattern, in comparison to the initial pattern of the initial FNN state. Note that the final and desired values effectively match. Figs. 10a and 10b display the membership functions of the membership sets of the final FNN state, for inputs u1 and u2 respectively. The EKF test demonstrates success in achieving a pattern matching FNN for the given input data.


Fig. 7b BP final membership for input 2

Fig. 8 EKF FNN training performance

Fig. 9 EKF FNN initial and final pattern

Fig. 10a EKF FNN final membership set for input 1

Fig. 10b EKF FNN final membership set for input 2

For the GEN test, start from the initial FNN condition to create a population of 10 chromosomes; cycle 30 training epochs for the 10 chromosomes with 9 steps per epoch per chromosome, in essence making 90 steps per epoch. A parameter update occurs at the end of each epoch. Fig. 11 details the accumulated error cost (squared error between desired output and actual output) from each epoch of training by chromosome. Notice how the error moves about but then stabilizes as the genetic algorithm converges. Fig. 12 plots the final pattern of the final FNN state for matching the desired pattern, in comparison to the initial pattern of the initial FNN state. Note that the final and desired values are not very close to each other, perhaps slightly improved over the initial. Figs. 13a and 13b show the membership functions of the membership sets of the final FNN state, for inputs 1 and 2 respectively. The GEN test demonstrates very little success in achieving a pattern matching FNN for the given input data.


Fig. 11 Genetic FNN training performance

Fig. 12 Genetic FNN initial and final pattern

Fig. 13a GEN FNN final membership set for input 1

Fig. 13b GEN FNN final membership set for input 2

For the PSO test, start from the initial FNN condition from Table IX to create a swarm of 10 particles; cycle 30 training epochs for the 10 particles with 9 steps per epoch per particle, in essence making 90 steps per epoch. Each epoch step tests an element of the FNN training data set. A parameter update occurs at the end of each epoch. Fig. 14 details the accumulated error cost (squared error between desired output and actual output) from each epoch of training by particle. Notice how the error moves about but then stabilizes to a large error as the swarm converges. Fig. 15 plots the final pattern of the final FNN state for matching the desired pattern, in comparison to the initial pattern of the initial FNN state. Note that the final and desired values are not close to each other, perhaps slightly improved over the initial. The PSO test demonstrates very little success in achieving a pattern matching FNN for the given input data. Figs. 16a and 16b illustrate the membership functions of the membership sets of the final FNN state, for inputs u1 and u2 respectively.

Fig. 14 PSO FNN training performance


Fig. 15 PSO FNN initial and final pattern

Fig. 16a PSO FNN final membership set for input 1

Fig. 16b PSO FNN final membership set for input 2

The testing results show a range of success for FNN pattern matching training. The BP training effectively trains the pattern matching within 100 epochs to a reasonable error level. The final BP pattern approximately matches the desired pattern. The EKF training effectively trains the pattern matching within 25 epochs. The final EKF pattern closely matches the desired pattern. The GEN training ineffectively trains the pattern matching, stabilizing to a poor solution within 5 epochs (but note that the epoch is 10 times the size of a gradient epoch for BP or EKF). The final GEN pattern does not match the desired pattern. The PSO training ineffectively trains the pattern matching, roughly stabilizing to a poor solution within 10 epochs (but note that the epoch is 10 times the size of a gradient epoch for BP or EKF). The final PSO pattern does not match the desired pattern. The EKF training method is the best training method of the four under test for this exercise, with BP performing admirably well as a second best, while the GEN and PSO methods are unsuccessful.

The EKF method performs most successfully in this benchmark, but there are benefits and costs to each method. For one, the EKF method is computationally intense. The BP method performs fewer operations per iteration, even if its performance is not as good as the EKF. Furthermore, the use of evolutionary algorithms in this benchmark is not emblematic of all possible uses. A greater population size, different initial condition, or modified mutation/swarm algorithm might render the GEN and PSO methods more competitive.

IV. LABORATORY IMPLEMENTATION OF LEARNING ALGORITHMS-BASED FNN PI- AND PD-TYPE CONTROLLER

The laboratory implementation of the FNN controller involves controlling a BLDC motor drive system using a PI/PD FNN controller system. The PI/PD controller mimics PID control and is composed of two FNN structures. The PI FNN generates a delta control signal as an output that sums with a state parameter of the previous PI control signal to create the current PI control signal. The PD FNN generates a PD control signal directly. The summation of the PI and PD control signals creates the control signal to send to the motor/load. Each FNN receives as input the reference error and the derivative of the reference error with respect to time. As in PID control, the integral and derivative of the error signal are created as part of the control law. However, instead of PID control, the control law is built upon fuzzy logic. Fig. 17 shows the FNN PI/PD control structure. Details of the control structure are similar to those reported in earlier work [13] and are not repeated in this paper. The nominal test case shall demonstrate success and further test cases may demonstrate strengths and weaknesses of the controller. Four training methods to update the parameters of the FNN are examined. The backpropagation (BP) method is common, and the Extended Kalman Filter (EKF) method is also popular. Additionally, the Genetic (GEN) and Particle Swarm Optimization (PSO) methods show promise towards training a FNN. The examination of FNN training methods shall involve comparing these four different training methods to train a randomly initialized 2x3x1 FNN. The test data shall be a table of 3 input values per input, making 9 input pairings corresponding to 9 output values. The FNN should match the input pattern to the output after training is complete. The BP and EKF methods shall update at each iteration. The GEN and PSO methods shall update after the end of each 9-input cycle. The results shall be examined for the quality of the pattern matching and the speed at which the training occurred. Further conclusions shall be made as to the specific strengths of each algorithm.
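A minimal sketch, under assumed names, of the PI/PD FNN combination described above is given below: the PI FNN produces an increment that is accumulated onto the previous PI signal, the PD FNN produces its signal directly, and the sum is the command sent to the drive. Here fnn_pi and fnn_pd stand for two trained 2x3x1 FNN evaluations (e.g. closures over the fnn_forward sketch of Section II); the on-line updating of their parameters by the learning algorithms is not shown.

```python
class FnnPiPdController:
    """Sums an incremental (PI-like) FNN branch with a direct (PD-like) FNN branch."""

    def __init__(self, fnn_pi, fnn_pd, dt):
        self.fnn_pi = fnn_pi      # e.g. lambda eu: fnn_forward(x_pi, eu), x_pi trained on-line
        self.fnn_pd = fnn_pd
        self.dt = dt              # controller sampling period
        self.u_pi = 0.0           # state: previous PI control signal
        self.e_prev = 0.0

    def step(self, speed_ref, speed_meas):
        e = speed_ref - speed_meas            # reference error
        de = (e - self.e_prev) / self.dt      # derivative of the reference error
        self.e_prev = e
        self.u_pi += self.fnn_pi((e, de))     # delta output summed with previous PI signal
        u_pd = self.fnn_pd((e, de))           # PD FNN generates its signal directly
        return self.u_pi + u_pd               # command sent to the motor/load
```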

Fig. 17 FNN PD- and PI control structure

A. Experimental Results

Several test cases were completed in the laboratory to evaluate the performance of the FNN controller based on the four learning algorithms. However, only salient results are reported in this paper. The weights and membership functions of the FNN were chosen randomly and then trained using the four training algorithms. The algorithms trained over a window of several iterations by modifying the weights and membership functions, which are used during FNN learning to improve the tracking performance of the motor speed. In all cases, the actual speed is superimposed on the desired reference speed in order to compare the tracking accuracy. To test the load characteristics of the FNN controller, a PM dc generator that produced torque proportional to the speed was coupled with the BLDC motor through a torque transducer. Plots captured in the laboratory using dSPACE DSP ControlDesk validate each of these test results.

In the first test, a sinusoidal-wave trajectory with an amplitude of 1000 rpm was considered. The motor was operating under radial load in the clockwise or positive rotating direction. The results are summarized in Figs. 18(a) and (b). Fig. 18(a) shows the tracking performance of the rotor speed under the EKF learning algorithm. It is observed that the proposed controller brings the actual speed to the desired value smoothly and with a small amount of steady-state error, but without overshoot or an oscillatory response. The test also shows that for arbitrary initial conditions, the training can improve the tracking performance. This is a practically key finding, since the values of the adjustable parameters (weights and membership functions) of the FNN are initialized randomly and the EKF training algorithm is able to adjust these weights and membership functions gradually to their accepted values. Clearly, the EKF-based controller eliminates both overshoot and the extent of oscillations under loading conditions. Furthermore, the torque is observed to spike when the speed changes direction from one level to another. Fig. 18(b) shows the progress of the error signals during the learning process.

Fig. 18a Speed tracking via EKF-based FNN controller

Fig. 18b Error and delta error signals during training

For comparison purposes, Figs. 19(a) and (b) exhibit a case where the EKF algorithm was replaced by a PSO learning algorithm. Clearly, the system response is unsatisfactory. It shows that there is significant degradation in the tracking performance. The PSO algorithm is unable to adjust the values of the adjustable parameters (weights and membership functions) of the FNN from a random initial state to display tracking similar to that of Fig. 18(a). There is a phase shift in the response, and the actual speed lags the desired speed by a relatively large time-varying angle. Clearly, the tracking performance of the PSO-based algorithm exhibits a major amount of steady-state error as the measured speed approaches the desired speed. Increasing the sampling rate might thus produce better training due to a higher amount of information. Further training will result in better tracking accuracy. The error signals for this tracking are shown in Fig. 19(b). They provide an indication of how poorly the PSO algorithm performs in generating a control signal that forces the actual speed to follow the desired reference speed at all times. The results illustrated in Figs. 18(a) and (b) are clearly superior to those shown in Figs. 19(a) and (b).

The above test was repeated with the GEN learning algorithm running under the same conditions, and the results obtained are similar to those shown in Fig. 19a.


Because of the matching test results, we also conducted the experimental test using a square-wave trajectory with the same amplitude of 1000 rpm to distinguish its performance from the earlier experiment that utilized the sinusoidal-wave trajectory. The results are summarized in Figs. 20(a) and (b). Fig. 20(a) illustrates the tracking performance for the specified square-wave trajectory. Fig. 20(b) displays the progress of the error signals during the learning process. The GEN test demonstrates very little success in achieving a desirable tracking performance. The ongoing investigation is directed at further developing the GEN and PSO methods considered in this paper and applying them in laboratory environments to improve the tracking performance of BLDC motor drives. More work is needed in defining an effective use of the GEN and PSO learning methods towards training a FNN. Perhaps only a subset of the parameters of the FNN should be trained via these algorithms, or perhaps better tuning of the algorithm parameters would yield more positive results.

Finally, to display the efficacy of the learning algorithms further, the PSO learning algorithm was replaced by a BP algorithm. The corresponding trajectories of the actual speed and the motor voltage are plotted in Figs. 21(a) and (b). Fig. 21(a) shows the test response obtained by the BP algorithm; it shows a response similar to that of Fig. 18a (tracking under EKF) but with oscillations and overshoot. Furthermore, there is a discrepancy in the phase response. These oscillations and phase distortions in the response might be due to either slow learning or some kind of memory effect. It is to be noted that the EKF-based algorithm offers a healthier response in terms of oscillations, phase distortions, and overshoot. Fig. 21(b) displays the motor voltage sequence generated by the BP controller network during a segment of the learning. The BP controller exhibits oscillations around steady state and some overshoot.

Fig. 19a Speed tracking via PSO-based FNN controller

Fig. 19b Error and delta error signals during training

Fig. 20a Speed tracking via GEN-based FNN controller

Fig. 20b Error and delta error signals during training


Fig. 21a Speed tracking via BP-based FNN controller

Fig. 21b Control signal for the BP learning algorithm

V. CONCLUSIONS

In this paper, a study of different training methods for a FNN given a benchmark training set was presented. The study defines four different training algorithms for the FNN: Backpropagation (BP), Extended Kalman Filter (EKF), Genetic (GEN), and Particle Swarm Optimization (PSO). Tests of each learning algorithm against a pattern matching benchmark occur in the MATLAB/Simulink environment. Testing results showed that the EKF training algorithm was the superior method of the four for this specific application, with BP coming in a respectable second. The GEN and PSO methods were ineffective in training the FNN to match the benchmark pattern in the tests performed. If the processing demands of the EKF are not available to the user, the BP method is slower and carries more final error, but is also effective. The BP method was also somewhat successful, nearly matching the pattern but not to the accuracy of the EKF. Unfortunately, neither the GEN nor the PSO training design demonstrates much success compared to the BP and EKF techniques.

The EKF learning method is the best training method of the four under test for this exercise, with BP performing admirably well as a second best, while the GEN and PSO methods are unsuccessful. The EKF method performs most successfully in this benchmark, but there are benefits and costs to each method. For one, the EKF method is computationally intense. The BP method performs fewer operations per iteration, even if its performance is not as good as the EKF. Furthermore, the use of evolutionary algorithms in this benchmark is not emblematic of all possible uses. A greater population size, different initial condition, or modified mutation/swarm algorithm might render the GEN and PSO methods more competitive.

More work is needed in defining an effective use of the GEN and PSO methods towards training a FNN. Perhaps only a subset of the parameters of the FNN should be trained via these algorithms, or perhaps better tuning of the algorithm parameters would yield more positive results.

REFERENCES

[1] B. M. Mohan and A. Sinha, "Analytical Structure and Stability Analysis of a Fuzzy PID Controller," Applied Soft Computing, vol. 8, pp. 749-758, 2008.

[2] Bao-Gang Hu, G. K. I. Mann, and R. Gosine, "A Systematic Study of Fuzzy PID Controllers – Function-Based Evaluation Approach," IEEE Transactions on Fuzzy Systems, vol. 9, no. 5, pp. 699-712, Oct. 2001.

[3] A. Rubaai, M. J. Castro-Sitiriche, and A. R. Ofoli, "Design and Implementation of Parallel Fuzzy PID Controller for High-Performance Brushless Motor Drives: An Integrated Environment for Rapid Control Prototyping," IEEE Transactions on Industry Applications, vol. 44, no. 4, pp. 1090-1098, July/Aug. 2008.

[4] A. Sant and K. R. Rajagopal, "PM Synchronous Motor Speed Control Using Hybrid Fuzzy-PI With Novel Switching Functions," IEEE Transactions on Magnetics, vol. 45, no. 10, pp. 4672-4675, Oct. 2009.

[5] A. Rubaai, D. Ricketts, and M. D. Kankam, "Experimental Verification of a Hybrid Fuzzy Control Strategy for a High Performance Brushless DC Drive System," IEEE Transactions on Industry Applications, vol. 37, no. 2, pp. 503-512, March/April 2001.

[6] Yi-Pin Kuo and Tzuu-Hseng S. Li, "GA-Based Fuzzy PI/PD Controller for Automotive Active Suspension System," IEEE Transactions on Industrial Electronics, vol. 46, no. 6, pp. 1051-1056, Dec. 1999.

[7] M. Masiala, B. Vafakhah, J. Salmon, and A. M. Knight, "Fuzzy Self-Tuning Speed Control of an Indirect Field-Oriented Control Induction Motor Drive," IEEE Transactions on Industry Applications, vol. 44, no. 6, pp. 1732-1740, Nov./Dec. 2008.


[8] A. Rubaai, M. J. Castro-Sitiriche, and A. R. Ofoli, "DSP-Based Laboratory Implementation of Hybrid Fuzzy-PID Controller Using Genetic Optimization for High Performance Motor Drives," IEEE Transactions on Industry Applications, vol. 44, no. 6, pp. 1977-1986, Nov./Dec. 2008.

[9] M. N. Uddin, M. A. Abido, and M. A. Rahman, "Real-Time Performance Evaluation of a Genetic-Algorithm-Based Fuzzy Logic Controller for IPM Motor Drives," IEEE Transactions on Industry Applications, vol. 41, no. 1, pp. 246-252, Jan./Feb. 2005.

[10] Faa-Jeng Lin, Li-Tao Teng, Jeng-Wen Lin, and Syuan-Yi Chen, "Recurrent Functional-Link-Based-Neural-Network-Controlled Induction-Generator System Using Improved Particle Swarm Optimization," IEEE Transactions on Industrial Electronics, vol. 56, no. 5, pp. 1557-1577, May 2009.

[11] Faa-Jeng Lin, Li-Tao Teng, and Hen Chu, "A Robust Recurrent Wavelet Neural Network Controller With Improved Particle Swarm Optimization for Linear Synchronous Motor Drive," IEEE Transactions on Power Electronics, vol. 23, no. 6, pp. 3067-3078, Nov. 2008.

[12] A. Rubaai, D. Ricketts, and M. D. Kankam, "Development and Implementation of an Adaptive Fuzzy-Neural-Network Controller for Brushless Drives," IEEE Transactions on Industry Applications, vol. 38, no. 2, pp. 441-447, March/April 2002.

[13] A. Rubaai and P. Young, "EKF-Based PI/PD-Like Fuzzy Neural Network Controller for Brushless Drives," IEEE Transactions on Industry Applications, vol. 47, no. 6, pp. 2391-2401, Nov./Dec. 2011.

[14] A. Rubaai, J. Jerry, and S. Smith, "Performance Evaluation of Fuzzy Switching Position Controller for Automation and Process Industry Control," IEEE Transactions on Industry Applications, vol. 47, no. 5, pp. 2274-2282, Sept./Oct. 2011.

[15] dSPACE User's Guide, Digital Signal Processing and Control Engineering, dSPACE, Paderborn, Germany, 2003.

[16] G413-817 Technical Data Manual, Moog Aerospace, East Aurora, New York, 2000.

[17] T200-410 Technical Data Manual, Moog Aerospace, East Aurora, New York, 2000.

0093-9994 (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
