This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TIA.2015.2468191, IEEE Transactions on Industry Applications
Abstract— This paper presents a study of different Fuzzy Neural Network (FNN) learning control methods for Brushless DC Motor (BLDC) drives. The FNN combines fuzzy logic (FL) with the learning capabilities of an artificial neural network. The study designs a FNN structure and defines four different training algorithms for the FNN: Backpropagation (BP), Extended Kalman Filter (EKF), Genetic (GEN), and Particle Swarm Optimization (PSO). These algorithms are examined in the simple application of pattern matching an input set to an output set to determine the strengths and weaknesses of each algorithm. Tests of each learning algorithm by a pattern matching benchmark are achieved via the dSPACE DSP MATLAB/Simulink environment and allow for adaptive self-tuning of the weights and memberships of the input parameters. This adds a self-learning capability to the initial fuzzy design for operational adaptivity, and the solution is implemented on real hardware using a BLDC motor drive system. The success of the adaptive FNN-controlled BLDC motor drive system is verified by experimental results. Testing results show that the EKF method is the superior method of the four for this specific application. The BP method was also somewhat successful, nearly matching the pattern but not to the accuracy of the EKF. The GEN and PSO methods did not demonstrate success. Demonstrating the proposed self-learning FNN control on real hardware realizes the solution.

Index Terms— Learning methods, FNN, real-time control, MATLAB/Simulink, industrial drives

I. INTRODUCTION

The control problem is to actuate system behavior according to a desired command. Automated control systems solve the control problem with minimal human intervention. The PID controller is a common industrial control strategy for electrical feedback systems. Two problems exist with the typical PID control response: (1) a tendency to initially overshoot step reference changes; and (2) a tendency to oscillate towards steady state. Separately, another typical problem with many control solutions is a lack of adaptivity of the controller to system changes over time. Additionally, many control algorithms are interesting in theory but lack a practical implementation example. Fuzzy logic (FL) control has emerged as one of the most effective nonlinear control methods and includes analytical stability analysis for bounded inputs, important for fuzzy systems to gain acceptance among research and industry communities [1]. Several successful applications of FL control to improve the popular PID controller are evident in the research community today to build advanced and practical control systems [2-5]. Fuzzy logic control is robust and non-combinatorial, but may lead to poor rough granulation and non-adaptation. Nonetheless, FL control has various drawbacks. The difficulties descend from the extraction and adjustment of fuzzy rules from input/output data without the provision of on-line adaptive and learning methods. In order to resolve these difficulties, numerous advanced FL control schemes are introduced for enhanced PID control of motor drives, including hybrid-fuzzy, genetic-fuzzy and fuzzy-neural-network control strategies [6-11]. With a reinforcement adaptive/learning mechanism, a FL self-learning controller can efficiently learn the control law for complex nonlinear ac machine drives. Treating the controller as a process and deriving appropriate derivatives allows for adaptive updates to the FL components. The results are very satisfactory and display a high dynamic performance. The combination of a FL controller with the structure of a neural network makes use of the advantages of both the learning capability of the neural network and the robustness of fuzzy logic control, which ought to compensate for the disadvantages of modification of fuzzy control rules or models. Additionally, FNN-based control is found to be a powerful tool for the solution of nonlinear control problems, and is employed to generate and tune fuzzy rules for AC motor drives [12].

Most of the methods developed so far have some drawbacks as to generalization and formulation, so parallelization and hybridization are beneficial for these approaches to improve the learning capability and operation of high performance motor drives. The Fuzzy Neural Network (FNN) has acceptance today for application in adaptive control of motor drive systems. The ability of the FNN to train itself to create a control law or a parameter estimation profile is the promise of the future, so the training algorithms for the FNN are crucially important to its successful application. The objective of this study is to examine four different learning algorithms: (1) Backpropagation (BP); (2) Extended Kalman Filter (EKF); (3) Genetic (GEN); and (4) Particle Swarm Optimization (PSO) in the simple application of pattern matching an input set to an output set and determine the
0093-9994 (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See
http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
strengths and weaknesses of each method. Tests of each learning algorithm by a pattern matching benchmark occur in the MATLAB/Simulink environment. The test results allow for comparison of each algorithm's performance. The FNN structure is introduced to allow for training algorithms, particularly the incorporation of the most promising method of the four as a learning algorithm for industrial drives. A dSPACE DSP board provides controller I/O interfaces and hosts the control algorithm developed in MATLAB/Simulink and auto-compiled into an embedded control process. To the best knowledge of the authors, the FNN membership parameter vector update via PSO and GEN is not seen in the literature.

…matching of the membership sets of each input. The output layer performs defuzzification by forming a single signal from the rules. The FNN consists of a 2-input 1-output structure with three memberships per input, thus called a 2x3x1 FNN. The membership layer consists of Gaussian membership functions parameterized by a mean and sigma each, and the output layer consists of a parameterized weighted summation. Refer to Fig. 2 for a diagram of the FNN structure.
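The 2x3x1 structure just described can be sketched in code. The paper's own implementation is in MATLAB/Simulink; the following Python/NumPy sketch is illustrative only and assumes a product-form rule layer and an unnormalized weighted sum for defuzzification, details the text defers to Fig. 2:

```python
import numpy as np

def fnn_forward(x, u):
    """Forward pass of a 2x3x1 fuzzy neural network.

    x : parameter vector of length 21, laid out as in eq. (5):
        9 output weights w, 6 membership means m, 6 membership sigmas
        (3 means/sigmas per input).
    u : the two inputs (u1, u2).
    """
    w = x[:9]
    m = x[9:15].reshape(2, 3)   # means, one row per input
    s = x[15:21].reshape(2, 3)  # sigmas, one row per input
    # Gaussian membership grades, 3 per input
    mu = np.exp(-((np.asarray(u)[:, None] - m) ** 2) / (2.0 * s ** 2))
    # rule firing strengths: all 3x3 products of one grade per input
    f = np.outer(mu[0], mu[1]).ravel()
    # defuzzification: weighted sum of the nine firing strengths
    return float(np.dot(w, f))
```

With all means at the input point, every membership grade is 1 and the output reduces to the sum of the nine weights, which gives a quick sanity check of the layout.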
x(n) = [w_1(n) … w_9(n)  m_1(n) … m_6(n)  σ_1(n) … σ_6(n)]^T   (5)

Then, notationally wrap the evaluations of eqs. (1)-(5) for state vector x(n) and input vector u(n) as

y(n) = f(x(n), u(n))   (6)

B. Learning Methods

B.1 Backpropagation Training Algorithm

The BP method trains the parameters of the FNN to match input sets to an output set using the partial derivatives of each parameter to an error function. Given a desired output and a FNN output, define the following error function for BP training:

e(n) = (0.5)(y_d(n) − y(n))^2   (7)

The general idea of BP training is to update each parameter of the FNN with the following rule based upon a learning rate η_i:

x_i(n+1) = x_i(n) − η_i ∂e(n)/∂x_i(n)   (8)

Thus, in order to update the FNN, finding the partial derivatives of the error with respect to each weight, mean, and sigma is necessary. The partial of the error function with respect to the weights, means, and sigmas is straightforward. Finally, apply the partials of the error with respect to each parameter to the back-propagation update formula of (8):

w_k(n+1) = w_k(n) − η_k^w ∂e(n)/∂w_k(n)   (9)

m_j(n+1) = m_j(n) − η_j^m ∂e(n)/∂m_j(n)   (10)

σ_j(n+1) = σ_j(n) − η_j^σ ∂e(n)/∂σ_j(n)   (11)

where η_k^w, η_j^m and η_j^σ are learning rates that may be tuned by the user. The application of (9)-(11) over a set of inputs and desired outputs trains the FNN.

B.2 EKF Learning Algorithm

The EKF method trains the parameters of the FNN to match input sets to an output set by treating the FNN as a non-linear stationary process. The first step in the EKF method is to recall the state vector x(n) from eq. (5). This vector serves as the baseline for the recursive formulation of the EKF. The next step is the definition of the output gradient vector with respect to each state:

h(n) = [∂y(n)/∂w(n)  ∂y(n)/∂m(n)  ∂y(n)/∂σ(n)]^T   (12)

where y is the output of the FNN. Note that h has length 21 for the 9 weight partials and the 6 partials each of means and sigmas. Deriving these partials is similar to the derivation found in the back-propagation method but without an error function. The partial of the output with respect to the weights, sigmas, and means is straightforward.

The next step of the EKF is to define an innovations process

z̃(n) = y_d(n) − y(n)   (13)

A few more parameters need to be defined:
- Variable P(n) is the state error covariance matrix and is initialized to the identity matrix. Because x(n) is [21x1] in this FNN application, P(n) is [21x21].
- Parameter R is the output noise characteristic. Because the FNN has a single output, this parameter is a scalar. The value may be tuned by the engineer, but is based upon E[v(n)v(n)], where v(n) is the difference of the desired output minus the FNN output.
- Parameter Q is the system state noise characteristic. Because the FNN has 21 states, this parameter is a [21x21] matrix. The value may be tuned by the engineer, but is based upon E[e(n)e(n)^T], where e(n) is the difference of the next iteration state vector x(n+1) minus the state vector x(n).

Finally, the recursion formulas are the following for the EKF [13]:

The Kalman gain matrix equation

k(n) = P(n)h(n) / (R + h^T(n)P(n)h(n))   (14)

The state equation based on the innovations

x(n+1) = x(n) + k(n)z̃(n)   (15)

The state error covariance equation

P(n+1) = P(n) − k(n)h^T(n)P(n) + Q   (16)

Also note, on a periodic interval the state error covariance matrix may need to be reset to the identity matrix to avoid numerical instability:

P(n+1) = I   (17)

B.3 GEN and PSO Learning Algorithms

The GEN method treats the parameters of the FNN as a chromosome, then initializes a population of these chromosomes, and evaluates each chromosome under performance for a cost. Then, at a reproductive stage, a new population is generated by the selection, crossover, and mutation of chromosomes from the previous population.

The PSO method treats the parameters of the FNN as a particle, then initializes a swarm of particles, and evaluates
each particle under performance for a cost. Then, at an update stage, a new swarm is generated by guiding the current swarm towards the best overall particle and the individual best performances of each particle along with a momentum component.

1. Define a population X(n) at time step n of size N chromosomes:

   X(n) = [x_1(n)  x_2(n)  …  x_N(n)]   (18)

2. Evaluate a fitness function which quantifies performance.
3. Apply an evolutionary or swarm training method as follows:

   Genetic: Create a new population by, for each new entry in the new population, selecting two from the old (based on fitness), crossing them, and performing mutation.

   Particle Swarm: Create a new swarm based on attraction to the best overall fitness ever achieved, the best performance of each vector "particle", and a momentum of rate of change.

III. TESTS PERFORMANCE

The laboratory setup is similar to that reported by the authors in earlier work [14]. The hardware setup realizes the theoretical design and supports a testing environment for operation. The basic components are the motor drive system, the DSP board, and the PC. The driving board is a Moog T200-410 [15]. The driven motor is a 1-hp 3000 r/min three-phase brushless DC servomotor [16]. A variable auto-transformer provides the driving power to the driving circuit with 230V AC. A 24V power supply supports the functional board electronics. The driving board interfaces with the DSP, providing a resolver measurement and accepting a control command. The board also interfaces with the PC, which hosts the WinDrive software package that can turn the motor on/off and provides basic control configurations. The DSP board is the dSPACE DS1104 DSP board [17]. Fig. 3 displays a snapshot of the laboratory setup.

[Fig. 3 Photo of the laboratory test bench: variable auto-transformer, driving circuit, BLDC motor coupled to a PMDC load generator through a torque transducer, 24V/42V power supplies, dSPACE DSP board, torque readout and oscilloscope, and a PC running Simulink/MATLAB and ControlDesk/dSPACE.]

An experiment tests the learning of a 2x3x1 FNN as defined in section II using the four different learning algorithms. The experiment attempts to pattern match a training set of inputs to outputs through the FNN in software simulation. The results of each algorithm's performance constitute the basis of analysis for comparing the learning methods. The MATLAB/Simulink environment provides the programming for the tests. MATLAB functions and scripts for the execution of a FNN on a parameter set and inputs, the execution of the training methods, and plotting utilities enable the comparison of the training methods.

A benchmark test demonstrates a measure of effectiveness for each training algorithm as a tool for improving the performance of a FNN for pattern matching. An epoch of input training data must be defined first as a standard across all algorithms. Three input values are defined for each input: -0.5, 0, and +0.5. Since there are two inputs, this test shall create a table of input pairs across these three values, creating a set of nine input pairs: {(-0.5,-0.5), (-0.5,+0.0), (-0.5,+0.5), (+0.0,-0.5), (+0.0,+0.0), (+0.0,+0.5), (+0.5,-0.5), (+0.5,+0.0), (+0.5,+0.5)}. An arbitrary non-linear function f(u1,u2) creates a sample desired output y value for each input pairing:

y = −u1 − u2 + u1^2 + u2^2 + u1·u2   (19)

Recall that at a time step the FNN has a state vector of parameters. The training of these parameters allows the FNN to pattern match an input pair to an output value. Any comparative analysis of training algorithms depends on the initial state of this vector – or the initial "guess" of the FNN as to how to pattern match. The closer the initial state is to matching the desired pattern, the less training is necessary. The initial state selection for this experiment requires selecting 9 weights (for the output layer), 6 means (for the membership layer), and 6 sigmas (for the membership layer). The selection of the means and sigmas for this test shall be a smart initial guess, in that the initial means spread the memberships across the expected inputs. The selection of the initial weights shall be random between 0 and 1. If the initial points matched the desired points immediately, to within a margin of error, then there would be no further problem to explore. The fundamental aspect of this section is the exploration of training algorithms to match this pattern from this initial condition benchmark. Table II details the initial states. A battery of tests demonstrates the effectiveness of the learning methods. A test of each method on the benchmark case is performed. Parameter update occurs at every step for the BP and EKF methods. The GEN and PSO methods shall update at the end of each epoch of training data. The plots of the initial Gaussian memberships based upon the initial means and sigmas follow in Figs. 4a and 4b.

For the BP test, cycle 200 training epochs with 9 steps per epoch. A parameter update occurs after each time step. Fig. 5 details the accumulated error cost (squared error between desired output and actual output) from each epoch of training. Notice how the error decreases from the large initial
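As a concrete check of the benchmark definition, the nine-pair training epoch and the target function of eq. (19) can be generated as follows. This is a Python sketch for illustration; the paper's tests use MATLAB scripts:

```python
import itertools

def benchmark_epoch():
    """Nine input pairs over {-0.5, 0.0, +0.5} and the desired
    outputs from the arbitrary nonlinear target of eq. (19):
        y = -u1 - u2 + u1^2 + u2^2 + u1*u2
    """
    levels = (-0.5, 0.0, 0.5)
    # itertools.product enumerates pairs in the same order as the
    # paper's table: (-0.5,-0.5), (-0.5,0.0), ..., (+0.5,+0.5)
    pairs = list(itertools.product(levels, levels))
    targets = [-u1 - u2 + u1**2 + u2**2 + u1*u2 for u1, u2 in pairs]
    return pairs, targets
```

For example, the pair (-0.5,-0.5) maps to y = 0.5 + 0.5 + 0.25 + 0.25 + 0.25 = 1.75, and (0,0) maps to 0.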
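The EKF recursion of eqs. (13)-(17) used in these tests can be condensed into a single update step. The following Python/NumPy sketch is illustrative, not the paper's MATLAB implementation; the gradient vector h(n) is assumed to come from the partials of eq. (12) or a finite-difference approximation:

```python
import numpy as np

def ekf_step(x, P, h, y_d, y, R=1.0, Q=None):
    """One EKF parameter update, following eqs. (13)-(17).

    x   : [21] state (FNN parameter) vector
    P   : [21x21] state error covariance matrix
    h   : [21] gradient of the FNN output w.r.t. each parameter, eq. (12)
    y_d : desired output; y : actual FNN output
    R, Q: output and state noise characteristics, tuned by the engineer
    """
    n = len(x)
    if Q is None:
        Q = np.zeros((n, n))
    z = y_d - y                            # innovation, eq. (13)
    k = (P @ h) / (R + h @ P @ h)          # Kalman gain, eq. (14)
    x_new = x + k * z                      # state update, eq. (15)
    P_new = P - np.outer(k, h) @ P + Q     # covariance update, eq. (16)
    return x_new, P_new
```

As noted above, P may periodically be reset to the identity matrix, eq. (17), to avoid numerical instability.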
[Fig. 8 EKF FNN training performance]

For the GEN test, start from the initial FNN condition to create a population of 10 chromosomes; cycle 30 training epochs for the 10 chromosomes with 9 steps per epoch per chromosome, in essence making 90 steps per epoch. A parameter update occurs at the end of each epoch. Fig. 11 details the accumulated error cost (squared error between desired output and actual output) from each epoch of training by chromosome. Notice how the error moves about but then stabilizes as the genetic algorithm converges. Fig. 12 plots the final pattern of the final FNN state for matching the desired pattern, in comparison to the initial pattern of the initial FNN state. Note that the final and desired values are not very close to each other, perhaps slightly improved over the initial. Figs. 13a and 13b show the membership functions of the membership sets of the final FNN state, for inputs 1 and 2 respectively. The GEN test demonstrates very little success in achieving a pattern matching FNN for the given input data.
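A reproductive stage of the kind used in the GEN test can be sketched as below. This Python/NumPy sketch assumes roulette-wheel selection on inverse cost, single-point crossover, and Gaussian mutation; the paper does not specify these operator details, so they are illustrative choices:

```python
import numpy as np

def gen_update(pop, costs, rng, mutation_scale=0.05):
    """One generation of the GEN method: fitness-based selection of two
    parents per slot, single-point crossover, then Gaussian mutation.

    pop   : [N x 21] population of FNN parameter chromosomes
    costs : [N] accumulated epoch error cost of each chromosome
    rng   : a numpy random Generator
    """
    n, d = pop.shape
    fitness = 1.0 / (1e-9 + np.asarray(costs))   # lower cost -> fitter
    prob = fitness / fitness.sum()               # roulette-wheel weights
    new_pop = np.empty_like(pop)
    for i in range(n):
        pa, pb = pop[rng.choice(n, size=2, p=prob)]  # select two parents
        cut = rng.integers(1, d)                     # crossover point
        child = np.concatenate([pa[:cut], pb[cut:]])
        child += rng.normal(0.0, mutation_scale, d)  # mutation
        new_pop[i] = child
    return new_pop
```

Each call corresponds to one parameter update at the end of an epoch, as in the GEN test above.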
A. Experimental Results

Several test cases were completed in the laboratory to evaluate the performance of the FNN controller based on the four learning algorithms. However, only salient results are reported in this paper. The weights and membership functions of the FNN were chosen randomly and then trained using the four training algorithms. The algorithms trained over several iteration windows by modifying the weights and membership functions, which are used during FNN learning to improve the tracking performance of the motor speed. In all cases, the actual speed is superimposed on the desired reference speed in order to compare the tracking accuracy. To test the load characteristics of the FNN controller, a PM dc generator that produced torque proportional to the speed was coupled with the BLDC motor through a torque transducer. Plots captured in the laboratory using dSPACE ControlDesk validate each of these test results.

In the first test, a sinusoidal-wave trajectory with an amplitude of 1000 rpm was considered. The motor was operating under radial load in the clockwise or positive rotating direction. The results are summarized in Figs. 18(a) and (b). Fig. 18(a) shows the tracking performance of the rotor speed under the EKF learning algorithm. It is observed that the proposed controller brings the actual speed to the desired value smoothly and with a small amount of steady-state error but without the overshoot or oscillatory response. The test also shows that for arbitrary initial conditions, the training can improve the tracking performance. This is a practically key finding since the values of the adjustable parameters (weights and membership functions) of the FNN are initialized randomly and the EKF training algorithm is able to adjust these weights and membership functions gradually to their accepted values. Clearly, the EKF-based controller eliminates both the overshoot and the extent of oscillations under loading conditions. Furthermore, the torque is observed to spike when the speed changes direction from one level to another. Fig. 18(b) shows the progress of the error signals during the learning process.

[Fig. 18b Error and delta error signals during training]

For comparison purposes, Figs. 19(a) and (b) exhibit a case where the EKF algorithm was replaced by a PSO learning algorithm. Clearly, the system response is unsatisfactory. It shows that there is significant degradation in the tracking performance. The PSO algorithm is unable to adjust the values of the adjustable parameters (weights and membership functions) of the FNN from a random initial state to display similar tracking to that of Fig. 18(a). There is a phase shift in the response and the actual speed lags the desired speed by a relatively large time-varying angle. Clearly, the tracking performance of the PSO-based algorithm exhibits a major amount of steady-state error as the measured speed approaches the desired speed. Increasing the sampling rate might thus produce better training due to a higher amount of information. Further training will result in better tracking accuracy. The error signals for this tracking are shown in Fig. 19(b). It provides an indication of how the PSO algorithm fails in generating a control signal that forces the actual speed to follow the desired reference speed at all times. The results illustrated in Figs. 18(a) and (b) are clearly superior to those shown in Figs. 19(a) and (b).

The above test was repeated with the GEN learning algorithm running under the same conditions and the results obtained are similar to those shown in Fig. 19a. Because of the matching
REFERENCES
[1] B. M. Mohan and A. Sinha, "Analytical Structure and Stability Analysis of a Fuzzy PID Controller," Applied Soft Computing, vol. 8, pp. 749-758, 2008.