
International Journal of Scientific Research Engineering & Technology (IJSRET), ISSN 2278-0882, Volume 3, Issue 1, April 2014

Variable Memory CMAC

Abhishek Rawat¹, M.J. Nigam²
¹,²Department of Electronics and Communication Engineering,
Indian Institute of Technology, Roorkee, India

ABSTRACT
In this paper, variable memory schemes are proposed to improve the response of the traditional CMAC. When stabilizing a system, there is a tradeoff between the weight space of CMAC and the peak overshoots. This tradeoff can be resolved by varying the weight space according to the error, either by varying the quantization or by varying the generalization of CMAC. An increase in quantization is accompanied by an increase in the accuracy of the network, but at the cost of an enormous weight space. A decrease in generalization results in reduced learning interference, and hence fast convergence to zero error, but again at the cost of a significant increase in weight space. Thus, when the error goes beyond the permissible error range, the first scheme, FGVQ, increases the quantization, while the second scheme, FQVG, decreases the generalization. Simulation results obtained in MATLAB show that the variable memory schemes perform better than the traditional CMAC.


Keywords: Cerebellar model articulation controller, Fixed Quantization Variable Generalization CMAC, Fixed Generalization Variable Quantization CMAC.

I. INTRODUCTION

The CMAC, known as the Cerebellar Model Articulation/Arithmetic Controller, was proposed by J. Albus in 1975 [1-2]. CMAC performs multivariable function approximation in a generalized look-up table form. Due to its high learning speed and local generalization, it is used in a variety of applications [3-7]. As discussed in the next section, quantization and generalization are the key parameters of CMAC. Quantization represents the resolution of the CMAC network, so at first sight it looks reasonable to keep its value as large as possible, but this leads to an enormous weight space, i.e., a large memory for CMAC. Generalization represents the number of overlappings of a particular input, which is a subset of the input space formed as a result of quantization. It is quite clear that an increase in generalization tends to decrease the significance of a particular input, but at the same time it decreases the weight space of CMAC. So both quantization and generalization play an important role in decreasing the memory of CMAC, but there is always a tradeoff between convergence to zero error and the weight space of CMAC. To resolve this tradeoff, two variable memory schemes are proposed here to improve the response of the system, most importantly the peak overshoots, if any. The first scheme varies the memory of CMAC by changing the quantization while keeping the generalization constant, and the second varies the generalization while keeping the quantization constant.
Fig. 1. Cerebellar Model Articulation Controller

II. CEREBELLAR MODEL ARTICULATION CONTROLLER
CMAC is a learning structure which emulates the human cerebellum. It is an associative neural network [8-10] in which a small subset of the network influences any instantaneous output, and that subset is determined by the input to the network, as shown in Fig. 1. The operating region of each input is quantized into Q levels, i.e., the number of elements for a particular input is Q. This quantization determines the resolution of the network [11] and the shift positions of the overlapping regions. If n inputs are presented to the network, the total number of elements in the input space is Q^n, which is quite large. To reduce this memory, the inputs presented are



converted into hypercubes or hyper-rectangles. A particular input vector is covered by G overlapping regions, i.e., a particular input is the overlapping of G nearby inputs. The number G is called the number of layers of CMAC and is referred to as its generalization width. Thus the Q^n memory locations are reduced to A memory units such that A << Q^n. The outline of the CMAC algorithm is given below:
(i) Number of inputs = n
(ii) Number of elements for a particular input = Q, which is also the number of quantized states for an input.
(iii) Size of the input space = Q^n
(iv) Number of layers of CMAC = G
(v) Number of hypercubes in the i-th layer = k_i, where i = 1, 2, 3, ..., L
(vi) Total number of hypercubes A = k_1 + k_2 + ... + k_L, which is the memory of CMAC.
When an input is encountered, L hypercubes are activated, and the output of the network is simply the sum of the contents of those hypercubes. That is, if CMAC is used as an approximator, its output in terms of basis functions can be defined as [11]

y(x) = Σ_{i=1}^{A} w_i b_i(x),

where b_i(x) = 1 if the i-th hypercube is activated by the input x and b_i(x) = 0 otherwise.

The memory content update rule is given by the least mean square (LMS) rule as [1]

w_i ← w_i + (lr / L) · e   for each of the L activated hypercubes,

where lr is the learning rate of CMAC and e is the network error.
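To make the look-up structure and the update rule concrete, the following is a minimal one-dimensional sketch in Python (the paper's simulations use a two-input CMAC in MATLAB); the class name, the tiling details, and the sine example are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class CMAC1D:
    """Minimal one-dimensional CMAC sketch: Q quantization levels,
    G overlapping layers, one weight table per layer (illustrative)."""
    def __init__(self, Q, G, x_min, x_max, lr):
        self.Q, self.G, self.lr = Q, G, lr
        self.x_min, self.x_max = x_min, x_max
        # Enough width-G blocks per layer to cover Q cells at any shift.
        self.w = np.zeros((G, Q // G + 2))

    def _active(self, x):
        # Quantize x to one of Q discrete levels (the resolution of the net).
        q = int(np.clip((x - self.x_min) / (self.x_max - self.x_min) * self.Q,
                        0, self.Q - 1))
        # Layer i is shifted by i cells; each layer activates one hypercube.
        return [(i, (q + i) // self.G) for i in range(self.G)]

    def output(self, x):
        # y(x) = sum of the weights of the G activated hypercubes.
        return sum(self.w[i, j] for i, j in self._active(x))

    def update(self, x, target):
        # LMS rule: spread lr * error equally over the G active weights.
        e = target - self.output(x)
        for i, j in self._active(x):
            self.w[i, j] += self.lr * e / self.G
        return e

# Example: learn sin on [0, 2*pi] online.
net = CMAC1D(Q=100, G=6, x_min=0.0, x_max=2 * np.pi, lr=0.5)
for x in np.random.default_rng(0).uniform(0, 2 * np.pi, 3000):
    net.update(x, np.sin(x))
print(round(net.output(1.0), 2))   # close to sin(1.0) ~ 0.84
```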
In CMAC based controllers, a continuous input that falls in the operating region is discretized into a fixed number of levels by choosing the quantization Q. For an n-input space, the number of elements in the CMAC input space becomes Q^n, which turns out to be the maximum memory of CMAC. The generalization G represents the number of ways a particular input in the input space is composed.

A. Effect of Quantization and Generalization on CMAC
Quantization determines the resolution of the network and hence its accuracy. From the control system perspective, if the resolution of the network is high, the network is more accurate and hence tends to give smaller overshoots with reasonable settling time. However, an increase in quantization leads to an enormous weight space.


Generalization in CMAC is the number of overlappings
or the sum of the number of ways a particular input in
input space represents. In other words it represents the
number of hypercubes activated. Obviously larger is the
number of hypercubes activated lesser is the accuracy of
the network. Larger values of activated hypercubes
mean that CMAC algorithm requires more steps to
converge or in online control system it requires more
time to converge. This delay in convergence arises due
to learning interference. Also from the memory update
rule it is easy to see that learning rate and generalization
together determines the rate of convergence. However,
reducing the number of activated hypercubes or
generalization tends to increase the weight space
significantly. Thus Quantization and Generalization
together determines the size of weight space. As shown
in the figure a particular input encountered is first
quantized to form an input space. After that
generalization further reduces the size of weight space
or memory of space which was otherwise the size of
input space which is very high.
Fig.2 and Fig.3 shows the variation in memory with
Generalization and Quantization respectively.
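The dependence of memory on Q and G can also be checked numerically. Counting, for each of the G shifted layers, the blocks needed to cover Q levels in each of n input dimensions gives k_i = ceil((Q + i)/G)^n; this count is a sketch under an assumed tiling, but it reproduces every memory value reported in Tables I and II of Section IV.

```python
import math

def cmac_memory(Q: int, G: int, n: int = 2) -> int:
    """Total hypercubes A = k_1 + ... + k_G, where layer i (shift i)
    needs ceil((Q + i) / G) blocks per input dimension."""
    return sum(math.ceil((Q + i) / G) ** n for i in range(G))

# Matches the memories reported in Section IV (two-input CMAC, n = 2):
print([cmac_memory(Q, 6) for Q in (20, 40, 100)])  # [105, 339, 1839]
print([cmac_memory(20, G) for G in (3, 6, 9)])     # [162, 105, 88]
```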
B. Variable Memory Schemes
To improve the unit step response of the system, the response is divided into two regions: Region 1 requires low memory, while Region 2 requires higher memory. At the initial stage of the response, the output is expected to move towards the zero error position, so low memory is sufficient in this region. Once the error goes beyond the permissible error of -2%, the memory of CMAC is increased to raise the accuracy of the network, which helps curb the peak overshoots of the single memory CMAC. The main advantages of these memory schemes are the reduction in overshoots as well as a smooth steady state. The memory of CMAC can be varied in two ways: by varying the quantization with fixed generalization, or by varying the generalization with fixed quantization. A sketch of this switching logic follows.
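The sketch below implements the two-region logic of Fig. 4 using the FGVQ parameter pairs from Section IV. Reading the -2% band as the switching threshold (our interpretation of Fig. 4), the controller stays with the low-memory, low-learning-rate CMAC until the error crosses the band, e.g., during overshoot; the function name and dictionary layout are illustrative.

```python
def select_cmac_params(error: float, band: float = 0.02):
    """Two-region variable memory logic (FGVQ values from Section IV)."""
    if error >= -band:
        # Region 1: heading towards zero error / steady state ->
        # low memory and low learning rate for a smooth steady state.
        return {"Q": 20, "G": 6, "lr": 100}
    # Region 2: error beyond the permissible band (overshoot) ->
    # high memory for accuracy, curbing the peak overshoot.
    return {"Q": 100, "G": 6, "lr": 1000}
```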





Fig. 4. Variable Memory Scheme

III. CONTROL SCHEME
To demonstrate the behavior of variable memory, simulations are done on an unstable second order plant in which CMAC is used in conjunction with a feedback control law to control the plant. Though CMAC has the capability to learn nonlinear functions quickly, a linear plant is discussed here for simplicity. The plant can be written in state space form; it can also be written in the input-output form of eqn. (8). The control law for the state space equation given by (8) is a feedback law in the tracking error, where y_r is the reference input signal, the remaining plant term is the function to be estimated online by the CMAC, and k_1 and k_2 are chosen such that the polynomial s^2 + k_2 s + k_1 is Hurwitz. The Simulink diagram used to implement the CMAC is shown in Fig. 5.

Fig. 5. Simulink diagram of CMAC
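As a sketch of this loop: an assumed unstable second-order plant y'' = a·y + u stands in for the maglev model of eqn. (6), the gains make s² + k₂s + k₁ Hurwitz, and the CMAC contribution is represented by a placeholder that is pretended to have converged to the plant term. The numbers, names, and exact form of the law are assumptions, not the paper's.

```python
# Closed-loop sketch: feedback law plus a (pretend-converged) CMAC term.
a = 5.0                      # assumed plant y'' = a*y + u, unstable for a > 0
k1, k2 = 25.0, 10.0          # s^2 + k2*s + k1 = (s + 5)^2 is Hurwitz

def f_hat(y):
    """Placeholder for the CMAC's online estimate of the plant term a*y;
    pretended here to have converged (a real CMAC is trained online)."""
    return a * y

dt, steps = 1e-3, 5000
y, ydot, yr = 0.0, 0.0, 1.0  # unit step reference
for _ in range(steps):
    e, edot = yr - y, -ydot  # tracking error and its derivative (yr constant)
    u = -f_hat(y) + k1 * e + k2 * edot                # feedback + compensation
    y, ydot = y + dt * ydot, ydot + dt * (a * y + u)  # Euler step of the plant
print(round(y, 3))           # ~1.0: tracks the step without offset
```

With the placeholder returning 0 instead, the same loop settles with a visible steady-state offset, which is precisely what the online estimate removes.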

IV. SIMULATION AND RESULTS
For simulation purposes, a Magnetic Levitation plant, which is an unstable plant with the transfer function given by eqn. (6), is used. This plant can likewise be written in state space form.

Fig. 6. Fixed generalization variable quantization


Fig. 7. Fixed Quantization Variable Generalization
Since the plant is of second order, a two-input CMAC is used. In Fig. 6, the responses of the Magnetic Levitation system are compared. The parameters of the two-input CMAC for these responses are: Generalization = 6, Learning rate = 1000.
TABLE I: Effect of changing quantization

Quantization   Memory   Peak Overshoot (%)
20             105      20.25
40             339      15.78
100            1839     14.10

For Quantization = 20 and Learning rate = 1000:

TABLE II: Effect of changing generalization

Generalization   Memory   Peak Overshoot (%)
3                162      13.71
6                105      20.23
9                88       23.88


It is quite clear from Fig. 6 and Table I that an increase in quantization results in a decrease in peak overshoot, but at the expense of weight space. In Fig. 7, the responses of the magnetic levitation system are compared for a fixed value of quantization.


An increase in generalization means an increase in the number of overlappings and hence in the learning interference. This interference delays convergence to zero error and hence results in peak overshoots. The simulation results shown in Fig. 7, and in tabular form in Table II, support this logic. An increase in generalization relaxes the requirement on the size of the memory space, but at the same time increases the peak overshoots, so there is a compromise between the chosen value of generalization and a reasonable value of peak overshoot.
The response of the system can be improved by applying the variable memory scheme of Fig. 4. Fig. 8 shows the Fixed Generalization Variable Quantization variable memory scheme, compared with a fixed memory of quantization 100 at different learning rates. The FGVQ scheme has a smooth response with reduced overshoot values. It is presented in tabular form below, for Generalization = 6.

TABLE III: Fixed Generalization Variable Quantization variable memory scheme

Region   Lr     Qtn   Memory
LM       100    20    105
HM       1000   100   1839
Peak Overshoot (%): 13.65    Settling Time (sec): 0.20

LM: Low Memory Region
HM: High Memory Region
Lr: Learning rate
Qtn: Quantization
It is seen that for a quantization of 100, a learning rate of 100 gives a higher settling time, while a learning rate of 1000 produces oscillations in the steady state. The variable memory scheme takes care of both drawbacks. Since the learning rate is responsible for the oscillations in the steady state, in this variable memory scheme the learning rate is also made low in the low memory region, ensuring that there are no steady-state oscillations. It is also seen from the simulation results that the peak overshoot improves to 13.65%, compared with the values in Table I and Table II respectively. The memory of CMAC can also be varied by varying the generalization while keeping the quantization fixed. Fig. 9 presents the Fixed Quantization Variable Generalization scheme, in which the quantization is fixed at 20 while the generalization takes the value 9 in the low memory region and 3 in the high memory region (a decrease in generalization increases the memory and the accuracy). The results for Quantization = 20 are given in Table IV.


TABLE IV: Fixed Quantization Variable Generalization variable memory scheme

LM: Low Memory Region
HM: High Memory Region
Lr: Learning rate
Gen: Generalization
The learning rate is again made different in the two regions, for the reasons mentioned for the Fixed Generalization Variable Quantization scheme. A sketch of the FQVG switching follows.
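This is the FQVG counterpart of the earlier switching sketch, with the values used for Fig. 9 (quantization fixed at 20, generalization switched between 9 and 3); the memory counts in the comments come from the cmac_memory count given earlier, and the band convention is the same assumption as before.

```python
def fqvg_params(error: float, band: float = 0.02):
    """FQVG two-region logic: quantization fixed at 20; the generalization
    is decreased when the error leaves the permissible band, which raises
    both the memory and the accuracy of the CMAC."""
    if error >= -band:
        return {"Q": 20, "G": 9}   # low memory region (88 hypercubes)
    return {"Q": 20, "G": 3}       # high memory region (162 hypercubes)
```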



Fig. 8. Fixed Generalization Variable Quantization variable memory scheme

Fig. 9. Fixed Quantization Variable Generalization variable memory scheme

V. CONCLUSION
In CMAC, quantization and generalization together decide the size of the weight space, i.e., the memory. The dependence of the memory size on these two key parameters is illustrated in Fig. 2 and Fig. 3 respectively. The traditional CMAC does not always give the desired response, so its algorithm has to be modified. In this paper, two variable memory schemes, based on the logic of Fig. 4, are proposed to improve the response of the system. The first scheme, FGVQ, uses the variable quantization method, while the second scheme, FQVG, uses the variable generalization method. Quantization determines the resolution of the network, while generalization determines the number of overlappings and hence the learning interference. Both variable memory schemes show an improvement in response, as shown in Fig. 8 and Fig. 9 respectively.

REFERENCES


1. J. Albus, "Data storage in the cerebellar model articulation controller (CMAC)," Trans. ASME J. Dynamic Systems, Measurement and Control, vol. 97, pp. 228-233, 1975.
2. J. Albus, "A new approach to manipulator control: The cerebellar model articulation controller (CMAC)," Trans. ASME J. Dynamic Systems, Measurement and Control, vol. 97, pp. 220-227, 1975.
3. W. T. Miller, F. H. Glanz, and L. G. Kraft, "CMAC: An associative neural network alternative to backpropagation," Proceedings of the IEEE, vol. 78, pp. 1561-1567, 1990.
4. W. T. Miller and C. M. Aldrich, "Rapid learning using CMAC neural networks: Real time control of an unstable system," in Proc. IEEE Intelligent Control Symposium, Philadelphia, PA, 1990, pp. 465-470.
5. M. F. Yeh and C. H. Tsai, "Standalone CMAC control systems with on-line learning ability," IEEE Transactions on Systems, Man, and Cybernetics-Part B: Cybernetics, vol. 40, pp. 43-53, Feb. 2010.
6. W. T. Miller, F. Glanz, and G. Kraft, "Real-time dynamic control of an industrial manipulator using a neural network based learning controller," IEEE Transactions on Robotics and Automation, vol. 6, pp. 1-9, 1990.
7. W. T. Miller, "Sensor based control of robotic manipulators using a general learning algorithm," IEEE Journal of Robotics and Automation, vol. 3, pp. 157-165, Apr. 1987.
8. S. Lane, D. Handelman, and J. Gelfand, "Theory and development of higher order CMAC neural networks," IEEE Control Systems Magazine, pp. 23-30, 1992.
9. P. An, W. Miller, and P. Parks, "Design improvements in associative memories for cerebellar model articulation controllers (CMAC)," in International Conference on Artificial Neural Networks, 1991, pp. 1207-1210.
10. J. Chen, "An adaptive robust CMAC controller for nonlinear systems," IEEE Transactions, 2007.
11. J. A. Farrell and M. M. Polycarpou, Adaptive Approximation Based Control: Unifying Neural, Fuzzy and Traditional Adaptive Approximation Approaches, Wiley-Interscience, pp. 87-93, 2006.
