
Complex-valued Bidirectional Auto-Associative Memory

Yozo Suzuki and Masaki Kobayashi


Abstract: Complex-valued Hopfield Associative Memory (CHAM) can store multi-valued patterns. However, CHAM stores not only the given training patterns but also many spurious patterns, such as their rotated patterns, at the same time. These rotated and spurious patterns reduce the noise robustness of CHAM. In the present work, we propose Complex-valued Bidirectional Auto-Associative Memory (CBAAM) as a model of auto-associative memory with improved noise robustness. CBAAM consists of two layers. Although the structure of CBAAM is that of a Bidirectional Associative Memory (BAM), CBAAM works as an auto-associative memory, because one layer is a visible layer and the other is an invisible layer. The visible layer consists of complex-valued neurons and can process multi-valued patterns. The invisible layer consists of real-valued neurons and can reduce pseudo-memories such as rotated patterns. Thus, CBAAM has strong noise robustness. In computer simulations, we show that the noise robustness of CBAAM far exceeds that of CHAM. In particular, we find that CBAAM maintains high noise robustness independent of the resolution factor.
I. INTRODUCTION
In recent years, artificial neural networks have been studied for flexible information processing. Associative memory is one subject of study in this field. In the past, Hopfield [1], [2] proposed the Hopfield Associative Memory (HAM) as a model of auto-associative memory. HAM has several problems. One of them is that HAM cannot deal with multi-valued patterns.
Complex-valued Hopfield Associative Memory (CHAM) was proposed as an advanced model of HAM by Aizenberg et al. [3], Noest [4], [5] and Jankowski et al. [6]. CHAM can deal with multi-valued patterns, unlike HAM. Thus, CHAM is often applied to storing gray-scale images (Aoki and Kosugi [7], Aoki et al. [8], Muezzinoglu et al. [9]). Some researchers have proposed advanced learning algorithms for CHAM in order to improve its storage capacity and noise robustness. Aoki et al. [8] and Lee [10] proposed the projection rule for CHAM. Lee [11] and Kobayashi et al. [12] proposed gradient descent learning algorithms. Muezzinoglu et al. [9] and Kobayashi [13] proposed learning algorithms based on solving systems of linear inequalities.
CHAM stores not only training patterns but also their rotated patterns. This is referred to as rotation invariance (Zemel et al. [14]). The rotated patterns are typical spurious patterns, and K - 1 rotated patterns exist for each training pattern in the case of resolution factor K.
Yozo Suzuki and Masaki Kobayashi are with the Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, 4-3-11, Takeda, Kofu, Yamanashi 400-8511, Japan (email: k-masaki@yamanashi.ac.jp).
Mixture patterns, which are combinations of stable rotated patterns, are another typical kind of spurious pattern [15]. Hence, CHAM contains an enormous number of spurious patterns, and these rotated patterns reduce the noise robustness of CHAM. Avoiding stable rotated patterns is a promising way to improve noise robustness [16]-[23].
Kosko [24], [25] proposed the Bidirectional Associative Memory (BAM). A BAM consists of two layers and realizes mutual association. Moreover, a BAM realizes high parallelism, because the neurons in the same layer are independent of each other. BAM has been extended to the Complex-valued BAM (CBAM) to process multi-valued patterns [26].
In this paper, we propose the Complex-valued Bidirectional Auto-Associative Memory (CBAAM). A CBAAM is an auto-associative memory model whose structure is that of a BAM. Our proposed model consists of a visible layer and an invisible layer. The visible layer consists of complex-valued neurons and can process multi-valued patterns. The invisible layer consists of real-valued neurons and can reduce spurious memories such as rotated patterns. Therefore, our proposed model can process multi-valued patterns and has high noise robustness. In computer simulations, we show that the noise robustness of CBAAM far exceeds that of CHAM. In particular, we find that CBAAM maintains high noise robustness independent of the resolution factor.
The rest of the present paper is organized as follows: Sections II-IV briefly describe HAM, CHAM and CBAM; in Section V, we describe our proposed model, CBAAM; Section VI provides computer simulations; in Section VII, we discuss the simulation results; finally, we conclude in Section VIII.
II. HOPFIELD ASSOCIATIVE MEMORY
In this section, we briefly describe the Hopfield Associative Memory (HAM). First, we define the neuron of HAM. The neuron takes one of two values, $1$ or $-1$. Let a real number $S$ be the input value and $f(\cdot)$ be the activation function. The state $x$ of the neuron is defined as follows:
$$x = f(S), \qquad (1)$$
$$f(S) = \begin{cases} 1 & (S \ge 0) \\ -1 & (S < 0) \end{cases}. \qquad (2)$$
Next, we construct HAM. HAM is an associative memory
with symmetric full connections (Fig.1). HAM does not
have self-feedback. Let a real number $w_{ji}$ be the connection weight from neuron $i$ to neuron $j$. Then, the connection weights have to satisfy
$$w_{ji} = w_{ij} \qquad (3)$$
for $i \neq j$. This requirement ensures that HAM reaches a stable state.

Fig. 1. Hopfield Associative Memory (number of neurons is 4)

Fig. 2. Recall process of HAM. HAM recalls a training pattern.
Let $x_i$ be the state of neuron $i$. Then the weighted sum input $I_j$ to neuron $j$ is given as follows:
$$I_j = \sum_{i \neq j} w_{ji} x_i. \qquad (4)$$
Finally, we describe the recall process of HAM. Since all neurons are connected to each other, it is hard to update multiple neurons simultaneously, so we have to update the neurons iteratively. The recall procedure is given by the following steps.
1) An input pattern is given to HAM.
2) Update all neurons iteratively.
3) If HAM is unchanged, the recall process is completed. Otherwise, go to 2).
The recall process is illustrated in Fig. 2. Suppose that, for a training pattern $x$, a pattern $x'$, which is $x$ with noise, is given to HAM. First, all neurons are updated iteratively, and HAM is updated until it becomes stable. Finally, we obtain the training pattern $x$. Moreover, we can remove the noise of the initially given pattern.
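The following is a minimal numpy sketch of the activation (2) and the iterative recall procedure above; the symmetric weight matrix with zero diagonal is assumed to be given (e.g., by Hebbian learning), and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def f(S):
    # Activation function (2): +1 for S >= 0, -1 otherwise.
    return 1.0 if S >= 0 else -1.0

def recall_ham(W, x0, max_sweeps=100):
    """Iteratively update the neurons of HAM until the state is unchanged.

    W  : symmetric real weight matrix with zero diagonal (requirement (3))
    x0 : initial pattern, i.e., a training pattern with noise
    """
    x = x0.astype(float).copy()
    for _ in range(max_sweeps):
        changed = False
        for j in range(len(x)):
            I_j = W[j] @ x          # weighted sum input (4); W[j, j] = 0
            new_state = f(I_j)
            if new_state != x[j]:
                x[j] = new_state
                changed = True
        if not changed:             # step 3): HAM is unchanged
            return x
    return x
```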
III. COMPLEX-VALUED HOPFIELD ASSOCIATIVE MEMORY
A. Complex-valued neurons
In this section, we describe the Complex-valued Hopfield Associative Memory (CHAM), which is a complex-valued extension of HAM. The input and output signals of complex-valued neurons are complex numbers, and the state of a complex-valued neuron takes one of K values on the complex unit circle, where
K is the resolution factor, an integer greater than two. It divides the complex unit circle into K sectors. Let a real number $\theta_K$ and complex numbers $s_k$ ($k = 0, \ldots, K-1$) be as follows:
$$\theta_K = \frac{\pi}{K}, \qquad (5)$$
$$s_k = \exp\left(\sqrt{-1}\,(2k+1)\,\theta_K\right). \qquad (6)$$
The states of complex-valued neurons belong to the set $\{s_k\}$. Figure 3 shows the correspondence between the state numbers and the complex values in the case of K = 4.

Fig. 3. States of the neuron in the complex plane (K = 4): $s_0$, $s_1$, $s_2$, $s_3$.
A complex-valued neuron receives the weighted sum input from all the other neurons, and then selects a new state according to the activation function. In the present work, we use the following activation function $f(\cdot)$:
$$f(x) = \begin{cases} s_0 & (0 \le \arg(x) < 2\theta_K) \\ s_1 & (2\theta_K \le \arg(x) < 4\theta_K) \\ s_2 & (4\theta_K \le \arg(x) < 6\theta_K) \\ \;\vdots & \\ s_{K-1} & (2(K-1)\theta_K \le \arg(x) < 2K\theta_K) \end{cases} \qquad (7)$$
where $\arg(x)$ is the argument of the complex number $x$. Therefore, $f(x)$ is the state $s_k$ that maximizes $\mathrm{Re}(\overline{s_k}\,x)$, where $\mathrm{Re}(x)$ and $\overline{x}$ are the real part and the complex conjugate of $x$, respectively.
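A short numpy sketch of the states (6) and the activation function (7); `states` and `csign` are illustrative names of our own, and `csign` simply quantizes the argument of the weighted sum input onto the sector that contains it.

```python
import numpy as np

def states(K):
    # States (6): s_k = exp(sqrt(-1) * (2k + 1) * theta_K) with theta_K = pi / K.
    theta_K = np.pi / K
    return np.exp(1j * (2 * np.arange(K) + 1) * theta_K)

def csign(z, K):
    """Activation function (7): return the state s_k whose sector
    [2k*theta_K, 2(k+1)*theta_K) contains arg(z); equivalently, the s_k
    that maximizes Re(conj(s_k) * z)."""
    theta_K = np.pi / K
    k = int(np.floor((np.angle(z) % (2 * np.pi)) / (2 * theta_K)))
    return states(K)[k]
```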
B. Complex-valued Hopeld Associative Memory (CHAM)
Let a complex number $w_{ji}$ be the connection weight from neuron $i$ to neuron $j$. Then the connection weight $w_{ji}$ needs to satisfy the following requirement:
$$w_{ji} = \overline{w_{ij}}. \qquad (8)$$
This requirement ensures that CHAM reaches a stable state.
Fig. 4. Rotated patterns of a training pattern: the training pattern and its $\pi/2$, $\pi$, and $3\pi/2$ rotated patterns.
Let $x_i$ be the state of neuron $i$. The weighted sum input $I_j$ that neuron $j$ receives from all the other neurons is defined as follows:
$$I_j = \sum_i w_{ji} x_i. \qquad (9)$$
We describe two typical learning algorithms for CHAM: complex-valued Hebbian learning and generalized inverse matrix learning. We denote the $p$th training pattern vector by $x^p = (x^p_1, x^p_2, \ldots, x^p_N)^T$ ($p = 1, 2, \ldots, P$), where $P$ and $N$ are the numbers of training patterns and neurons, respectively. The superscript $T$ denotes the transpose. Complex-valued Hebbian learning is the simplest learning algorithm, but its storage capacity and noise robustness are extremely low. The connection weight $w_{ji}$ is given by $w_{ji} = \sum_p x^p_j \overline{x^p_i}$. Complex-valued Hebbian learning always satisfies requirement (8). Next, we describe generalized inverse matrix learning, which is an advanced learning algorithm with high storage capacity and noise robustness. We denote the connection weight matrix by $W$, whose $(i, j)$ component is $w_{ij}$. Consider the $N \times P$ training matrix $X = (x^1, x^2, \ldots, x^P)$. Then the connection weight matrix $W$ is given as follows:
$$W = X (X^{*} X)^{-1} X^{*}, \qquad (10)$$
where the superscript $*$ denotes the adjoint matrix (conjugate transpose). The matrix $(X^{*} X)^{-1} X^{*}$ is called the generalized inverse matrix of $X$.
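Both learning rules can be written in a few lines of numpy; the sketch below assumes the training patterns are stored as the columns of an N x P complex matrix X, and the helper names are ours, not the paper's.

```python
import numpy as np

def hebbian_weights(X):
    # Complex-valued Hebbian rule: w_ji = sum_p x_j^p * conj(x_i^p).
    W = X @ X.conj().T
    np.fill_diagonal(W, 0)     # no self-feedback, matching the i != j sums below
    return W

def projection_weights(X):
    # Generalized inverse matrix learning (10): W = X (X* X)^{-1} X*.
    G = X.conj().T @ X         # P x P matrix X* X
    return X @ np.linalg.inv(G) @ X.conj().T
```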
C. Rotated patterns in CHAM
Rotated patterns are strongly related to the noise robustness of CHAM. For a training pattern $x = (x_1, x_2, \ldots, x_N)$, the patterns $s_k x = (s_k x_1, s_k x_2, \ldots, s_k x_N)$ ($k = 1, 2, \ldots, K-1$) are referred to as its rotated patterns. Therefore, the rotated patterns are obtained by rotating the states of all neurons by $2k\pi/K$. In the case of K = 4 and N = 4, the training pattern of Fig. 4 has the three rotated patterns shown in Fig. 4.
Fig. 5. Bidirectional Associative Memory (X-Layer and Y-Layer)
Suppose that a training pattern $x$ is stable. Then the following equation holds for each $j$:
$$f\Big(\sum_{i \neq j} w_{ji} x_i\Big) = x_j. \qquad (11)$$
For a rotated pattern $s_k x$, the following equation holds:
$$f\Big(\sum_{i \neq j} w_{ji} s_k x_i\Big) = s_k f\Big(\sum_{i \neq j} w_{ji} x_i\Big) = s_k x_j. \qquad (12)$$
This implies that the rotated patterns $s_k x$ are also stable. Therefore, a training pattern has K - 1 stable rotated patterns. When K is large, a training pattern $x$ and its rotated pattern $s_1 x$ are close to each other. This prevents CHAM from recalling the correct training patterns.
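A small numeric illustration of (11) and (12), under the assumption that the weights come from the generalized inverse matrix learning (10); the setup and the helper `csign` are ours, and self-connections are kept for brevity. A stored pattern and its rotated version are both fixed points.

```python
import numpy as np

def csign(z, K):
    # Activation (7), applied elementwise: quantize arg(z) onto the states s_k.
    k = np.floor((np.angle(z) % (2 * np.pi)) / (2 * np.pi / K))
    return np.exp(1j * (2 * k + 1) * np.pi / K)

K, N, P = 10, 50, 5
rng = np.random.default_rng(0)
S = np.exp(1j * (2 * np.arange(K) + 1) * np.pi / K)    # states s_k, eq. (6)
X = S[rng.integers(0, K, size=(N, P))]                 # random training patterns as columns
W = X @ np.linalg.inv(X.conj().T @ X) @ X.conj().T     # projection rule (10)

x = X[:, 0]
rot = np.exp(2j * np.pi / K) * x                       # rotate every neuron by 2*pi/K
print(np.allclose(csign(W @ x, K), x))                 # True: the training pattern is stable (11)
print(np.allclose(csign(W @ rot, K), rot))             # True: its rotated pattern is also stable (12)
```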
IV. COMPLEX-VALUED BIDIRECTIONAL ASSOCIATIVE MEMORY
A. Structure
BAM consists of two layers, X-Layer and Y-Layer. There are connections between X-Layer and Y-Layer, but no connections within a layer. Figure 5 shows the structure of BAM. Unlike HAM, the two layers of BAM are independent. Thus, BAM offers concurrency in the process of calculating the weighted sum inputs to the other layer.
If all neurons are complex-valued neurons, the BAM is referred to as a Complex-valued BAM (CBAM). In the rest of this section, we consider only CBAM. Let the $j$th neurons of X-Layer and Y-Layer be $x_j$ and $y_j$, respectively. We denote the state vectors of X-Layer and Y-Layer as follows:
$$x = (x_1, x_2, \ldots, x_M)^T, \qquad (13)$$
$$y = (y_1, y_2, \ldots, y_N)^T, \qquad (14)$$
where $M$ and $N$ are the numbers of neurons in X-Layer and Y-Layer, respectively. Let $w^{YX}_{ji}$ be the connection weight from neuron $i$ of X-Layer to neuron $j$ of Y-Layer, and $w^{XY}_{ij}$ the connection weight from neuron $j$ of Y-Layer to neuron $i$ of X-Layer. CBAM requires $w^{XY}_{ij} = \overline{w^{YX}_{ji}}$ to ensure convergence.
B. Learning Algorithm
Suppose that the training pattern pairs are given by $(x^1, y^1), (x^2, y^2), \ldots, (x^P, y^P)$, where $P$ is the number of training pattern pairs. We define the training pattern matrices as follows:
$$X = (x^1, x^2, \ldots, x^P), \qquad (15)$$
$$Y = (y^1, y^2, \ldots, y^P). \qquad (16)$$
We denote the connection weight matrices from X-Layer to Y-Layer and from Y-Layer to X-Layer by $W_{YX}$ and $W_{XY}$, respectively. The $(i, j)$ components of $W_{YX}$ and $W_{XY}$ are $w^{YX}_{ij}$ and $w^{XY}_{ij}$. Then, the requirement for CBAM to ensure convergence is $W_{YX} = W_{XY}^{*}$.
The complex-valued Hebbian learning rule is given by $W_{YX} = Y X^{*}$, or $w^{YX}_{ji} = \sum_p y^p_j \overline{x^p_i}$. The storage capacity and noise robustness of the complex-valued Hebbian learning rule are extremely low. Yano and Osana [27], [28] proposed generalized inverse matrix learning for CBAM. Although this learning algorithm does not satisfy the requirement $W_{YX} = W_{XY}^{*}$, it works effectively. Generalized inverse matrix learning for CBAM is given as follows:
$$W_{YX} = Y (X^{*} X)^{-1} X^{*}, \qquad (17)$$
$$W_{XY} = X (Y^{*} Y)^{-1} Y^{*}. \qquad (18)$$
Then we can easily get the following equations:
$$W_{YX}\, x^p = y^p, \qquad (19)$$
$$W_{XY}\, y^p = x^p. \qquad (20)$$
Therefore, the training patterns are stable.
C. Recall Process
Given a training pattern with noise to X-Layer, BAM
removes the noise and associates Y-Layer corresponding to
X-Layer. The procedure of recall is given by the following
steps.
1) An input pattern is given to X-Layer.
2) Update Y-Layer.
3) Update X-Layer.
4) If X-Layer is unchanged, recall process is completed.
Otherwise go to 2).
The recall process is illustrated in Fig. 6. Suppose that, for a training pattern pair $(x, y)$, a pattern $x'$, which is $x$ with noise, is given to X-Layer. The initial state of Y-Layer is arbitrary. First, Y-Layer is updated. Subsequently, X-Layer is updated. X-Layer and Y-Layer are updated by turns until both layers become stable. Finally, we obtain the training pattern pair $(x, y)$. Moreover, we can remove the noise of the initially given pattern.
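A sketch of this bidirectional recall procedure in numpy, assuming weight matrices W_YX and W_XY obtained, e.g., from (17) and (18), and an elementwise K-state activation `csign` as in Section III; the function name is illustrative.

```python
import numpy as np

def csign(z, K):
    # Elementwise K-state activation (Section III-A).
    k = np.floor((np.angle(z) % (2 * np.pi)) / (2 * np.pi / K))
    return np.exp(1j * (2 * k + 1) * np.pi / K)

def recall_cbam(W_YX, W_XY, x0, K, max_steps=100):
    """Alternately update Y-Layer and X-Layer until X-Layer is unchanged."""
    x = x0.copy()
    y = None
    for _ in range(max_steps):
        y = csign(W_YX @ x, K)       # step 2): update all neurons of Y-Layer at once
        x_new = csign(W_XY @ y, K)   # step 3): update all neurons of X-Layer at once
        if np.allclose(x_new, x):    # step 4): X-Layer unchanged, recall completed
            return x_new, y
        x = x_new
    return x, y
```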
V. COMPLEX-VALUED BIDIRECTIONAL AUTO-ASSOCIATIVE MEMORY
A. Structure
We describe our proposed model, the Complex-valued Bidirectional Auto-Associative Memory (CBAAM).
Fig. 6. Recall process of BAM. BAM recalls a training pattern pair.
Fig. 7. Complex-valued Bidirectional Auto-Associative Memory: a visible layer of complex-valued neurons and an invisible layer of real-valued neurons.
Although the structure of CBAAM is that of a BAM, it works as an auto-associative memory. CBAAM uses X-Layer and Y-Layer as the visible layer and the invisible layer, respectively. We give an initial pattern to the visible layer and obtain the final pattern from the visible layer. Then, we can expect to obtain a training pattern without noise. We show CBAAM in Fig. 7. The visible layer consists of complex-valued neurons, so CBAAM can process multi-state patterns. The invisible layer consists of real-valued neurons. Therefore, CBAAM can avoid storing rotated patterns and is expected to improve the noise robustness. The connection weights are complex numbers. The neurons of the invisible layer are real-valued neurons; we can regard real-valued neurons as complex-valued neurons in the case of K = 2. Then, real-valued neurons ignore the imaginary parts of their input signals.
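In code, the only difference between the two layers is the activation: below is a minimal sketch, assuming numpy, of an invisible-layer neuron that discards the imaginary part of its complex weighted sum input and outputs +1 or -1; the name is ours.

```python
import numpy as np

def invisible_activation(z):
    """Real-valued neuron of the invisible layer: ignore the imaginary part
    of the (complex) weighted sum input and take the sign of the real part."""
    return np.where(np.real(z) >= 0, 1.0, -1.0)

# Example: only the real parts decide the states of the invisible layer.
I_Y = np.array([0.8 + 0.3j, -0.2 + 0.9j, 0.1 - 0.5j])
print(invisible_activation(I_Y))   # [ 1. -1.  1.]
```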
B. Learning Algorithm
Since CBAAM is an auto-associative memory, the training patterns are not pattern pairs. Suppose that the training patterns are given by $x^1, x^2, \ldots, x^P$. We randomly generate the patterns of the invisible layer corresponding to the training patterns and denote the generated patterns by $y^1, y^2, \ldots, y^P$. Then we obtain the training pattern pairs $(x^1, y^1), (x^2, y^2), \ldots, (x^P, y^P)$ for CBAAM. Therefore, the training pattern matrices are as follows:
$$X = (x^1, x^2, \ldots, x^P), \qquad (21)$$
$$Y = (y^1, y^2, \ldots, y^P). \qquad (22)$$
We can use the complex-valued Hebbian learning rule for CBAAM. However, we have to compare CHAM and CBAAM.
Fig. 8. Recall process of CBAAM. CBAAM recalls a training pattern from the visible layer, ignoring the pattern in the invisible layer.
The storage capacity and noise robustness of the complex-valued Hebbian learning rule are extremely low in both cases. Therefore, the complex-valued Hebbian learning rule is not adequate for the comparison. Thus, we adopt generalized inverse matrix learning, even though it does not theoretically ensure convergence. By generalized inverse matrix learning, we obtain the following connection weight matrices:
$$W_{YX} = Y (X^{*} X)^{-1} X^{*}, \qquad (23)$$
$$W_{XY} = X (Y^{T} Y)^{-1} Y^{T}. \qquad (24)$$
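A sketch of this learning procedure, assuming the visible training patterns form the columns of a complex N x P matrix X and the invisible patterns are drawn uniformly from {+1, -1}; the function name and the use of numpy are our choices.

```python
import numpy as np

def cbaam_learn(X, n_invisible, seed=None):
    """Generalized inverse matrix learning for CBAAM, eqs. (23) and (24).

    X           : N x P matrix whose columns are the visible training patterns
    n_invisible : number of real-valued neurons in the invisible layer
    """
    rng = np.random.default_rng(seed)
    P = X.shape[1]
    # Randomly generated +/-1 invisible patterns, one column per training pattern.
    Y = rng.choice([-1.0, 1.0], size=(n_invisible, P))
    W_YX = Y @ np.linalg.inv(X.conj().T @ X) @ X.conj().T   # (23)
    W_XY = X @ np.linalg.inv(Y.T @ Y) @ Y.T                 # (24)
    return W_YX, W_XY, Y
```

With these matrices, $W_{YX}\, x^p = y^p$ and $W_{XY}\, y^p = x^p$ hold, so every training pattern is a fixed point of the recall process, as in (19) and (20).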
C. Recall Process
Given a training pattern with noise to the visible layer, CBAAM removes the noise and provides the original training pattern in the visible layer. The recall procedure is given by the following steps.
1) An input pattern is given to the visible layer.
2) Update the invisible layer.
3) Update the visible layer.
4) If the visible layer is unchanged, the recall process is completed. Otherwise, go to 2).
The recall process is illustrated in Fig. 8. Suppose that, for a training pattern $x$, a pattern $x'$, which is $x$ with noise, is given to X-Layer. The initial state of Y-Layer is arbitrary. First, Y-Layer is updated. Subsequently, X-Layer is updated. X-Layer and Y-Layer are updated by turns until both layers become stable. Finally, we obtain the training pattern $x$ in the visible layer.
D. Rotated patterns
We describe why the rotated patterns are not stable. Suppose that $x$ is a training pattern and $y$ is the corresponding pattern in the invisible layer. Then, the relations $W_{YX} x = y$ and $W_{XY} y = x$ hold. Moreover, suppose that the state of the visible layer is a rotated pattern $e^{\sqrt{-1}\theta} x$ of $x$. Then, the invisible layer receives the following weighted sum input $I_Y$:
$$I_Y = W_{YX} \left(e^{\sqrt{-1}\theta} x\right) \qquad (25)$$
$$= e^{\sqrt{-1}\theta} W_{YX}\, x \qquad (26)$$
$$= e^{\sqrt{-1}\theta} y. \qquad (27)$$
Since the invisible layer ignores the imaginary part, it receives $(\cos\theta)\, y$. If $-\frac{\pi}{2} < \theta < \frac{\pi}{2}$, the invisible layer recalls the pattern $y$. Thus, the visible layer recalls $x$. We find that CBAAM recalls the training pattern even when a rotated pattern is given.
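The following small numeric check, under the same assumptions as the learning sketch above, illustrates (25)-(27): even when the visible layer is set exactly to a rotated training pattern, the invisible layer receives (cos theta) y, recalls y, and the visible layer then recalls the original x.

```python
import numpy as np

rng = np.random.default_rng(1)
K, N, M, P = 10, 100, 100, 10
S = np.exp(1j * (2 * np.arange(K) + 1) * np.pi / K)      # visible states s_k
X = S[rng.integers(0, K, size=(N, P))]                   # visible training patterns
Y = rng.choice([-1.0, 1.0], size=(M, P))                 # random invisible patterns
W_YX = Y @ np.linalg.inv(X.conj().T @ X) @ X.conj().T    # (23)
W_XY = X @ np.linalg.inv(Y.T @ Y) @ Y.T                  # (24)

def csign(z):
    # Elementwise K-state activation of the visible layer.
    k = np.floor((np.angle(z) % (2 * np.pi)) / (2 * np.pi / K))
    return np.exp(1j * (2 * k + 1) * np.pi / K)

x, y = X[:, 0], Y[:, 0]
theta = 2 * np.pi / K                                    # angle of the nearest rotated pattern of x
I_Y = W_YX @ (np.exp(1j * theta) * x)                    # (25)-(27): equals exp(i*theta) * y
y_rec = np.where(I_Y.real >= 0, 1.0, -1.0)               # invisible layer sees (cos theta) * y
x_rec = csign(W_XY @ y_rec)                              # visible layer recalls the original x
print(np.allclose(y_rec, y), np.allclose(x_rec, x))      # True True
```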
VI. COMPUTER SIMULATIONS
In this section, we confirm by computer simulations that the noise robustness of CBAAM exceeds that of CHAM. The simulations were carried out under the conditions M = N = 100, K = 10, 20 and 30, and P = 10, 30 and 50. The number of neurons in the invisible layer is 100. We added noise by the following procedure.
1) L neurons were randomly selected.
2) The states of the selected L neurons were replaced with randomly generated states.
L is referred to as the noise level. If the model could restore the original pattern, the trial was regarded as successful.
In each condition, 100 training data sets were generated at random. For each training data set and each noise level L, 100 trials were carried out by the following procedure.
1) A training pattern was selected at random and given to CHAM and CBAAM.
2) Noise was added, and CHAM and CBAAM performed recall.
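A sketch of the noise procedure and success criterion, assuming a pattern stored as a complex K-state vector; `recall` stands for the recall procedure of either model, and the function names are ours.

```python
import numpy as np

def add_noise(x, L, K, rng):
    """Replace the states of L randomly selected neurons with random states."""
    S = np.exp(1j * (2 * np.arange(K) + 1) * np.pi / K)
    noisy = x.copy()
    idx = rng.choice(len(x), size=L, replace=False)   # 1) select L neurons at random
    noisy[idx] = S[rng.integers(0, K, size=L)]        # 2) replace them with random states
    return noisy

def success_rate(X, recall, L, K, trials=100, seed=0):
    """Percentage of trials in which the noisy pattern is restored exactly."""
    rng = np.random.default_rng(seed)
    ok = 0
    for _ in range(trials):
        x = X[:, rng.integers(X.shape[1])]            # pick a training pattern at random
        if np.allclose(recall(add_noise(x, L, K, rng)), x):
            ok += 1
    return 100.0 * ok / trials
```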
Figure 9 shows the simulation results. The horizontal axis shows the noise level and the vertical axis shows the success rate. Here, a trial is successful when all of the noise is removed from the training pattern with L noisy neurons and the correct training pattern is restored. The success rate is obtained by counting how many of the 100 trials restored the training pattern with L noisy neurons. In the simulation results, we find that the noise robustness of CBAAM exceeded that of CHAM in every condition examined.
VII. DISCUSSION
We discuss the computer simulation results. Although CBAAM with generalized inverse matrix learning does not always reach a stable state theoretically, all trials in our computer simulations converged. The noise robustness of both CHAM and CBAAM decreased as the number of training patterns P increased. As the resolution factor K increased, the noise robustness of CHAM decreased while that of CBAAM was unchanged. This implies that CHAM has many spurious patterns around the training patterns but CBAAM does not. Therefore, CBAAM exceeds CHAM especially when the resolution factor K is large.
Some problems remain to be overcome. Generalized inverse matrix learning for CBAAM does not satisfy the requirement $w^{XY}_{ij} = \overline{w^{YX}_{ji}}$. It is necessary to develop learning algorithms that satisfy this requirement. In addition, the patterns of the invisible layer were randomly generated; it is also necessary to generate suitable patterns for the invisible layer.
VIII. CONCLUSION
In this paper, we proposed CBAAM to improve the noise robustness of complex-valued auto-associative memories. CBAAM consists of a complex-valued visible layer and a real-valued invisible layer. The complex-valued visible layer enables CBAAM to process multi-state data.
Fig. 9. Results of computer simulations: the horizontal axis and vertical axis indicate the noise level and the success rate, respectively. The nine panels correspond to K = 10, 20, 30 and P = 10, 30, 50; each panel compares CHAM and CBAAM.
The improvement in noise robustness is due to the real-valued invisible layer. Stable rotated patterns deteriorate noise robustness, and the invisible layer can make rotated patterns unstable. By computer simulations, we found that the noise robustness of CBAAM is much better than that of CHAM. In addition, the noise robustness of CBAAM was determined only by the number of training patterns, whereas the noise robustness of CHAM decreased as the resolution factor increased. Especially in the case of a large resolution factor, CBAAM is effective for improving noise robustness.
Moreover, unlike CHAM, CBAAM has high concurrency due to the independence of the neurons in each layer. As the number of neurons in each layer increases, CBAAM can calculate the weighted sum inputs simultaneously and is expected to process patterns more quickly than CHAM.
In the future, we have to develop a new learning algorithm that solves the following problems.
1) A new learning algorithm has to satisfy $w^{XY}_{ij} = \overline{w^{YX}_{ji}}$ in order to ensure that CBAAM always reaches a stable state.
2) A new learning algorithm has to provide suitable patterns for the invisible layer.
REFERENCES
[1] J. J. Hopfield, Neural networks and physical systems with emer-
gent collective computational abilities, Proceedings of the National
Academy of Sciences of the United States of America, vol.79, no.8,
pp.2554-2558, 1982.
[2] J. J. Hopfield, Neurons with graded response have collective compu-
tational properties like those of two-state neurons, Proceedings of the
National Academy of Sciences of the United States of America, vol.81,
no.10, pp.3088-3092, 1984.
[3] I. N. Aizenberg, N. N. Aizenberg and J. Vandewalle, Multi-valued and
universal binary neurons - theory, learning and application, Kluwer
Academic Publishers, Boston, 2000.
[4] A. J. Noest: Phasor neural networks, Neural Information Processing
Systems, ed. D. Z. Anderson, pp.584-591, AIP, New York, 1988
[5] A. J. Noest: Discrete-state phasor neural networks, Physical Review
A, vol.38, no.4, pp.2196-2199, 1988
[6] S. Jankowski, A. Lozowski and J. M. Zurada: Complex-valued multi-
state neural associative memory, IEEE Trans. Neural Networks, Vol.7,
No.6, pp.1491-1496, 1996
[7] H. Aoki and Y. Kosugi, An image storage system using complex-
valued associative memory, Proceedings of the International Confer-
ence on Pattern Recognition, vol.2, pp.626-629, 2000.
[8] H. Aoki, M. R. Azimi-Sadjadi and Y. Kosugi, Image association using
a complex-valued associative memory model, IEICE TRANSACTIONS
on Fundamentals of Electronics, Communications and Computer Sci-
ences, vol.E83-A, pp.1824-1832, 2000.
[9] M. K. Muezzinoglu, C. S. Guzelis and J. M. Zurada, A new design
method for the complex-valued multistate Hopfield associative mem-
ory, IEEE Transaction on Neural Networks, vol.14, no.4, pp.891-899,
2003.
[10] D. L. Lee, Improvements of complex-valued Hopfield associative
memory by using generalized projection rules, IEEE Transactions on
Neural Networks, vol.17, no.5, pp.1341-1347, 2006.
[11] D. L. Lee, Improving the capacity of complex-valued neural networks
with a modified gradient descent learning rule, IEEE Transactions on
Neural Networks, vol.12, no.2, pp.439-443, 2001.
[12] M. Kobayashi, H. Yamada and M. Kitahara, Noise robust gradi-
ent descent learning for complex-valued associative memory, IEICE
Transactions on Fundamentals of Electronics, Communications and
Computer Science, vol.E94-A, no.8, pp.1756-1759, 2011.
[13] M. Kobayashi, Pseudo-relaxation learning algorithm for complex-
valued associative memory, International Journal of Neural Systems,
vol.18, no.2, pp.147-156, 2008.
[14] R. S. Zemel, C. K. I. Williams and M. C. Mozer, Lending direction
to neural networks, Neural Networks, vol.8, no.4, pp.503-512, 1995.
[15] J. Hertz, A Krogh and R. G. Palmer, Introduction to the theory of
neural computation, Santa Fe Institute Series, vol.1, USA, Perseus
Books, 1991.
[16] M. Kitahara, M. Kobayashi and M. Hattori, Chaotic rotor associative
memory, Proceedings of International Symposium on Nonlinear Theory
and its Applications, pp.399-402, 2009.
[17] M. Kitahara and M. Kobayashi, Fundamental abilities of rotor asso-
ciative memory, Proceedings of 9th IEEE/ACIS International Confer-
ence on Computer and Information Science, pp.497-502, 2010.
[18] M. Kitahara and M. Kobayashi, Gradient descent learning for rotor
associative memory, IEEJ Transactions on Electronics, Information
and Systems, vol.131, no.1, pp.116-121, 2011 (in Japanese).
[19] M. Kitahara, M. Kobayashi and M. Hattori, Reducing spurious
states by rotor associative memory, IEEJ Transactions on Electronics,
Information and Systems, vol.131, no.1, pp.109-115, 2011 (in Japanese).
[20] M. Kitahara and M. Kobayashi, Complex-valued Associative Memory
with Strong Thresholds, Proceedings of International Symposium on
Nonlinear Theory and its Applications, pp.362-365, 2011.
[21] M. Kitahara and M. Kobayashi, Projection rules for complex-valued
associative memory with large constant terms, Nonlinear Theory and
Its Applications, vol.3, no.3, pp.426-435, 2012.
[22] Y. Suzuki, M. Kitahara and M. Kobayashi, Dynamic complex-
valued associative memory with strong bias terms, Proceedings of
International Conference on Neural Information Processing, pp.509-
518, 2011.
[23] Y. Suzuki, M. Kitahara and M. Kobayashi, Rotor associative mem-
ory with a periodic activation function, Proceedings of IEEE World
Congress on Computational Intelligence, pp.720-727, 2012.
[24] B. Kosko, Adaptive bidirectional associative memories, Applied
Optics, vol. 26, no. 23, pp. 4947-4960, 1987.
[25] B. Kosko, Bidirectional associative memories, IEEE Transactions on
Systems Man and Cybernetics, vol. 18, no. 1, pp. 49-60, 1988.
[26] D. L. Lee, A multivalued bidirectional associative memory operating
on a complex domain, Neural Networks, vol. 11, no. 9, pp. 1623-1635,
1998.
[27] Y. Yano and Y. Osana, Chaotic complex-valued bidirectional asso-
ciative memory, Proceedings of IEEE and INNS International Joint
Conference on Neural Networks, pp.3444-3449, 2009.
[28] Y. Yano and Y. Osana, Chaotic complex-valued bidirectional asso-
ciative memory: one-to-many association ability, Proceedings of
International Symposium on Nonlinear Theory and its Applications,
pp.1285-1292, 2009.