The input matrix must be symmetric. This condition made those fast neural networks slower than conventional neural networks. Another symmetric form for the input matrix was introduced in [1-9] to speed up the operation of these fast neural networks. Here, corrections for the cross correlation equations (given in [13,15,16]) to compensate for the symmetry condition are presented. After these corrections, it is proved mathematically that the number of computation steps required by fast neural networks is less than that needed by classical neural networks. Furthermore, there is no need to convert the input data into symmetric form. Moreover, this new idea is applied to increase the speed of neural networks when processing complex values. Simulation results after these corrections using MATLAB confirm the theoretical computations.

Keywords—Fast Code/Data Detection, Neural Networks, Cross Correlation, Real/Complex Values.

Manuscript received October 21, 2004. H. M. El-Bakry is an assistant lecturer with the Faculty of Computer Science and Information Systems, Mansoura University, Egypt. He is now a PhD student at the University of Aizu, Aizu Wakamatsu City, Japan 965-8580 (phone: +81-242-37-2760, fax: +81-242-37-2743, e-mail: d8071106@u-aizu.ac.jp). Q. Zhao is a professor with the Information Systems Department, University of Aizu, Japan (e-mail: qf-zhao@u-aizu.ac.jp).

I. INTRODUCTION

In [3,4], the same principle was used for fast detection of a certain code/data in a given one dimensional matrix (sequential data). This was done by converting the input matrices into symmetric forms. In this paper, corrections for the errors in the cross correlation equations introduced in [13,15,16] are presented. Theoretical and practical results after these corrections prove that our proposed fast neural algorithm is faster than the previous algorithms as well as classical neural networks. In section II, fast neural networks for code/data detection are described. The correct fast neural algorithm for detecting a certain code/data in given one dimensional sequential data is presented in section III. This algorithm can be applied to communication applications. Here, this algorithm is used to increase the speed of neural networks dealing with complex values. The new fast neural networks with real/complex successive input values are presented in section IV.

The convolution theorem in mathematical analysis says that a convolution of f with h is identical to the result of the following steps: let F and H be the results of the Fourier Transformation of f and h in the frequency domain. Multiply F and H in the frequency domain point by point and then transform this product into the spatial domain via the inverse Fourier Transform.
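The theorem stated above is easy to check numerically. The sketch below (assuming NumPy's FFT routines; array sizes are illustrative) compares a direct circular convolution with the frequency-domain route:

```python
import numpy as np

# Convolution theorem check: convolving f with h directly gives the same
# result as multiplying the Fourier transforms F and H point by point and
# transforming the product back with the inverse FFT.
rng = np.random.default_rng(0)
N = 16
f = rng.standard_normal(N)
h = rng.standard_normal(N)

# Direct circular convolution: (f * h)[k] = sum_n f[n] h[(k - n) mod N]
direct = np.array([sum(f[n] * h[(k - n) % N] for n in range(N))
                   for k in range(N)])

# Frequency-domain route: F = FFT(f), H = FFT(h), then inverse FFT of F*H
F, H = np.fft.fft(f), np.fft.fft(h)
via_fft = np.fft.ifft(F * H).real

assert np.allclose(direct, via_fft)
```

The direct sum costs O(N^2) operations while the FFT route costs O(N log N), which is the source of the speed gains discussed throughout this paper.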
International Scholarly and Scientific Research & Innovation 2(10) 2008 3586 ISNI:0000000091950263
World Academy of Science, Engineering and Technology
International Journal of Computer and Information Engineering
Vol:2, No:10, 2008
hidden neuron (i). Eq.1 represents the output of each hidden neuron for a particular sub-matrix I. It can be applied to the whole matrix Z; therefore, Eq.2 can be written as follows [1]:

hi = g(Z ⊗ Xi + bi)          (4)

Therefore, q backward and (1+q) forward transforms have to be computed. Thus, for a given matrix under test, the total number of operations required to compute the 1D-FFT is (2q+1)Nlog2N, which may be simplified to:

ρ = 5(Nlog2N)          (8)
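Eq.4 says that each hidden neuron's output over the whole input Z is the cross correlation of Z with that neuron's n weights Xi, plus a bias bi, passed through the activation g. A minimal sketch (the logistic activation and the sample values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Sketch of Eq.4: hi = g(Z (x) Xi + bi). The cross correlation slides the
# neuron's weight window Xi over the whole 1xN input Z, giving one
# activation per sub-matrix position.
def g(x):
    # logistic sigmoid, assumed here for illustration
    return 1.0 / (1.0 + np.exp(-x))

def hidden_neuron_output(Z, Xi, bi):
    # np.correlate computes the sliding dot product of Z with Xi
    return g(np.correlate(Z, Xi, mode="valid") + bi)

Z = np.array([0.2, -0.1, 0.5, 0.3, -0.4, 0.1])   # N = 6 input values
Xi = np.array([0.5, -0.2, 0.1])                  # n = 3 weights
h_i = hidden_neuron_output(Z, Xi, 0.05)
# one activation in (0, 1) per window position: N - n + 1 = 4 values
```

Computing this sliding dot product for all q neurons at once in the frequency domain is what the operation counts below measure.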
for all neurons. Also, a number of real computation steps equal to N is required to create the butterfly complex numbers e^(-jk(2πn/N)), where 0<k<L. These (N/2) complex numbers are multiplied by the elements of the input matrix or by previous complex numbers during the computation of the FFT. Creating a complex number requires two real floating point operations. Thus, the total number of computation steps required by fast neural networks becomes:

σ = (2q+1)(5Nlog2N) + 6qN + q(N-n) + qN + N          (9)

6- Using a sliding window of size 1xn over the same matrix of 1xN pixels, q(2n-1)(N-n+1) computation steps are required when using classical neural networks for certain code detection. The theoretical speed up factor η can be evaluated as follows [11]:

η = q(2n-1)(N-n+1) / ((2q+1)(5Nlog2N) + q(8N-n) + N)          (11)

7- But as proved in [10], this cross correlation in the frequency domain (fast neural networks) gives the same correct results as conventional cross correlation (conventional neural networks) only when the input matrices are symmetric.

II. FAST NEURAL NETWORKS FOR DETECTING A CERTAIN CODE IN A STREAM OF ONE DIMENSIONAL SERIAL DATA

In [3,4], another symmetric form was introduced for the input one dimensional matrix so that fast neural networks can give the same correct results as conventional neural networks. The input matrix is converted into symmetric form by obtaining its mirror and testing both the matrix and its mirror version as one (symmetric) matrix consisting of two matrices. In this case, the symmetric matrix will have dimensions of (1x2N). Assume that the original input matrix (1xN dimensions) Xo is:

By substituting in Eq.9 for the new dimensions, the number of computation steps required for cross correlating this new matrix with the weights in the frequency domain can be calculated as follows [3,4]:

σ = (2q+1)(5(2N)log2(2N)) + 6q(2N) + q(2N-n) + q(2N) + 2N          (14)

which can be simplified to:

σ = (2q+1)(10Nlog2(2N)) + q(16N-n) + 2N          (15)

So, the speed up ratio in this case can be calculated as [3,4]:

η = q(2n-1)(N-n+1) / ((2q+1)(10Nlog2(2N)) + q(16N-n) + 2N)          (16)

The theoretical speed up ratio in this case with different sizes of the input matrix and different sizes of the weight matrices is shown in Fig.1. Also, the practical speed up ratio for manipulating matrices of different sizes and weight matrices of different sizes is shown in Fig.2 using a 700 MHz processor and MATLAB.

There are critical errors in the cross correlation equations presented in [13,15,16]. Eq.3 and Eq.4 (which is Eq.4 in [16] and also Eq.13 in [15] but in two dimensions) are not correct. Eq.3 is not correct because the definition of cross correlation is:

f(x) ⊗ d(x) = Σ (n = -∞ to ∞) f(x+n) d(n)          (17)

and then Eq.4 must be written as follows:

hi = g(Z ⊗ Xi + bi)          (18)
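The definition in Eq.17 can be checked in finite, circular form. The sketch below (NumPy, with illustrative vectors) shows two points that underlie the corrections above: cross correlation is not commutative in general, and its frequency-domain form requires the conjugate of one transform, unlike convolution, where a plain point-by-point product suffices:

```python
import numpy as np

# Eq.17 in circular, finite form: (f (x) d)[x] = sum_n f[(x+n) mod N] d[n].
def xcorr(f, d):
    N = len(f)
    return np.array([sum(f[(x + n) % N] * d[n] for n in range(N))
                     for x in range(N)])

f = np.array([1.0, 2.0, 3.0])
d = np.array([0.0, 0.0, 1.0])

# cross correlation is NOT commutative: f (x) d differs from d (x) f
assert not np.allclose(xcorr(f, d), xcorr(d, f))

# in the frequency domain, cross correlation is ifft(F * conj(D)).
# ifft(F * D) would give convolution instead, which only coincides with
# correlation when the matrix is symmetric -- hence the symmetry condition.
F, D = np.fft.fft(f), np.fft.fft(d)
via_fft = np.fft.ifft(F * np.conj(D)).real
assert np.allclose(via_fft, xcorr(f, d))
```

This is exactly why the mirrored (symmetric) form of section II, or the corrected equations above, are needed when the correlation is evaluated through the FFT.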
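The operation counts of Eq.9 and Eq.14 can be compared directly. The sketch below (with illustrative values of N, n and q) shows why avoiding the mirrored 1x2N symmetric form roughly halves the work of the frequency-domain algorithm:

```python
import numpy as np

# sigma_eq9: computation steps for fast neural networks on the original
# 1xN matrix (Eq.9). sigma_eq14: the same count after the matrix is
# mirrored into a symmetric 1x2N form (Eq.14). q is the number of hidden
# neurons and n the length of the code to be detected.
def sigma_eq9(N, n, q):
    return (2*q + 1) * (5 * N * np.log2(N)) + 6*q*N + q*(N - n) + q*N + N

def sigma_eq14(N, n, q):
    N2 = 2 * N
    return ((2*q + 1) * (5 * N2 * np.log2(N2))
            + 6*q*N2 + q*(N2 - n) + q*N2 + N2)

N, n, q = 1_000_000, 400, 30   # illustrative sizes
# the mirrored form roughly doubles every term, so Eq.9 is cheaper
assert sigma_eq9(N, n, q) < sigma_eq14(N, n, q)
```

This difference is what separates the speed up ratio of Eq.11 (no mirroring) from that of Eq.16 (mirrored input), as compared in Figures 1-4.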
[13,15,16] to think about how to modify the operation of cross correlation so that Eq.4 can give the same correct results as conventional neural networks. Therefore, the errors in these equations must be made clear to all researchers. In [1-10], the authors proved that a symmetry condition must be found in the input matrices (the input matrix under test and the weights of the neural networks) so that fast neural networks can give the same results as conventional neural networks. In case of symmetry, W⊗Ψ=Ψ⊗W, the cross correlation becomes commutative, and this is a valuable achievement. In this case, the cross correlation is performed without any constraints on the arrangement of the input matrices.

The correct theoretical speed up ratio, given by Eq.11, with different sizes of the input matrix and different sizes of the weight matrices is listed in Fig.3. The practical speed up ratio for manipulating matrices of different sizes and weight matrices of different sizes is listed in Fig.4 using a 700 MHz processor and MATLAB ver 5.3. In this case, both theoretical and practical results show that the speed up ratios are higher than the previous speed up ratios listed in Figures 1 and 2. For different lengths of the detected code (n), a comparison between the numbers of computation steps required by fast and conventional neural networks is shown in Figures 5, 6 and 7. It is clear that the number of computation steps required by conventional neural networks is much larger than that needed by fast neural networks. Furthermore, as shown in these three figures, the number of computation steps required by fast neural networks is the same. This is because the length of the code (n), which is required to be detected, does not affect the number of computation steps required by fast neural networks (Eq.10). Moreover, the number of computation steps required by conventional neural networks increases with the length of the code (n). Thus, as shown in Figures 3 and 4, the speed up ratio increases with the length of the input matrix.

IV. A NEW FAST COMPLEX-VALUED CODE DETECTION ALGORITHM USING NEURAL NETWORKS

Detection of complex values has many applications, especially in fields dealing with complex numbers such as telecommunications, speech recognition and image processing with the Fourier Transform [17,18]. For neural networks, processing complex values means that the inputs, weights, thresholds and activation function have complex values. For sequential data, neural networks accept serial input data of fixed size (n). Therefore, the number of input neurons equals (n). Instead of treating (n) inputs, our new idea is to collect the input data together in a long vector (for example, 100xn). Then the successive input data is tested by fast neural networks as a single pattern of length L (for example, L=100xn). Such a test is performed in the frequency domain as described in section II. In this section, formulas for the speed up ratio with different types of inputs will be presented. The special case when the imaginary part of the inputs is 0 (i.e., real input values) is considered. Also, the speed up ratio in the case of one and two dimensional input matrices will be concluded. The operation of fast neural networks depends on computing the Fast Fourier Transform for both the input and weight matrices and obtaining the resulting two matrices. After performing dot multiplication of the resulting two matrices in the frequency domain, the Inverse Fast Fourier Transform is calculated for the final matrix. As the Fast Fourier Transform already deals with complex numbers, there is no change in the number of computation steps required by fast neural networks. Therefore, the speed up ratio in the case of fast neural networks dealing with different types of inputs can be evaluated as follows:

1) In case of real inputs and complex weights

A) For one dimensional input matrix
Multiplication of (n) complex-valued weights by (n) real inputs requires (2n) real operations. This produces (n) real numbers and (n) imaginary numbers. The addition of these numbers requires (2n-2) real operations. Therefore, the number of computation steps required by conventional neural networks can be calculated as:

θ = 2q(2n-1)(N-n+1)          (19)

The speed up ratio in this case can be computed as follows:

η = 2q(2n-1)(N-n+1) / ((2q+1)(5Nlog2N) + q(8N-n) + N)          (20)

A comparison between the numbers of computation steps required by conventional and fast neural networks with different code sizes is shown in Figures 8, 9 and 10. The theoretical speed up ratio for searching for a short successive code of length (n) in a long input vector (L) using fast neural networks is shown in Fig.11. Also, the practical speed up ratio for manipulating matrices of different sizes (L) and complex-valued weight matrices of different sizes (n) is shown in Fig.12 using a 700 MHz processor and MATLAB. It is clear that the speed up ratio increases proportionally with the size of the code to be detected. This is very important for fast detection of large size codes. Such a result proves that the proposed algorithm is a good achievement.

B) For two dimensional input matrix
Multiplication of (n²) complex-valued weights by (n²) real inputs requires (2n²) real operations. This produces (n²) real numbers and (n²) imaginary numbers. The addition of these numbers requires (2n²-2) real operations. Therefore, the number of computation steps required by conventional neural networks can be calculated as:
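The counts of Eq.19 and Eq.20 can be sketched directly. In the snippet below (illustrative values; q is the number of hidden neurons), theta counts the steps for conventional networks with real inputs and complex weights, and eta is the resulting speed up ratio:

```python
import numpy as np

# theta (Eq.19): steps for conventional networks, real inputs and complex
# weights -- twice the real-valued count because each weight has a real
# and an imaginary part.
def theta_eq19(N, n, q):
    return 2 * q * (2*n - 1) * (N - n + 1)

# eta (Eq.20): theta divided by the frequency-domain step count, which is
# unchanged because the FFT already operates on complex numbers.
def eta_eq20(N, n, q):
    fast = (2*q + 1) * (5 * N * np.log2(N)) + q * (8*N - n) + N
    return theta_eq19(N, n, q) / fast

# the speed up grows with the code length n, as Figs. 11 and 12 indicate
assert eta_eq20(1_000_000, 900, 30) > eta_eq20(1_000_000, 400, 30)
```

Note that the numerator grows roughly linearly in n while the denominator is dominated by the n-independent FFT cost, which is why longer codes benefit more.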
frequency domain separately using a single fast neural processor.

V. CONCLUSION

A fast neural algorithm for detecting a certain code/data in given sequential data has been presented. A new correct formula for the speed up ratio has been established. Furthermore, correct equations for one dimensional cross correlation in the spatial and frequency domains have been presented. Moreover, commutative cross correlation in one dimension has been achieved by converting the non-symmetric input matrices into symmetric forms. Theoretical computations after these corrections have shown that fast neural networks require fewer computation steps than conventional ones. Simulation results using MATLAB have confirmed this. The presented fast neural networks have been applied successfully to detect a small code in a long stream of sequential data.

REFERENCES

[12] H. M. El-Bakry, "Automatic Human Face Recognition Using Modular Neural Networks," Machine Graphics & Vision Journal (MG&V), vol. 10, no. 1, pp. 47-73, 2001.
[13] S. Ben-Yacoub, B. Fasel, and J. Luettin, "Fast Face Detection using MLP and FFT," Proc. of the Second International Conference on Audio and Video-based Biometric Person Authentication (AVBPA'99), 1999.
[14] H. A. Rowley, S. Baluja, and T. Kanade, "Neural Network-Based Face Detection," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 23-38, 1998.
[15] B. Fasel, "Fast Multi-Scale Face Detection," IDIAP-Com 98-04, 1998.
[16] S. Ben-Yacoub, "Fast Object Detection using MLP and FFT," IDIAP-RR 11, IDIAP, 1997.
[17] A. Hirose, Complex-Valued Neural Networks: Theories and Applications, Series on Innovative Intelligence, vol. 5, Nov. 2003.
[18] S. Jankowski, A. Lozowski, and M. Zurada, "Complex-valued Multistate Neural Associative Memory," IEEE Trans. on Neural Networks, vol. 7, pp. 1491-1496, 1996.
[19] H. M. El-Bakry and Q. Zhao, "Fast Object/Face Detection Using Neural Networks and Fast Fourier Transform," International Journal on Signal Processing, vol. 1, no. 3, pp. 182-187, 2004.
[20] H. M. El-Bakry and Q. Zhao, "A Modified Cross Correlation in the Frequency Domain for Fast Pattern Detection Using Neural Networks," International Journal on Signal Processing, vol. 1, no. 3, pp. 188-194.
Lebanese American University, Beirut, Lebanon, June 25-29, 2001. He was invited for a talk at the Biometric Consortium, Orlando, Florida, USA, 12-14 Sep. 2001, which was co-sponsored by the United States National Security Agency (NSA) and the National Institute of Standards and Technology (NIST).

Dr. Zhao received the Ph.D. degree from Tohoku University, Japan, in 1988. He joined the Department of Electronic Engineering of Beijing Institute of Technology, China, in 1988, first as a post-doctoral fellow and then as an associate professor. He was an associate professor from Oct. 1993 at the Department of Electronic Engineering of Tohoku University, Japan. He joined the University of Aizu, Japan, in April 1995 as an associate professor and became a tenured full professor in April 1999. Prof. Zhao's research interests include image processing, pattern recognition and understanding, computational intelligence, neurocomputing and evolutionary computation.
Fig. 1 The theoretical speed up ratio in case of converting sequential data into symmetric form by mirroring the input matrix (speed up ratio vs. length of serial input data, for n=400, 625, 900)
Fig. 2 Practical simulation results for the speed up ratio in case of converting sequential data into symmetric form by mirroring the input matrix (speed up ratio vs. length of serial input data, for n=400, 625, 900)
Fig. 3 The theoretical speed up ratio for detecting a certain code (n) in a stream of serial data (speed up ratio vs. length of serial input data, for n=400, 625, 900)
Fig. 4 Practical speed up ratio for detecting a certain code (n) in a stream of serial data (speed up ratio vs. length of serial input data, for n=400, 625, 900)
Fig. 5 A comparison between the numbers of computation steps required by fast and conventional neural networks (n=400) (computation steps vs. length of one dimensional input matrix)
Fig. 6 A comparison between the numbers of computation steps required by fast and conventional neural networks (n=625) (computation steps vs. length of one dimensional input matrix)
Fig. 7 A comparison between the numbers of computation steps required by fast and conventional neural networks (n=900) (computation steps vs. length of one dimensional input matrix)
Fig. 8 A comparison between the numbers of computation steps required by fast and conventional neural networks in case of real-valued one dimensional input matrix and complex-valued weight matrix (n=400)
Fig. 9 A comparison between the numbers of computation steps required by fast and conventional neural networks in case of real-valued one dimensional input matrix and complex-valued weight matrix (n=625) (computation steps vs. length of one dimensional input matrix)
Fig. 10 A comparison between the numbers of computation steps required by fast and conventional neural networks in case of real-valued one dimensional input matrix and complex-valued weight matrix (n=900)
Fig. 11 The relation between the speed up ratio and the length of one dimensional real-valued input matrix in case of complex-valued weights (for n=400, 625, 900)
Fig. 12 Practical speed up ratio in case of one dimensional real-valued input matrix and complex-valued weights (speed up ratio vs. length of input matrix, for n=400, 625, 900)
Fig. 13 A comparison between the numbers of computation steps required by fast and conventional neural networks in case of real-valued two dimensional input matrix and complex-valued weight matrix (n=20) (computation steps vs. size of two dimensional input matrix)
Fig. 14 A comparison between the numbers of computation steps required by fast and conventional neural networks in case of real-valued two dimensional input matrix and complex-valued weight matrix (n=25) (computation steps vs. size of two dimensional input matrix)
Fig. 15 A comparison between the numbers of computation steps required by fast and conventional neural networks in case of real-valued two dimensional input matrix and complex-valued weight matrix (n=30) (computation steps vs. size of two dimensional input matrix)
Fig. 16 The relation between the speed up ratio and the size of two dimensional real-valued input matrix in case of complex-valued weights (for n=20, 25, 30)
Fig. 17 Practical speed up ratio in case of two dimensional real-valued input matrix and complex-valued weights (for n=20, 25, 30)
Fig. 18 A comparison between the numbers of computation steps required by fast and conventional neural networks in case of complex-valued one dimensional input matrix and complex-valued weight matrix (n=400)
Fig. 19 A comparison between the numbers of computation steps required by fast and conventional neural networks in case of complex-valued one dimensional input matrix and complex-valued weight matrix (n=625) (computation steps vs. length of one dimensional input matrix)
Fig. 20 A comparison between the numbers of computation steps required by fast and conventional neural networks in case of complex-valued
one dimensional input matrix and complex-valued weight matrix (n=900)
Fig. 21 The relation between the speed up ratio and the length of one dimensional real-valued input matrix in case of complex-valued weights (for n=400, 625, 900)
Fig. 22 Practical speed up ratio in case of one dimensional complex-valued input matrix and complex-valued weights (for n=400, 625, 900)
Fig. 23 A comparison between the numbers of computation steps required by fast and conventional neural networks in case of complex-valued two dimensional input matrix and complex-valued weight matrix (n=20) (computation steps vs. size of two dimensional input matrix)
Fig. 24 A comparison between the numbers of computation steps required by fast and conventional neural networks in case of complex-valued two dimensional input matrix and complex-valued weight matrix (n=25) (computation steps vs. size of two dimensional input matrix)
Fig. 25 A comparison between the numbers of computation steps required by fast and conventional neural networks in case of complex-valued two dimensional input matrix and complex-valued weight matrix (n=30) (computation steps vs. size of two dimensional input matrix)
Fig. 26 The relation between the speed up ratio and the size of two dimensional real-valued input matrix in case of complex-valued weights (for n=20, 25, 30)
Fig. 27 Practical speed up ratio in case of two dimensional complex-valued input matrix and complex-valued weights (for n=20, 25, 30)