
International Research Journal of Computer Science (IRJCS)

Issue 10, Volume 3 (October 2016)

ISSN: 2393-9842
www.irjcs.com

Parameter Adjustment of Pulse Coupled Neural Networks Based on White Pixels Evaluation
Harwikarya*
Department of Informatics, Mercu Buana University

Desi Ramayanti
Department of Informatics, Mercu Buana University

Abstract: This paper presents a new method to automatically stop the iteration of Pulse Coupled Neural Networks (PCNN) by evaluating the number of white pixels. The PCNN is used to segment an image that contains an object and a background. The paper shows how to use the PCNN to separate object and background by adjusting two parameters, the linking coefficient and the number of iterations, based on the number of white pixels in the iterated (binary) images. The experimental results show that the iteration counts for the two test images behave in opposite ways: in the first image the number of iterations is small, i.e. the white pixels saturate in an early iteration, while in the second image the number of iterations is large compared to the first, i.e. the white pixels saturate only after many iterations.
Keywords: PCNN, segmentation, iteration.
I. INTRODUCTION
In this paper we present a new method to adjust the linking coefficient β, one of the parameters in the PCNN equations.
Based on the synchronous dynamics of neural activity observed in the cat visual cortex, the cortical model was introduced as a bio-inspired neural network [1], [2]. The cortical model has the special ability to make adjacent neurons pulse synchronously when their inputs are similar, and over time it came to be regarded as having strong potential in image processing. The cortical model was tailored and modified by Eckhorn into the Pulse Coupled Neural Network (PCNN), a new neural network architecture, and Johnson et al. then developed the PCNN to make it more suitable for image processing [3]-[5]. Recently many researchers have attempted to simplify the PCNN equations to lower the computational cost while retaining the visual-cortex properties. PCNN has three simplified models: the intersecting cortical model [6]-[8], the unit-linking PCNN model [9], [10], and the spiking cortical model (SCM) [11]. PCNN has developed rapidly in the image processing domain, with applications such as image understanding [12], pattern recognition [13], [14], object recognition [15], [16], image segmentation [17], image shadow removal [18], and feature extraction [5], [14]. Detailed assessments of PCNN applications can be found in the literature of Lindblad and Kinser [8], Ma [20], and Wang [21].
Among these image processing applications, PCNN has great potential for developing image segmentation algorithms. Although the PCNN needs no training, all parameters in its equations must be tuned, and the quality of the segmentation results depends on the optimal values of the PCNN parameters. The main weakness of PCNN is that all parameters must be tuned or adjusted manually; this demands heavy experimentation, becomes a serious problem, and will constrain the development of PCNN in the future. Kuntimad and Ranganath [17] analyzed the range (minimum and maximum values) of the linking strength β for a perfect image segmentation after acquiring the intensity ranges of object and background, and determined the minimum amplitude of the dynamic threshold that guarantees each neuron fires exactly once during a pulsing cycle. Karvonen proposed using the information distribution of segments in radar images as the basis for computing the linking strength [18]. Stewart et al. [19] proposed a region-growing PCNN; although it works well for multi-value image segmentation, it requires the time-variant dynamic threshold and linking strength to be set manually by incrementing constants. Bi et al. introduced a new method to optimize β and the weight matrix; it determines the relation of these two parameters to the spatial and gray-level characteristics by setting the amplitude and decrement constant of the dynamic threshold [24]. Yonekawa et al. adjusted the linking strength β, the synaptic weight W, and the exponential decay time constants automatically through many trial iterations [23]. In a word, most of the previous methods could not adjust all the parameters in the PCNN equations properly. Good results have also been reported: Berg et al. proposed evolving PCNN neurons by automatic design of algorithms through evolution [21], and Ma et al. proposed building an automated PCNN system based on a genetic algorithm [22].

Although the results are good, these two algorithms require heavy training to reach the proper parameter values, which costs time and constrains their implementation. In this paper we propose a new method to determine the optimal value of the linking strength β, based on an adjustment condition that stops the PCNN algorithm by evaluating the image processing results, so that the PCNN iterates automatically. In the proposed method we count the number of white pixels in the binary image obtained at each iteration. In general there are three approaches to optimizing PCNN parameters. The first sets an adjustment condition to stop the PCNN algorithm by evaluating the image processing results, so that the PCNN iterates automatically. The second finds the characteristics and changing trends of the parameters in different image processing tasks by analyzing the characteristics (mechanism) of the PCNN parameters qualitatively and quantitatively, and then gives a specific calculation formula for each parameter. The third transforms parameter setting into a multi-objective optimization problem and, for different image processing tasks, searches for the optimal parameter combination by means of an optimization algorithm to achieve PCNN self-adaptation. We fix the first-group and second-group parameters of the PCNN and vary β. We then iterate the PCNN on the test image, which contains an object and a background, and obtain a binary image as the result. Given the first value of β we iterate the PCNN, count the number of white pixels at each iteration, and stop the iteration when the number of white pixels becomes saturated. With the second value of β we again iterate the PCNN and proceed as in the previous step. We compare the results from every value of β and from them state the optimal value of β, the linking strength. The general flowchart of the proposed method is shown in Figure 1.1.
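As an illustration of this stopping criterion, the Python sketch below counts the white pixels in a binary PCNN output and flags saturation once the count stops changing. The function names and the `window` and `tol` parameters are our own illustrative choices, not part of the paper's algorithm description.

```python
import numpy as np

def count_white(Y):
    """Number of white (firing) pixels in a binary PCNN output image."""
    return int(np.count_nonzero(Y))

def is_saturated(white_counts, window=3, tol=0):
    """Heuristic saturation test: the white-pixel count has changed by at
    most `tol` over the last `window` iterations (illustrative choice)."""
    if len(white_counts) < window + 1:
        return False
    recent = white_counts[-(window + 1):]
    return max(recent) - min(recent) <= tol
```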

Figure 1.1 General flowchart of proposed method.


II. PULSE COUPLED NEURAL NETWORKS
After Johnson et al. invented and developed the model, the PCNN is described by the following five equations:

F_{ij}[n] = e^{-\alpha_F} F_{ij}[n-1] + S_{ij} + V_F \sum_{kl} m_{ijkl} Y_{kl}[n-1]        (2.1)

L_{ij}[n] = e^{-\alpha_L} L_{ij}[n-1] + V_L \sum_{kl} w_{ijkl} Y_{kl}[n-1]                 (2.2)

U_{ij}[n] = F_{ij}[n] \left( 1 + \beta L_{ij}[n] \right)                                   (2.3)

\theta_{ij}[n] = e^{-\alpha_\theta} \theta_{ij}[n-1] + V_\theta Y_{ij}[n-1]                (2.4)

Y_{ij}[n] = \begin{cases} 1, & \text{if } U_{ij}[n] > \theta_{ij}[n] \\ 0, & \text{otherwise} \end{cases}   (2.5)

The equations contain seven parameters: αF, αL, αθ, VF, VL, Vθ, and β. The first group of parameters, αF, αL, and αθ, are attenuation time constants. The second group, VF, VL, and Vθ, are voltage potentials; the index F means feeding, L means linking, and θ means dynamic threshold. β is the linking coefficient. M and W are the weight matrices, whose values depend on the field of the surrounding neurons, i.e. the linking field. For example, if each neuron is surrounded by eight neurons, the linking field is a 3x3 matrix, so M and W are 3x3 matrices too; their entries follow Gaussian weights that depend on distance, and the two matrices are usually chosen to be the same. The index (i, j) is the position of a neuron in the network and of a pixel in the input image; for example, if the input image size is 512x512 pixels, the indices (i, j) range between (1, 1) and (512, 512). The index (k, l) represents the positions of the surrounding neurons, n is the iteration step number, and Sij is the gray level of the input image pixel. Fij[n] is the feeding, Lij[n] is the linking, Uij[n] is the internal activity, and θij[n] is the dynamic threshold; some references denote θij[n] as Tij[n], the threshold. The pulse output of a neuron, Yij[n], is a binary value that indicates the status of the neuron: Yij[n] = 1 means the neuron is activated. Therefore the activation of a neuron at the n-th iteration depends on the value of its center pixel in the window, Sij; the outputs of the surrounding neurons at the previous iteration, Ykl[n-1]; and the internal signals of the neuron itself at the previous iteration, namely the feeding Fij[n-1], the linking Lij[n-1], the internal activity Uij[n-1], and the threshold θij[n-1]. The entire network output Yij[n] forms a binary image; at a certain iteration the binary image can contain information about the input image, such as features, class information, and edge information. The neuron model according to Johnson [4] and the field diagram of the PCNN are shown in Figures 2.1 and 2.2.
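The following sketch builds such a distance-based 3x3 linking kernel with Gaussian fall-off. The paper does not state the exact weights, so the sigma value and the zero centre (no self-linking) are assumptions made for illustration.

```python
import numpy as np

def linking_kernel(size=3, sigma=1.0):
    """Distance-based Gaussian weight kernel for the linking field.
    M and W are usually chosen to be the same matrix."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    K = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    K[r, r] = 0.0          # assumption: a neuron does not link to itself
    return K

M = W = linking_kernel(3)
```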

Figure 2.1. PCNN neuron model [4].


The input of a neuron consists of feeding and linking; the feeding directly receives the pixel value Sij, which represents the center pixel of the observation window. This window typically has a dimension of 3x3, 5x5, or 7x7 pixels; the larger the window, the more detail is lost in the iterated image, so its dimension must be chosen as a compromise. The neuron uses multipliers and adders to form the mathematical equations (2.1) to (2.5). In Figure 2.2 the feeding and linking are grouped into the receptive field, corresponding to equations (2.1) and (2.2), followed by the modulation field, equation (2.3), and the pulse generator, equations (2.4) and (2.5).
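Putting the receptive field, modulation field, and pulse generator together with the white-pixel stopping rule gives the illustrative segmentation loop below. It reuses the pcnn_step, count_white, and is_saturated sketches above and is not the authors' exact implementation.

```python
import numpy as np

def segment(S, M, W, beta, max_iter=30, **consts):
    """Iterate the PCNN until the white-pixel count saturates or
    max_iter is reached; return the final binary image and the counts."""
    F = np.zeros_like(S)
    L = np.zeros_like(S)
    theta = np.ones_like(S)
    Y = np.zeros_like(S)
    counts = []
    for n in range(1, max_iter + 1):
        F, L, theta, Y = pcnn_step(S, F, L, theta, Y, M, W, beta, **consts)
        counts.append(count_white(Y))
        if is_saturated(counts):
            break
    return Y, counts
```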

Figure 2.2. PCNN field diagram.


III. RESULTS AND DISCUSSIONS
The experiments work with two images, Lena and Ant. Lena is a BMP image with dimensions 512x512 pixels; Ant is a PNG image with dimensions 400x400 pixels. The results of the experiments are shown in Figure 3.1.


A. EXPERIMENT 1 WITH LENA.BMP


Lena is a grey-level picture in BMP format. The picture contains an object, Lena herself, and a background. In this picture the object and background have mostly similar pixel values, so they overlap each other. We set the first-group and second-group parameters as presented in Table 3.1.
TABLE 3.1. FIXED PARAMETERS

NO  PARAMETER                   SYMBOL  VALUE
1   Attenuation time constant   αF      0.01
2   Attenuation time constant   αL      0.01
3   Attenuation time constant   αθ      0.01
4   Voltage potential           VF      0.5
5   Voltage potential           VL      0.5
6   Voltage potential           Vθ      1
7   Linking coefficient         β       2 - 10
8   Number of iterations        n       2 - 30

The results are shown in Figure 3.3, in the first and second rows. In the first row the original image, Lena, is iterated with the parameters shown in Table 3.1. We then varied the number of iterations n while varying the linking coefficient, so each value of the linking coefficient has its own group of iteration numbers; in this experiment we limited the range from 2 to 30. In Figure 3.3 we display only two groups of linking coefficients due to limited space. Lena has its best result in the first iterations, and as the number of iterations increases the image becomes dark. In the second row we change the linking coefficient, and again Lena has its best result at low iteration numbers. When the number of white pixels becomes saturated the PCNN stops iterating, and Lena becomes dark as with the previous linking coefficient value. The histogram of Lena is shown in Figure 3.4.
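This sweep over the linking coefficient and iteration count can be expressed roughly as follows, using the fixed constants of Table 3.1 and the segment sketch above. The placeholder input and the commented-out loading call are assumptions, not part of the paper.

```python
import numpy as np

# Placeholder input; in the experiment this would be the normalised Lena image,
# e.g. S = imageio.v2.imread('lena.bmp').astype(float) / 255.0 (assumed loader).
rng = np.random.default_rng(0)
S = rng.random((64, 64))

for beta in range(2, 11):                      # linking coefficient range in Table 3.1
    Y, counts = segment(S, M, W, beta, max_iter=30,
                        aF=0.01, aL=0.01, aT=0.01, VF=0.5, VL=0.5, VT=1.0)
    print(beta, len(counts), counts[-1])       # stop iteration and saturated white count
```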

Figure 3.3. Results of iterated PCNN


B. EXPERIMENT 2 WITH ANT.PNG
In this experiment we work with a different form of object. We use a picture of an ant; Ant is a grey-level picture in PNG format. The picture contains an object, the ant itself, and a background. In this picture the object and background have different pixel values, so they are clearly separated.

We set the first-group and second-group parameters as presented in Table 3.1, like the previous experiment. This experiment behaves in the opposite way to the previous one: at low iteration numbers the Ant image changes only slightly, and as the number of iterations increases the number of white pixels increases too. In the third row the result is unsatisfactory because the picture of the object is unclear; even the best result in the third row cannot resolve the object from its background.
For a different linking coefficient, in the fourth row, the results are good. In columns 5 and 6 the PCNN can resolve the object from the background fairly well: the object becomes white and the background becomes black. We stop the iteration because the number of white pixels saturates. In columns 5 and 6 the ant's shape is visible, but not exactly the shape shown in the original image. Column 6 gives a better result than column 5, but the shadow of the ant appears and the body of the ant contains black pixels. The histograms of the original Ant image and of the iteration results are shown in Figure 3.4. The result would be perfect if and only if the whole ant shape consisted of white pixels. In these experiments the linking coefficient is considered optimal when it gives the maximum number of white pixels; the ant turned into black pixels when the linking coefficient was increased further. This means there is another parameter that should be tuned simultaneously, either a first-group or a second-group parameter. These results guide the authors to pursue research combining the first-group or second-group parameters with the linking coefficient, while keeping the number of white pixels as the basis for stopping the iteration automatically.
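A simple way to express the "maximum number of white pixels" rule used above to call a linking coefficient optimal is sketched below. It reuses the segment function from Section II and is only an illustration of the selection rule, not the authors' code.

```python
def best_beta(S, M, W, betas, **consts):
    """Return the linking coefficient whose saturated binary image has the
    largest white-pixel count, together with the counts for all candidates."""
    scores = {}
    for beta in betas:
        _, counts = segment(S, M, W, beta, **consts)
        scores[beta] = counts[-1]
    return max(scores, key=scores.get), scores
```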

Figure 3.4. Histograms of the original and binary images of Lena and Ant.


IV. CONCLUSIONS
The Pulse Coupled Neural Network is capable of segmenting object and background in the two experimental images. Although the results for these two images are fairly good, the linking coefficient still has to be adjusted more precisely. Our major contribution in this paper is a quantitative approach to stopping the PCNN iteration. The results showed that for the Lena image the iteration stops early because the white pixels saturate, whereas for the Ant image the iteration stopped when n was 20. These results indicate that the linking coefficient and the number of iterations are the primary parameters to be adjusted to obtain good results. In future research we will work with images of the same format, a series of BMP, PNG, and JPEG images, as well as special images such as MRI, X-ray, and radar images, to have an apples-to-apples comparison.
ACKNOWLEDGMENT
The authors would like to thank Dr. Devi Fitrianah, MTI, Head of the Research Center, Mercu Buana University, for funding this research, and all colleagues at the Computing Laboratory, Department of Informatics, Faculty of Computer Science, for supporting this research.

REFERENCES
[1] R. Eckhorn, H. J. Reitboeck, M. Arndt, and P. W. Dicke, "Feature linking via synchronization among distributed assemblies: Simulations of results from cat visual cortex," Neural Comput., vol. 2, no. 3, pp. 293-307, 1990.
[2] R. Eckhorn, H. J. Reitboeck, M. Arndt, and P. W. Dicke, "A neural network for feature linking via synchronous activity: Results from cat visual cortex and from simulations," in Models of Brain Function. Cambridge, U.K.: Cambridge Univ. Press, 1989, pp. 255-272.
[3] J. L. Johnson and D. Ritter, "Observation of periodic waves in a pulse coupled neural network," Opt. Lett., vol. 18, no. 15, pp. 1253-1255, Aug. 1993.
[4] J. L. Johnson, "Pulse-coupled neural nets: Translation, rotation, scale, distortion, and intensity signal invariance for images," Appl. Opt., vol. 33, no. 26, pp. 6239-6253, Sep. 1994.
[5] J. L. Johnson, "Pulse-coupled neural networks," Proc. SPIE Adapt. Comput.: Math., Electron., Opt. Crit. Rev., vol. CR55, pp. 47-76, Apr. 1994.
[6] J. M. Kinser, "Simplified pulse-coupled neural network," Proc. SPIE Appl. Sci. Artif. Neural Netw. II, vol. 2760, pp. 563-567, Mar. 1996.
[7] U. Ekblad, J. M. Kinser, J. Atmer, and N. Zetterlund, "The intersecting cortical model in image processing," Nucl. Instrum. Methods Phys. Res. Sec. A: Accel., Spect., Detect. Assoc. Equip., vol. 525, nos. 1-2, pp. 392-396, Jun. 2004.
[8] T. Lindblad and J. M. Kinser, Image Processing Using Pulse-Coupled Neural Networks, 2nd ed. New York: Springer-Verlag, 2005.
[9] X. Gu, "Feature extraction using unit-linking pulse coupled neural network and its applications," Neural Process. Lett., vol. 27, no. 1, pp. 25-41, Feb. 2008.
[10] X. Gu, L. Zhang, and D. Yu, "General design approach to unit-linking PCNN for image processing," in Proc. Int. Joint Conf. Neural Netw., vol. 3, Jul.-Aug. 2005, pp. 1836-1841.
[11] K. Zhan, H. Zhang, and Y. Ma, "New spiking cortical model for invariant texture retrieval and image processing," IEEE Trans. Neural Netw., vol. 20, no. 12, pp. 1980-1986, Dec. 2009.
[12] H. S. Ranganath, G. Kuntimad, and J. L. Johnson, A Neural Network for Image Understanding. Oxford, U.K.: Oxford Univ. Press, 1997.
[13] K. Waldemark, T. Lindblad, V. Becanovic, J. L. Guillen, and P. L. Klingner, "Patterns from the sky: Satellite image analysis using pulse coupled neural networks for pre-processing, segmentation and edge detection," Pattern Recognit. Lett., vol. 21, no. 3, pp. 227-237, Mar. 2000.
[14] R. C. Muresan, "Pattern recognition using pulse-coupled neural networks and discrete Fourier transforms," Neurocomputing, vol. 51, pp. 487-493, Apr. 2003.
[15] J. M. Kinser and J. L. Johnson, "Object isolation," Opt. Memor. Neural Netw., vol. 5, no. 3, pp. 137-145, 1996.
[16] H. S. Ranganath and G. Kuntimad, "Object detection using pulse coupled neural networks," IEEE Trans. Neural Netw., vol. 10, no. 3, pp. 615-620, May 1999.
[17] G. Kuntimad and H. S. Ranganath, "Perfect image segmentation using pulse coupled neural networks," IEEE Trans. Neural Netw., vol. 10, no. 3, pp. 591-598, May 1999.
[18] J. A. Karvonen, "Baltic sea ice SAR segmentation and classification using modified pulse-coupled neural networks," IEEE Trans. Geosci. Remote Sens., vol. 42, no. 7, pp. 1566-1574, Jul. 2004.
[19] R. D. Stewart, I. Fermin, and M. Opper, "Region growing with pulse coupled neural networks: An alternative to seeded region growing," IEEE Trans. Neural Netw., vol. 13, no. 6, pp. 1557-1562, Nov. 2002.
[20] Y. Lu, J. Miao, L. Duan, Y. Qiao, and R. Jia, "A new approach to image segmentation based on simplified region growing PCNN," Appl. Math. Comput., vol. 205, no. 2, pp. 807-814, Nov. 2008.
[21] H. Berg, R. Olsson, T. Lindblad, and J. Chilo, "Automatic design of pulse coupled neurons for image segmentation," Neurocomputing, vol. 71, nos. 10-12, pp. 1980-1993, Jun. 2008.
[22] Y. D. Ma and C. L. Qi, "Study of automated PCNN system based on genetic algorithm," J. Syst. Simul., vol. 18, no. 3, pp. 722-725, 2006.
[23] M. Yonekawa and H. Kurokawa, "An automatic parameter adjustment method of pulse coupled neural network for image segmentation," in Proc. Artif. Neural Netw., Limassol, Cyprus, 2009, pp. 834-843.
[24] Y. Bi, T. Qiu, X. Li, and Y. Guo, "Automatic image segmentation based on a simplified pulse coupled neural network," in Advances in Neural Networks, vol. 3174, F.-L. Yin, J. Wang, and C. Guo, Eds. Berlin, Germany: Springer-Verlag, 2004, pp. 199-211.
