
2010 12th International Workshop on Cellular Nanoscale Networks and their Applications (CNNA)

Analysis of Biomedical Textured Images with Application of Synchronized Oscillator-based CNN
Michal Strzelecki
Institute of Electronics, Technical University of Lodz, Lodz, Poland
mstrzel@p.lodz.pl

Joonwhoan Lee, Sung-Hwan Jeong
Department of Computer Engineering, Chonbuk National University, Jeonju, Republic of Korea
chlee@jbnu.ac.kr, shjeong@jbnu.ac.kr

Abstract—This paper focuses on the analysis of biomedical images, including textured ones. A segmentation method based on a network of synchronized oscillators is presented. Oscillator networks can be considered a special case of the CNN. Their oscillatory dynamics allows the different features of objects forming the visual scene to be encoded, which makes these networks suitable for medium-level image processing, such as image segmentation. Oscillator networks can process both two- and three-dimensional images. The proposed method was tested on several biomedical images acquired with different modalities. The principles of operation of the oscillator network are described and discussed. Segmentation results obtained for sample 2D and 3D biomedical images are presented and compared to image segmentation based on a multilayer perceptron network (MLP).

Keywords—oscillator network, segmentation, biomedical textured images

I. INTRODUCTION

Automation of medical diagnosis often requires automatic analysis of biomedical images. Usually, such images contain a large variety of different textures. Visual texture is a rich source of image information, but its definition is descriptive rather than formal. Texture is considered a complex visual pattern composed of spatially organized entities that have characteristic brightness, color, shape and size. These local sub-patterns are characterized by a given coarseness, fineness, regularity, smoothness, etc. [1]. Texture is also perceived as homogeneous by the human visual system. In biomedical images, such textures reflect cross-sections or projections of different human tissues or organs. Thus, there is a strong need to develop image texture processing techniques, firstly to obtain new information not directly available to the human eye [2], [3]. Secondly, these techniques are essential in automatic and semi-automatic systems for medical diagnosis support [4]. This work focuses on a biomedical image segmentation technique, in which the image is divided into disjoint homogeneous regions. This is a preliminary step which allows further image analysis, such as identification or classification of the segmented human tissues.
The segmentation method presented in this paper implements a network of synchronized oscillators of the Terman-Wang type [5]. This relatively recent tool is based on the temporal correlation theory, which attempts to explain
scene recognition as it would be performed by the human brain. This theory assumes that different groups of neural cells encode different properties of homogeneous image regions (e.g. shape, color, texture). Monitoring the temporal activity of the oscillating cell groups allows detection of such image regions and, consequently, leads to scene segmentation. Coherent oscillations were discovered, for example, in visual cortex neural cells, where synchronized neuronal oscillations in the 35-85 Hz frequency range were reported.
A synchronized oscillator network (SON) is similar to the cellular neural network (CNN) introduced by Chua [6]. Both networks possess local feedback connections; the output is nonlinear and depends on a weighted sum of neighbouring cells. The most important difference between the two networks is the dynamics of their units: equilibrium dynamics in the case of the CNN and oscillatory dynamics in the case of the SON. Oscillations offer an additional degree of freedom related to phase, which is employed to encode the different features of objects forming the visual scene [7]. This property makes the oscillator network more suitable for medium-level image processing, for instance image segmentation.
The SON was successfully used for segmentation of Brodatz textures, biomedical images and other textured images [8]. The advantage of this network is its adaptation to local image changes (related both to image intensity and to texture), which in turn ensures correct segmentation of noisy and blurred image fragments. Another advantage is that synchronized oscillators do not require any training process, unlike artificial neural networks. In consequence, this leads to a shorter analysis time. Finally, such a network can be manufactured as a VLSI chip for very fast image segmentation [9].
This paper is organized as follows: Section II presents the operation principles of the oscillator network; the extension of the SON concept to the three-dimensional case, for segmentation of 3D images, is also presented there. Segmentation results obtained by means of the SON and the MLP (multilayer perceptron network) are compared and discussed in Section III. Finally, Section IV concludes the paper.

II. NETWORK OF SYNCHRONISED OSCILLATORS

To implement the image segmentation technique based on the temporal correlation theory, an oscillator network was proposed. Each network oscillator is defined by two differential equations [5]-[10]:

dx/dt = 3x - x^3 + 2 - y + I_T
dy/dt = ε [ γ (1 + tanh(x/β)) - y ]        (1)

where x is referred to as an excitatory variable while y is an inhibitory variable. I_T is the total stimulation of an oscillator and ε, γ, β are constant parameters which influence the oscillation period and its filling ratio. The x-nullcline is a cubic curve while the y-nullcline is a sigmoid function, as shown in Fig. 1(a). If I_T > 0, then equation (1) possesses a periodic solution, represented by the bold black line shown in Fig. 1(a). The operating point moves along this line from the left branch (LB, which represents the so-called silent phase), then jumps from the left knee (LK) to the right branch (RB, which represents the so-called active phase), next reaches the right knee (RK) and jumps back to the left branch. If I_T ≤ 0, the oscillator is inactive (produces no oscillation).
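For illustration, the following sketch integrates a single oscillator defined by (1) with a forward-Euler scheme in Python. The parameter values ε, γ, β and the step size are assumptions chosen only to reproduce the qualitative behaviour described above, not values taken from this paper; with I_T > 0 the trajectory cycles between the silent and active phases, while with I_T ≤ 0 it settles at a fixed point.

import numpy as np

def terman_wang(i_t, eps=0.02, gamma=6.0, beta=0.1,
                dt=0.01, steps=20000, x0=-2.0, y0=0.0):
    """Forward-Euler integration of one Terman-Wang oscillator, eq. (1).
    Parameter values are illustrative assumptions, not those of the paper."""
    x, y = x0, y0
    xs, ys = np.empty(steps), np.empty(steps)
    for n in range(steps):
        dx = 3.0 * x - x ** 3 + 2.0 - y + i_t               # cubic x-nullcline
        dy = eps * (gamma * (1.0 + np.tanh(x / beta)) - y)  # sigmoid y-nullcline
        x, y = x + dt * dx, y + dt * dy
        xs[n], ys[n] = x, y
    return xs, ys

# I_T > 0: periodic silent/active cycling; I_T <= 0: oscillator stays inactive.
active_x, _ = terman_wang(i_t=0.5)
silent_x, _ = terman_wang(i_t=-0.5)
print("active range:", active_x.min(), active_x.max())
print("silent range:", silent_x.min(), silent_x.max())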
Oscillators defined by (1) are connected to form a two-dimensional network. In the simplest case, each oscillator is connected only to its four nearest neighbors (2D case) or to 26 neighbors (3D case). Larger neighborhood sizes are also possible. Sample networks are shown in Fig. 1(b), (c). The network dimensions are equal to the dimensions of the analyzed image and each oscillator represents a single image pixel. Each oscillator in the network is connected with a so-called global inhibitor (GI in Fig. 1(b), (c)), which receives information from the oscillators and in turn can inhibit the whole network. Generally, the total oscillator stimulation I_T is given by (2) [5]-[10]:
I_T = Σ_{k∈N(i)} W_ik H(x_k - θ_x) - W_z GI        (2)

where W_ik are dynamic synaptic weights connecting oscillators i and k. The number of these weights depends on the neighborhood size N(i). In the case presented in Fig. 1(b), N(i) contains the 4 nearest neighbors of the i-th oscillator for the 2D case and 6 for the 3D extension (except for oscillators located on the network boundaries).
Due to these local excitatory connections, an active oscillator spreads its activity over the whole oscillator group which represents an image object. This provides synchronization of the group. θ_x is a threshold above which oscillator k becomes active. H is the Heaviside function, equal to one if its argument is greater than zero and zero otherwise. W_z is the weight of the global inhibitor GI; GI is equal to one if at least one network oscillator is in the active phase (x > 0) and equal to zero otherwise. The role of the global inhibitor is to desynchronize the oscillator groups representing objects other than the one currently being synchronized. The global inhibitor will not affect an already synchronized oscillator group, because for such a group the sum in (2) has a greater value than W_z GI.
Figure 1. Nullclines and trajectory of eq. (1) (a). A fragment of the oscillator network for the 2-dimensional (b) and 3-dimensional (c) case. Each oscillator is connected with its neighbors using positive weights W_ik. GI denotes the global inhibitor, connected to each network oscillator.

The weight W_ik connecting two neighboring oscillators which represent image pixels i and k is defined by (3):

W_ik = N_o L / (|f_i - f_k| + 1)        (3)
where f_i, f_k are the grey levels of pixels i and k, respectively, N_o is the number of active oscillators in the neighborhood N(i) and L is the number of image grey levels. Hence, weights are high inside homogeneous regions and low across region boundaries.
Because the excitation of any oscillator depends on the sum of the weights of its neighbors, all oscillators in a homogeneous region oscillate in synchrony. Each region is represented by a different oscillator group. Oscillator activation switches sequentially between the groups in such a way that, at a given time, only one group (representing a given region) oscillates synchronously.
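The sketch below illustrates the grey-level weight computation following the reconstructed form of (3); the exact scaling used in the original implementation may differ, so the constants here are assumptions.

def grey_level_weight(f_i, f_k, n_active, n_levels=256):
    """Dynamic weight between pixels i and k following the form of eq. (3):
    high for similar grey levels, low across region boundaries.
    n_active is the number of active oscillators in N(i); n_levels is L.
    The exact scaling is an assumption based on the reconstruction of (3)."""
    return n_active * n_levels / (abs(int(f_i) - int(f_k)) + 1.0)

# Example: neighbours inside a homogeneous region vs. across an edge.
print(grey_level_weight(120, 122, n_active=4))   # large weight
print(grey_level_weight(120, 200, n_active=4))   # small weight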
In the case of segmentation of texture images, the network weights are set according to (4) [10]:

W_ij = A / Σ_{k=1}^{s} ( |f_i^k - f_j^k| / f̄_{N(i)}^k )        (4)
where A is the number of active oscillators in the neighborhood N(i), f_i^k, f_j^k denote the k-th texture feature evaluated for oscillators i and j, respectively, f̄_{N(i)}^k is the mean value of feature f^k calculated for the active oscillators in the neighborhood N(i), and s is the number of texture parameters. These parameters are evaluated over windows centered at pixels i and j. They are selected to ensure good separation of the analyzed textures. Hence, weights are high inside homogeneous texture regions and low across texture boundaries. Thus, the network operation is exactly the same as in the case of gray-level images. Segmentation of the texture regions is performed by analyzing the oscillator outputs.
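Analogously, the sketch below follows the reconstructed form of (4) for texture-feature weights; the feature vectors would come from texture parameters such as those discussed in Section III, and the exact normalisation is an assumption.

import numpy as np

def texture_weight(feat_i, feat_j, feat_mean_ni, n_active):
    """Weight between oscillators i and j following the form of eq. (4):
    the number of active neighbours A divided by the sum of feature
    differences, each normalised by the neighbourhood mean of that feature.
    feat_i, feat_j, feat_mean_ni are length-s vectors of texture parameters.
    The exact normalisation is an assumption based on the reconstruction of (4)."""
    feat_i = np.asarray(feat_i, dtype=float)
    feat_j = np.asarray(feat_j, dtype=float)
    feat_mean_ni = np.asarray(feat_mean_ni, dtype=float)
    diff = np.sum(np.abs(feat_i - feat_j) / (np.abs(feat_mean_ni) + 1e-12))
    return n_active / (diff + 1e-12)

# Two pixels with similar feature vectors get a large weight.
print(texture_weight([0.9, 0.1], [0.85, 0.12], [0.9, 0.11], n_active=4))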
A segmentation algorithm using the oscillator network was presented in [11]. It is based on a simplified oscillator model and does not require solving (1) for each oscillator; it can also be easily adapted to a 3D network. This algorithm was used for segmentation of different biomedical images [12], [13], [16].
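The listing below is only a schematic outline of how such group-by-group labelling can be organized (pick an unlabelled seed, spread activity through connections whose weight exceeds a threshold, then let the global inhibitor switch to the next group). It is not the simplified algorithm of [11]; the weight function and the threshold are assumptions for illustration.

import numpy as np
from collections import deque

def four_neighbors(i, shape):
    r, c = i
    return [(r + dr, c + dc) for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
            if 0 <= r + dr < shape[0] and 0 <= c + dc < shape[1]]

def grey_weight(img, i, k):
    # Illustrative stand-in for eq. (3) with the N_o factor omitted.
    return 256.0 / (abs(int(img[i]) - int(img[k])) + 1.0)

def label_regions(image, weight_fn, w_threshold, neighbors_fn):
    """Schematic, flood-fill-like outline of oscillator-group labelling.
    A group grows through connections whose weight exceeds w_threshold."""
    labels = -np.ones(image.shape, dtype=int)
    current = 0
    for seed in np.ndindex(image.shape):
        if labels[seed] >= 0:
            continue                      # already part of a synchronized group
        labels[seed] = current
        queue = deque([seed])
        while queue:                      # spread activity over the group
            i = queue.popleft()
            for k in neighbors_fn(i, image.shape):
                if labels[k] < 0 and weight_fn(image, i, k) > w_threshold:
                    labels[k] = current
                    queue.append(k)
        current += 1                      # global inhibitor: move to next group
    return labels

img = np.array([[10, 12, 200], [11, 13, 205], [9, 14, 210]], dtype=np.uint8)
print(label_regions(img, grey_weight, w_threshold=30.0, neighbors_fn=four_neighbors))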
III. IMAGE SEGMENTATION EXAMPLES AND DISCUSSION

The proposed method was applied to the analysis of optical images of leg ulcers, shown in Fig. 2(a). Leg ulcers result from venous, arterial or lymphatic blood vessel disturbances. The aim of this research was to elaborate a new image analysis technique, based on photographic and digital image data, for evaluating the extent of leg ulcers and the progress of treatment [12]. Fig. 2 presents sample leg ulcers and segmentation results which enable automatic estimation of the ulcer area. Comparison of the ulcer areas evaluated for the same patient (based on subsequent photographs taken after some period) enables evaluation of the efficiency of the healing process. Segmentation results for the oscillator network are presented in Fig. 2(b). In this case, the network weights were set according to (3).
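Given the segmented ulcer mask, the area estimate reduces to counting pixels and multiplying by the physical pixel size obtained from the camera calibration. The sketch below is a minimal illustration; the mask sizes and pixel area are assumed numbers, not measurements from this study.

import numpy as np

def region_area(mask, pixel_area_mm2):
    """Area of a segmented region (e.g., an ulcer) in mm^2.
    mask is a boolean array produced by the segmentation;
    pixel_area_mm2 must come from the camera calibration (assumed here)."""
    return float(np.count_nonzero(mask)) * pixel_area_mm2

# Relative change between two follow-up photographs of the same patient.
area_before = region_area(np.ones((120, 90), dtype=bool), pixel_area_mm2=0.04)
area_after = region_area(np.ones((100, 80), dtype=bool), pixel_area_mm2=0.04)
print("healing progress: %.1f%%" % (100.0 * (area_before - area_after) / area_before))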
The described method was also tested on sample magnetic resonance biomedical images representing a human foot cross-section, containing the heel and metatarsus bones. These images were recorded at the German Cancer Research Centre, Heidelberg, Germany, using a 1.5 T Siemens scanner.

Figure 2. Optical image of leg ulcers (a), segmentation result using SON
(b).

An example image of size 512×512 is shown in Fig. 3(a). Segmentation of these images is aimed at separating the foot and heel bones (marked by white lines) from other tissues and the image background. The extracted region is interpreted by physicians to evaluate bone micro-architecture in osteoporosis diagnosis. As texture features, the Gaussian-Markov random field (GMRF) model parameters were assumed. It was demonstrated in [13] that the GMRF model fits this class of textures very well. Thus, the GMRF parameters were estimated for each image point (a vector of 15 parameters). Then nonlinear discriminant analysis [13] was performed, as in the case of echocardiograms. Two new nonlinear features were used to form the oscillator network weights. Two segmentation tools were tested on the same texture feature set: the multilayer perceptron network and the oscillator network. Segmentation results are shown in Fig. 3(b), (c), respectively.
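For reference, GMRF parameters can be estimated in a window around each pixel by least squares, regressing the centre grey level on sums of symmetric neighbour pairs. The sketch below uses an assumed four-offset neighbour set and a synthetic window; it does not reproduce the 15-parameter model or window size of [13].

import numpy as np

# Assumed symmetric neighbour offsets; [13] uses a larger, 15-parameter model.
OFFSETS = [(0, 1), (1, 0), (1, 1), (1, -1)]

def gmrf_features(window):
    """Least-squares GMRF parameter estimate for one image window.
    Solves g(s) ~ sum_r theta_r * (g(s+r) + g(s-r)) over interior pixels s
    and returns theta as a texture feature vector."""
    w = np.asarray(window, dtype=float)
    rows, cols = w.shape
    q_list, g_list = [], []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            q = [w[r + dr, c + dc] + w[r - dr, c - dc] for dr, dc in OFFSETS]
            q_list.append(q)
            g_list.append(w[r, c])
    Q, g = np.array(q_list), np.array(g_list)
    theta, *_ = np.linalg.lstsq(Q, g, rcond=None)
    return theta

# Example: features of a 9x9 window drawn from a noisy horizontal-stripe texture.
rng = np.random.default_rng(0)
stripes = (np.indices((9, 9))[0] % 2) * 50.0 + rng.normal(0, 2, (9, 9))
print(gmrf_features(stripes))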
The results for the perceptron and for the oscillator network programmed for region detection are similar. Generally, the bones were correctly extracted from the image background. A number of segmentation errors are visible at the bone boundary. Another type of error can be observed in the image background, where some tissue regions were identified as bone by the multilayer perceptron (lower right part of Fig. 3(b), (c), marked by a white rectangle). In the case of the oscillator network, this fragment was correctly recognized. The better performance of the oscillator network can be explained by the local interaction between oscillators, which are activated based on their active neighbors. Suppose that a uniform region contains a point or a small group of points which differ in some way from this region. Then, during network operation on this region, the active oscillators assigned to it will surround these points. Thus, the weights of the oscillators connected to these points will increase because of the increasing factor A in (4). As a consequence, these oscillators can be activated, assigning the distinctive points to the uniform region. The oscillator network provides bone object labeling that can be useful for further processing.

The SON-based segmentation method was finally applied to segmentation of 3D liver images obtained using magnetic resonance tomography (Philips Medical Systems). These images were recorded at the University of Rennes, France. The image size was 192×192×80 voxels (volumetric pixels), with a resolution of 2.08×2.08×2.0 mm/voxel and 12 bits per voxel. Sample axially oriented cirrhotic liver cross-sections are presented in Fig. 4(a), (b), (c). The liver shape was marked by a radiologist using a dotted line. The long-range objective of this research is to check whether morphological changes which occur in the liver (caused by liver diseases) are reflected in its MR images. The objective of this particular analysis was to separate the liver region from the rest of the image [14].
Two segmentation methods were used to perform the liver segmentation. The first approach implemented the oscillator network. Network weights were set according to (3) and a neighborhood N(i) = 26 was assumed. Segmentation results are presented in Fig. 4(d), (e), (f), where only the liver tissue is shown. Generally, the liver area was detected correctly; some segmentation errors are visible close to the liver boundary, where some other tissues were classified as liver. No veins or artifacts were detected in the internal part of the liver tissue. Next, the MLP neural network was applied. It was trained with a training set containing 64 gray-level samples drawn randomly from both the liver and the background regions. This network had 8 neurons in the hidden layer and two neurons in the output layer, corresponding to the two classes. The number of network neurons was limited to maintain an appropriate relation between the number of network weights and the number of samples in the training set (42/128 < 1/3). For network training, the b11 software [15] was used with the backpropagation algorithm [16]. Segmentation results are presented in Fig. 4(g), (h), (i) (also in this case, only the detected liver region is displayed). They are worse when compared to the SON, because other organs and tissues in the liver neighborhood were also classified as liver. In almost all slices, liver veins and some artifacts were also detected.
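The MLP baseline can be approximated with any standard neural-network library. The sketch below uses scikit-learn instead of the b11 software of [15] and synthetic grey-level samples, so it only mirrors the reported configuration (8 hidden neurons, two output classes, backpropagation training); it is not the setup actually used in this study.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Assumed stand-in for the reported setup: grey level as the input feature,
# 8 hidden neurons, two classes (liver / background). Samples are synthetic.
rng = np.random.default_rng(0)
liver = rng.normal(140, 10, (64, 1))        # 64 grey-level samples per class
background = rng.normal(60, 15, (64, 1))
X = np.vstack([liver, background])
y = np.array([1] * 64 + [0] * 64)

mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
mlp.fit(X, y)

# Segmentation = per-voxel classification of the grey levels.
test_voxels = np.array([[150.0], [55.0], [100.0]])
print(mlp.predict(test_voxels))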
As can be seen from Fig. 4, the oscillator network provides better segmentation results than those obtained with the multilayer perceptron. Its result corresponds with the shape marked by the expert. In the case of the MLP, some other organs and tissues were misclassified, while the oscillator network was able to classify them properly. This is caused by the different weight setting in the two considered segmentation techniques: in the case of the MLP, the weights are evaluated during the learning process and then remain constant during image segmentation. Thus, the image classification performed by this network and, consequently, the segmentation results depend on the selection of the training set. The gray-level distribution in the whole image may differ from that present in the set used for network training. In consequence, several image fragments might be wrongly segmented by the MLP. In the case of the oscillator network, its weights depend on the difference between the gray levels of neighboring image pixels. Thus, if the gray levels of some part of the liver region differ from their neighbors, the network weights can adapt to these changes. Due to this ability, weight values high enough (according to (3)) to exceed the oscillator activation threshold in (2) are obtained. Consequently, the analyzed region will be correctly classified as liver. This phenomenon explains the correct segmentation of the images shown in Fig. 4(d), (e), (f). Furthermore, the SON is computationally much simpler (no image preprocessing, feature extraction or iterative calculations are required) and relatively fast. Analysis of the images from Fig. 4 using the SON took 4.6 s on a Pentium IV 2.6 GHz computer (3.5 s in the case of the MLP).
IV. CONCLUSION

In this paper, image segmentation based on a network of synchronized oscillators was presented. The proposed segmentation technique provides reliable segmentation results for the analysed biomedical images, including textured ones. It was also demonstrated that:
- the SON provides quite accurate segmentation results when compared to the MLP;
- the advantage of the proposed method is its resistance to changes of visual image information and also to noise and artifacts, often present in biomedical images;
- the proposed technique is simple and computationally feasible; no image preprocessing, network training (as in the case of the MLP) or iterative calculations are required;
- the SON can be extended to 3D image segmentation.
Further research will focus on the application of the oscillator network to segmentation of 3D textured images.

Figure 3. Sample MRI image of a human foot cross-section (a); segmentation results using the multilayer perceptron (b) and the synchronised oscillator network (c).

REFERENCES

[1] M. Hajek, M. Dezortova, A. Materka and R. Lerski (Eds.), Texture Analysis for Magnetic Resonance Imaging, Prague: Med4publishing, 2006.
[2] R. Lerski et al., "Image Texture Analysis - An Approach to Tissue Characterisation," Magnetic Resonance Imaging, vol. 11, pp. 873-887, 1993.
[3] K. Zieliński and M. Strzelecki, Computerised Analysis of Biomedical Images, Warszawa: PWN, 2002 (in Polish).
[4] S. Yu and L. Guan, "A CAD System for the Automatic Detection of Clustered Microcalcifications in Digitized Mammogram Films," IEEE Trans. on Medical Imaging, vol. 19, pp. 115-126, 2000.


[5] D. Wang, "Emergent Synchrony in Locally Coupled Neural Oscillators," IEEE Trans. on Neural Networks, vol. 6, pp. 941-948, 1995.
[6] L. O. Chua and L. Yang, "Cellular neural networks: Applications," IEEE Trans. on Circuits and Systems, vol. 35, pp. 1273-1290, 1988.
[7] D. Wang, "A comparison of CNN and LEGION networks," Proc. of IJCNN, pp. 1735-1740, 2004.
[8] N. Shareef, D. Wang and R. Yagel, "Segmentation of Medical Images Using LEGION," IEEE Trans. on Medical Imaging, vol. 18, pp. 74-91, 1999.
[9] J. Kowalski, M. Strzelecki and P. Majewski, "CMOS VLSI Chip of Network of Synchronised Oscillators: Functional Tests Results," Proc. of the IEEE Workshop on Signal Processing, Poznan, Poland, pp. 71-76, September 29, 2006.
[10] E. Çesmeli and D. Wang, "Texture Segmentation Using Gaussian-Markov Random Fields and Neural Oscillator Networks," IEEE Trans. on Neural Networks, vol. 12, pp. 394-404, 2001.
[11] P. Linsay and D. Wang, "Fast numerical integration of relaxation oscillator networks based on singular limit solutions," IEEE Trans. on Neural Networks, vol. 9, pp. 523-532, 1998.
[12] A. Zalewska-Janowska, M. Strzelecki and A. Kwiecień, "Evaluation of the leg ulcers using computer based image analysis," Advances in Dermatology and Allergology, vol. XXI, pp. 291-295, 2004.
[13] M. Strzelecki, "Segmentation of MRI trabecular-bone images using network of synchronised oscillators," Machine Graphics and Vision, vol. 11, pp. 77-100, 2002.
[14] M. Strzelecki, J. de Certaines and S. Ko, "Segmentation of 3D MR Liver Images Using Synchronised Oscillators Network," Proc. of ISITC, Jeonju, Korea, pp. 259-263, November 19-21, 2007.
[15] A. Materka and A. Klepaczko, On-line b11 User's Manual, file b11.chm, available at http://www.eletel.p.lodz.pl/mazda, Lodz, 2008.
[16] A. K. Jain, R. P. W. Duin and J. Mao, "Statistical pattern recognition: a review," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 22, pp. 4-37, 2000.


Figure 4. Sample cross-sections of MR cirrhotic human liver images extracted from 3D data (a,b,c). Segmentation results using network of synchronized
oscillators (d,e,f) and multilayer perceptron (g,h,i). Numbers indicate the slice number.
