
A NOVEL TECHNIQUE FOR UNSUPERVISED TEXTURE SEGMENTATION

M. A. Roula, A. Bouridane, A. Amira, P. Sage and P. Milligan


School of Computer Science
The Queen's University of Belfast
Belfast BT7 1NN
Northern Ireland, United Kingdom
Email: M.Roula@qub.ac.uk
ABSTRACT
Image texture segmentation is an important problem that occurs frequently in many image processing applications. Although a number of algorithms exist in the literature, methods that rely on the Expectation-Maximisation (EM) algorithm are attracting growing interest. The main feature of this algorithm is its ability to estimate the parameters of a mixture distribution. This paper presents a novel unsupervised algorithm based on the Expectation-Maximisation algorithm in which the analysis is applied to vector data rather than to grey levels. This is achieved by defining a likelihood function which measures how well the estimated features fit the observed data. Experiments on images containing various synthetic and natural textures have been carried out, and a comparison with similar existing techniques has shown the superiority of the proposed method.

1. INTRODUCTION
Image segmentation is the process of partitioning an image into homogeneous regions. This task becomes particularly difficult in the case of textured images. Existing segmentation methods are commonly classified according to the texture description they use. In stochastic-based methods, textures are assumed to be a realisation of a two-dimensional random field [2].
Several methods based on Markov Random Fields (MRFs) have been developed. In [2] the image pixel intensities are modelled as a Gauss-Markov random field, and the parameters are estimated by clustering the pixel data into non-overlapping regions of uniform texture. In these methods, the crucial problems are the parameter estimation and the attribution of region labels. In [3], Bouman and Liu used simulated annealing to maximise a posterior function, modelling textures as a causal non-homogeneous Gaussian autoregressive random field.
Recently, the finite mixture model has attracted substantial interest for image segmentation [4]. In this formulation, an observed image is considered as an incomplete data set drawn from complete but unknown data which form a mixture of several classes. The class of each pixel is unknown and must be identified by the segmentation process. An iterative Maximum-Likelihood (ML) estimation scheme, well known as the EM (Expectation-Maximisation) algorithm, has been successfully used in all these methods. However, it is widely recognised that the EM algorithm is computationally expensive and requires good initial conditions for reliable performance [5]. Moreover, all the previous methods treat the grey levels as the mixture data. For textured images in particular, it is very difficult to discriminate between classes using the first-order histogram only. It will be shown that the use of vector data inspired by the MRF model can provide significant results and fast convergence for textured images, without any special constraints on the initial conditions.
The rest of the paper is organised as follows: a brief description of the classical Gaussian mixture model and its EM algorithm is given in Section 2. The vector feature extraction and how the EM algorithm is applied to textured data are described in Section 3. Section 4 discusses the experimental results obtained, while Section 5 gives a summary of the work.
2. THE CLASSICAL MIXTURE MODEL
The Classical Mixture Model (CMM) [6] can be defined as follows: suppose $x_i$ is the $i$th observation of the random variable $X$, $1 \le i \le N$, where $N$ is the number of observations. Let $\{f_j(x \mid \theta_j),\ 1 \le j \le L\}$ be a set of $L$ density functions, each having its own parameter set $\theta_j$. The density function of the random variable $x$ is modelled as a weighted sum of the $L$ density functions:

$$f(x \mid \Theta) = \sum_{j=1}^{L} \alpha_j f_j(x \mid \theta_j) \qquad (1)$$

where $\alpha_j$, $1 \le j \le L$, are the mixture weights.
The aim of maximum-likelihood (ML) estimation is to find the sets of weights $\alpha_j$ and parameters $\theta_j$ that maximise the likelihood function $P(x)$ with regard to the given data $x_i$:

$$P(x) = \prod_{i=1}^{N} \sum_{j=1}^{L} \alpha_j f_j(x_i \mid \theta_j) \qquad (2)$$

In the case of a Gaussian distribution, the parameter set $\theta_j$ consists of both the mean $\mu_j$ and the variance $\sigma_j^2$:

$$f_j(x_i \mid \theta_j) = \frac{1}{\sqrt{2\pi}\,\sigma_j} \exp\left(-\frac{(x_i - \mu_j)^2}{2\sigma_j^2}\right) \qquad (3)$$

The observed data $x$ are supposed to be a subset of complete data $y$. The EM algorithm is an iterative procedure that starts from an initial estimate $\Theta^{(0)}$ and performs the following two steps at each iteration:

1. Expectation step: compute $Q(\Theta \mid \Theta^{(p)}) = E\left[\log f(y \mid \Theta) \mid x, \Theta^{(p)}\right]$.
2. Maximisation step: find $\Theta^{(p+1)} = \arg\max_{\Theta} Q(\Theta \mid \Theta^{(p)})$.
In the case of a Gaussian distribution, the iterative EM algorithm for estimating the density function parameters is given by:

$$\tau_j^{(k)}(x_i) = \frac{\alpha_j^{(k)} f_j\left(x_i \mid \theta_j^{(k)}\right)}{\sum_{l=1}^{L} \alpha_l^{(k)} f_l\left(x_i \mid \theta_l^{(k)}\right)} \qquad (4)$$

$$\alpha_j^{(k+1)} = \frac{1}{N} \sum_{i=1}^{N} \tau_j^{(k)}(x_i) \qquad (5)$$

$$\mu_j^{(k+1)} = \frac{1}{N \alpha_j^{(k+1)}} \sum_{i=1}^{N} \tau_j^{(k)}(x_i)\, x_i \qquad (6)$$

$$\left(\sigma_j^{2}\right)^{(k+1)} = \frac{1}{N \alpha_j^{(k+1)}} \sum_{i=1}^{N} \tau_j^{(k)}(x_i)\left[x_i - \mu_j^{(k+1)}\right]^2 \qquad (7)$$

where $\tau_j^{(k)}(x_i)$ is an intermediate function: the posterior probability, at iteration $k$, that observation $x_i$ belongs to class $j$. The estimated parameters are obtained by substituting (4) into (5), (6) and (7).
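A minimal sketch of one iteration of these updates, reusing the gaussian_pdf helper above (again an illustration, not a reference implementation):

import numpy as np

def em_step(x, alphas, mus, sigmas):
    # One EM iteration implementing Eqs. (4)-(7) for a 1-D Gaussian
    # mixture; x has shape (N,), the parameter arrays have shape (L,).
    N = x.shape[0]
    # Eq. (4): intermediate function tau_j(x_i), shape (L, N).
    weighted = np.array([a * gaussian_pdf(x, m, s)
                         for a, m, s in zip(alphas, mus, sigmas)])
    tau = weighted / weighted.sum(axis=0, keepdims=True)
    # Eq. (5): updated mixture weights.
    new_alphas = tau.sum(axis=1) / N
    # Eq. (6): updated means.
    new_mus = (tau * x).sum(axis=1) / (N * new_alphas)
    # Eq. (7): updated variances (returned as standard deviations).
    new_vars = (tau * (x - new_mus[:, None]) ** 2).sum(axis=1) / (N * new_alphas)
    return new_alphas, new_mus, np.sqrt(new_vars)

Iterating em_step until the change in the parameters falls below a fixed threshold corresponds to the validation procedure described in Section 3.2.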
3. OUTLINE OF THE PROPOSED METHOD
The proposed algorithm consists of two steps (Fig. 2). The first is the computation of a feature vector for every pixel; the EM algorithm is then applied to estimate the mean and the variance of the features for every texture in the image. For this purpose, the number of classes is assumed to be known. Finally, a Bayesian classification rule is applied to attribute a label to each pixel, by defining a likelihood function which computes the probability that a given pixel belongs to a given class.

3.1. Defining the feature vector

The features used to discriminate textures are represented by a vector $K$ of $p+1$ dimensions. The first $p$ elements correspond to the $p$ clique types used in the generalised Ising model [7], while the $(p+1)$th element is the average grey level of the pixels surrounding the pixel at hand. For a given window size $w$, these elements are computed as follows:

$$k_i(x) = \sum_{c \in C_i} \phi_c(x), \quad 1 \le i \le p; \qquad k_{p+1}(x) = \frac{1}{w^2} \sum_{x \in w \times w} x \qquad (8)$$

$$\phi_c = \begin{cases} -1 & \text{if } x_r = x_s \\ +1 & \text{if } x_r \ne x_s \end{cases} \qquad (9)$$

where $C_i$ is the set of all cliques of type $i$ in a $w \times w$ window and $x_r$, $x_s$ are the two pixels of clique $c$. Figure 1 shows the 4 clique types of the second-order Ising model.

Fig. 1. The 4 clique types ($i = 1, \ldots, 4$) of the second-order Ising model.

The main reason for choosing the vector $K(x) = (k_1(x), \ldots, k_{p+1}(x))$ is to provide good discrimination of textures, by using the first $p$ elements as Markov features while the $(p+1)$th provides an indication of the average grey level of the texture. Strictly speaking, the features of the Markov model are the $B_i(x)$ elements, which are generally used in related work [7,8,9]. In that case the algorithm has to estimate the $B_i(x)$ through an optimisation process, a task which must be carried out before the labelling step. By using the $k_i(x)$ instead, we avoid the estimation step of the Markov features, because the $k_i(x)$ are computed directly from the image. Furthermore, the clustering of the pixels in the $B$ space corresponds to their clustering in the $K$ space when a small analysis window is used. The last element of the vector corresponds to the mean grey level of the pixels in the window around the pixel at hand. This adds another level of differentiation between textures in the clustering process, for instance when two different textures have the same MRF features: it has been shown that different textures can share the same MRF features and yet be easily discriminated by the human eye. Moreover, this last feature allows the segmentation of both textured and non-textured images.

Fig. 2. Scheme of the method: from the grey-level image, feature extraction produces $k_1(x), k_2(x), \ldots, k_{p+1}(x)$; an EM step is run on each feature; a validation procedure checks convergence and yields the estimates $\theta_j = \{\theta_{j,1}, \ldots, \theta_{j,L}\}$; Bayesian classification then produces the label image.
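As an illustration of Eqs. (8) and (9), the sketch below computes the feature vector of a single pixel. The clique offsets encode the four second-order clique types of Fig. 1 (horizontal, vertical and the two diagonals); the function layout is an assumption, and grey levels may need quantisation for the equality test of Eq. (9) to be meaningful on natural images:

import numpy as np

# Pixel-pair offsets for the 4 second-order Ising clique types of Fig. 1:
# horizontal, vertical and the two diagonals.
CLIQUE_OFFSETS = [(0, 1), (1, 0), (1, 1), (1, -1)]

def feature_vector(image, row, col, w=8):
    # Feature vector K(x) of Eq. (8) for the pixel at (row, col), using a
    # w x w analysis window (assumed to lie entirely inside the image).
    half = w // 2
    win = image[row - half:row + half, col - half:col + half]
    H, W = win.shape
    k = np.zeros(len(CLIQUE_OFFSETS) + 1)
    for i, (dr, dc) in enumerate(CLIQUE_OFFSETS):
        # Pair each pixel x_r with its neighbour x_s shifted by (dr, dc),
        # keeping only cliques that fit inside the window.
        r0, r1 = max(0, -dr), H - max(0, dr)
        c0, c1 = max(0, -dc), W - max(0, dc)
        xr = win[r0:r1, c0:c1]
        xs = win[r0 + dr:r1 + dr, c0 + dc:c1 + dc]
        # Eq. (9): phi_c = -1 when the two pixels agree, +1 otherwise;
        # Eq. (8) sums phi_c over all cliques of type i.
        k[i] = np.where(xr == xs, -1, 1).sum()
    # Eq. (8), last element: mean grey level of the window.
    k[-1] = win.mean()
    return k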

3.2. The EM algorithm and pixel labelling

From the original image, the feature vector $K(x_i)$ is computed for each pixel $x_i$. The EM algorithm is then applied to each element $K_j(x_i)$ ($1 \le j \le p+1$), considering all the data $\{x_i,\ 1 \le i \le N\}$, where $N$ is the number of pixels. After each EM step, and for every $K_j$, an estimate of $\theta_{j,l} = \{\alpha_{j,l}, \mu_{j,l}, \sigma_{j,l}\}$ is computed for $1 \le l \le L$, where $L$ is the number of classes, which is assumed known and represents the number of textures in the image. A validation procedure is also added to aid the EM process: the procedure is terminated when the difference between the estimated parameters of two consecutive EM steps falls below a fixed threshold $\varepsilon$.
After estimating the Gaussian parameters corresponding to each region, a labelling process is required in order to attribute a label to each pixel. This is carried out using a Bayesian classification method. To this end, we define a likelihood function as follows:
$$f(x_i \mid l) = \prod_{j=1}^{p+1} P_j(x_i \mid l) \qquad (10)$$

where:

$$f_j(x_i \mid l) = \frac{1}{\sqrt{2\pi}\,\sigma_{j,l}} \exp\left(-\frac{\left(k_j(x_i) - \mu_{j,l}\right)^2}{2\,\sigma_{j,l}^2}\right) \qquad (11)$$

$$P_j(x_i \mid l) = \frac{\alpha_{j,l}\, f_j(x_i \mid l)}{\sum_{l'=1}^{L} \alpha_{j,l'}\, f_j(x_i \mid l')} \qquad (12)$$

For a pixel $x_i$, the probability of belonging to a region $l$ along feature $j$ is given by (12). The label $L(x_i)$ is attributed to the class giving the maximum likelihood function:

$$L(x_i) = \arg\max_{l}\, f(x_i \mid l) \qquad (13)$$
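A sketch of this labelling rule, reusing the gaussian_pdf helper and accumulating the product (10) in the log domain (the parameter layout is an assumption, not from the paper):

import numpy as np

def label_pixel(k_vec, params):
    # Bayesian labelling of Eqs. (10)-(13) for one pixel.
    # k_vec: feature vector K(x_i) of length p + 1.
    # params: array of shape (p + 1, L, 3); the last axis holds the
    # estimated (alpha, mu, sigma) of feature j and class l.
    n_features, n_classes, _ = params.shape
    log_f = np.zeros(n_classes)
    for j in range(n_features):
        alpha, mu, sigma = params[j, :, 0], params[j, :, 1], params[j, :, 2]
        # Eq. (11): Gaussian density of feature j under every class l.
        f_j = gaussian_pdf(k_vec[j], mu, sigma)
        # Eq. (12): normalised posterior P_j(x_i | l).
        p_j = alpha * f_j / np.sum(alpha * f_j)
        # Eq. (10): product over the p + 1 features, as a sum of logs.
        log_f += np.log(p_j)
    # Eq. (13): attribute the label of the most likely class.
    return int(np.argmax(log_f))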

4. RESULTS

The algorithm has been tested on a number of 256 x 256 grey-level images. Due to its simplicity, the second-order Ising model has been used for computing the clique functions; consequently, the number of clique types is p = 4. A preliminary analysis was carried out for different window sizes (w = 4, 5, ..., 32), and it was found that w = 8 provided the best trade-off between accuracy and computation time. The method has been tested on a large number of images containing both synthetic and natural textures. The synthetic textures were generated using the Gibbs sampler [8].
To gauge the effectiveness of the algorithm, a comparison with another MRF-based method has been performed. This method, called Selectionist Relaxation (SR) [10], is similar to the proposed method in that it uses a vector of features and the second-order Ising model. It also operates by estimating the texture feature vector through the maximisation of a likelihood function, using an evolutionary approach based on a distributed genetic algorithm for both optimisation and label attribution. Although this method showed good results, it appeared not to be robust, owing to the random nature of the genetic algorithm.
Fig. 3 shows an example of a synthetic textured image containing 3 different textures generated by the Gibbs sampler for different values of the vector $B_i(x)$. The textures are spatially arranged according to a simple geometry. It can be seen that the different regions are clearly segmented by our technique, despite the interference and the thinness of some regions, whereas the SR method fails to do so. Fig. 4 shows another example with natural textures taken from the well-known Brodatz texture album. The proposed algorithm segmented the image with good accuracy after only 7 EM steps, while SR failed even when using a large number of genetic generations.

Fig. 3. (a) Original textured image. (b) Image segmented by the proposed algorithm. (c) Image segmented by SR.

Fig. 4. (a) Original textured image. (b) Image segmented by the proposed algorithm. (c) Image segmented by SR.
5. CONCLUSION

This paper has presented a novel algorithm for the unsupervised segmentation of textured images. The main contribution lies in applying a parallel EM algorithm to vector data and in defining a likelihood function for a robust Bayesian classification. The method has shown good behaviour for both synthetic and natural textures; however, the complexity of some natural textures cannot be efficiently captured by the simple second-order model. Better results could be obtained by using higher-order models, such as the fifth-order model, at the expense of computing 12 clique types and, consequently, of using a larger feature vector.
6. REFERENCES
[1] R. C. Dubes and A. K. Jain, "Random Field Models in Image Analysis," J. Applied Statistics, vol. 16, no. 2, pp. 131-164, 1989.
[2] B. S. Manjunath and R. Chellappa, "Unsupervised Texture Segmentation Using Markov Random Field Models," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 13, no. 5, pp. 478-482, 1991.
[3] C. Bouman and B. Liu, "Multiple Resolution Segmentation of Textured Images," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 13, pp. 99-113, 1991.
[4] S. Sanjay-Gopal and T. J. Hebert, "Bayesian Pixel Classification Using Spatially Variant Finite Mixtures and the Generalized EM Algorithm," IEEE Trans. Image Processing, vol. 7, pp. 1014-1028, 1998.
[5] J.-K. Fwu and P. M. Djurić, "EM Algorithm Initialised by a Tree Structure Scheme," IEEE Trans. Image Processing, vol. 6, 1997.
[6] A. Dempster, N. Laird and D. Rubin, "Maximum Likelihood from Incomplete Data via the EM Algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, 1977.
[7] G. L. Gimel'farb and A. V. Zalesny, "Probabilistic Models of Digital Region Maps Based on Markov Random Fields with Short and Long-Range Interaction," Pattern Recognition Letters, vol. 14, pp. 789-797, 1993.
[8] S. Geman and D. Geman, "Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 6, pp. 721-741, 1984.
[9] H. Derin and H. Elliott, "Modeling and Segmentation of Noisy and Textured Images Using Gibbs Random Fields," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 9, pp. 39-55, 1987.
[10] P. Andrey and P. Tarroux, "Unsupervised Segmentation of Markov Random Field Modeled Textured Images Using Selectionist Relaxation," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, pp. 252-262, 1998.
