Image Texture Segmentation

School of Computer Science

The Queen's University of Belfast

Belfast BT7 1NN

Northern Ireland, United Kingdom

Email: M.Roula@qub.ac.uk

ABSTRACT

Image texture segmentation is an important problem that occurs frequently in many image processing applications. Although a number of algorithms exist in the literature, methods that rely on the Expectation-Maximisation (EM) algorithm are attracting growing interest. The main feature of this algorithm is that it is capable of estimating the parameters of a mixture distribution. This paper presents a novel unsupervised algorithm based on the Expectation-Maximisation algorithm in which the analysis is applied to vector data rather than the grey level alone. This is achieved by defining a likelihood function which measures how well the estimated features fit the present data. Experiments on images containing various synthetic and natural textures have been carried out, and a comparison with existing, similar techniques has shown the superiority of the proposed method.

1. INTRODUCTION

Image segmentation is the process of partitioning an image into homogeneous regions. This task becomes particularly difficult in the case of textured images. Existing segmentation methods are commonly classified according to the texture description. In stochastic methods, textures are assumed to be a realisation of a two-dimensional random field [2].

Several methods based on Markov Random Fields (MRFs) have been developed. In [2] the image pixel intensities are modelled as a Gauss-Markov random field. The parameters are estimated by clustering pixel data into non-overlapping regions of uniform texture. In these methods, the crucial problems are parameter estimation and region label attribution. In [3], Bouman and Liu used simulated annealing to maximise a posterior function by modelling textures as a causal non-homogeneous Gaussian autoregressive random field.

Recently, the finite mixture model has attracted substantial interest for image segmentation [4]. In this framework, the observed image is regarded as an incomplete data set drawn from complete but unknown data which is a mixture of several classes. The status of each pixel is unknown and must be identified by the segmentation process. An iterative Maximum-Likelihood (ML) estimation scheme, well known as the EM (Expectation-Maximisation) algorithm, has been used successfully in all these methods. However, it is widely recognised that the EM algorithm is computationally heavy and requires good initial conditions for reliable performance [5]. Moreover, all the previous methods treat the grey level as the mixture data. For textured images in particular, it is very difficult to discriminate between classes using the first order histogram only. It will be shown that the use of vector data inspired by the MRF model can provide significant results and fast convergence for textured images, without any special constraints on the initial conditions.

The rest of the paper is organised as follows: a brief description of the classical Gaussian mixture model and its EM algorithm is given in Section 2. The vector feature extraction and the application of the EM algorithm to textured data are described in Section 3. Section 4 discusses the experimental results obtained, while Section 5 gives a summary of the work.

2. THE CLASSICAL MIXTURE MODEL

The Classical Mixture Model (CMM) [6] can be defined as follows: suppose $x_i$ is the $i$-th observation of the random variable $X$, $1 \le i \le N$, where $N$ is the number of observations. Let $f_j(x \mid \theta_j)$, $1 \le j \le L$, be a set of $L$ density functions. The density function of the random variable $x$ is modelled as a weighted sum of the $L$ density functions:

$$f(x \mid \Theta) = \sum_{j=1}^{L} \alpha_j f_j(x \mid \theta_j) \qquad (1)$$

The aim is to find the set of weights $\alpha_j$ and parameters $\theta_j$ that maximises the likelihood function $P(x)$ with regard to the given data $x_i$:

$$P(x) = \prod_{i=1}^{N} \sum_{j=1}^{L} \alpha_j f_j(x_i \mid \theta_j) \qquad (2)$$

In the Gaussian case, each parameter set $\theta_j$ consists of both the mean $\mu_j$ and the variance $\sigma_j^2$:

$$f_j(x_i \mid \theta_j) = \frac{1}{\sqrt{2\pi}\,\sigma_j} \exp\left(-\frac{(x_i - \mu_j)^2}{2\sigma_j^2}\right) \qquad (3)$$
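The mixture density and the likelihood of equations (1)-(3) can be sketched in a few lines of Python. This is a minimal illustration with our own function and variable names, not the authors' code:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Gaussian component density f_j(x | theta_j), equation (3)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def mixture_pdf(x, weights, mus, sigmas):
    """Weighted sum of L Gaussian densities, equation (1)."""
    return sum(a * gaussian_pdf(x, m, s) for a, m, s in zip(weights, mus, sigmas))

def log_likelihood(data, weights, mus, sigmas):
    """Log of the likelihood P(x) of equation (2) over the N observations."""
    return sum(math.log(mixture_pdf(x, weights, mus, sigmas)) for x in data)
```

The `weights` list plays the role of the $\alpha_j$ and must sum to one; working with the log of (2) avoids numerical underflow for large $N$.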

The observed data $x$ is regarded as incomplete with respect to the complete data $y$. The EM algorithm is an iterative procedure that starts from an initial estimate $\Theta^{(0)}$ and alternates two steps:

1: Estimation step: find the function $Q(\Theta \mid \Theta^{(p)}) = E\big[\log f(y \mid \Theta) \mid x, \Theta^{(p)}\big]$.

2: Maximisation step: find $\Theta^{(p+1)} = \arg\max_{\Theta} Q(\Theta \mid \Theta^{(p)})$.
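For a one-dimensional Gaussian mixture, one iteration of the two steps can be sketched as follows (a minimal illustration under our own naming; the posterior responsibilities correspond to equation (4) below and the parameter updates to equations (5)-(7)):

```python
import math

def em_step(data, weights, mus, sigmas):
    """One EM iteration for a 1-D Gaussian mixture.

    `data` holds the observations x_i; the three parameter lists have
    one entry per class. Returns updated (weights, mus, sigmas).
    """
    n, L = len(data), len(weights)

    def pdf(x, m, s):
        return math.exp(-(x - m) ** 2 / (2 * s ** 2)) / (math.sqrt(2 * math.pi) * s)

    # E-step: posterior responsibility of class j for each observation x_i.
    resp = []
    for x in data:
        num = [weights[j] * pdf(x, mus[j], sigmas[j]) for j in range(L)]
        z = sum(num)
        resp.append([v / z for v in num])

    # M-step: re-estimate mixing weights, means and variances.
    new_w, new_mu, new_sig = [], [], []
    for j in range(L):
        rj = sum(resp[i][j] for i in range(n))
        new_w.append(rj / n)
        mu = sum(resp[i][j] * data[i] for i in range(n)) / rj
        new_mu.append(mu)
        var = sum(resp[i][j] * (data[i] - mu) ** 2 for i in range(n)) / rj
        new_sig.append(math.sqrt(max(var, 1e-12)))
    return new_w, new_mu, new_sig
```

Iterating `em_step` until the parameter change drops below a threshold reproduces the stopping rule used later in the paper's validation procedure.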

In the Gaussian case, the EM algorithm for density function parameter estimation is given by:

$$\tau_j^{(k)}(x_i) = \frac{\alpha_j^{(k)} f_j\big(x_i \mid \theta_j^{(k)}\big)}{\sum_{l=1}^{L} \alpha_l^{(k)} f_l\big(x_i \mid \theta_l^{(k)}\big)} \qquad (4)$$

$$\alpha_j^{(k+1)} = \frac{1}{N} \sum_{i=1}^{N} \tau_j^{(k)}(x_i) \qquad (5)$$

$$\mu_j^{(k+1)} = \frac{1}{N \alpha_j^{(k+1)}} \sum_{i=1}^{N} \tau_j^{(k)}(x_i)\, x_i \qquad (6)$$

$$\big(\sigma_j^{(k+1)}\big)^2 = \frac{1}{N \alpha_j^{(k+1)}} \sum_{i=1}^{N} \tau_j^{(k)}(x_i) \big[x_i - \mu_j^{(k+1)}\big]^2 \qquad (7)$$

The updated parameters are obtained by substituting (4) into (5), (6) and (7).

3. OUTLINE OF THE PROPOSED METHOD

The proposed algorithm consists of two steps (Fig. 2). The first is the computation of the feature vector for every pixel; the second applies the EM algorithm to estimate the mean and the variance of the features for every texture in the image. For this purpose the number of classes is supposed known. Finally, a Bayesian classification rule is applied to attribute a label to each pixel by defining a likelihood function which computes the probability that a given pixel belongs to a given class.

The features used to discriminate textures are represented by a vector

$$K(x) = \big(k_1(x), \ldots, k_{p+1}(x)\big)$$

of $(p+1)$ dimensions. The first $p$ elements correspond to the $p$ clique types used in the generalised Ising model [7], while the $(p+1)$-th element is the average grey level of the neighbourhood pixels surrounding the pixel at hand. For a given window size $w$, these elements are computed as follows:

$$k_i(x) = \begin{cases} \displaystyle\sum_{c \in C_i} \Phi_c(x) & 1 \le i \le p \\[2ex] \displaystyle\frac{1}{w^2} \sum_{x \in w} x & i = p+1 \end{cases} \qquad (8)$$

$$\Phi_c = \begin{cases} +1 & \text{if } x_r = x_s \\ -1 & \text{if } x_r \ne x_s \end{cases} \qquad (9)$$

Figure 1 shows the 4 clique types of the Ising model of order 2.

[Fig. 1. The four clique types (i = 1, ..., 4) of the second order Ising model.]

The role of the first $p$ elements of $K(x)$ is to provide a good estimate of the Markov features, while the $(p+1)$-th provides an indication of the average grey level of the texture. Strictly speaking, the features of the Markov model are the $B_i(x)$ elements, which are generally used in related work [7,8,9]. In that case the algorithm has to estimate the $B_i(x)$ by an optimisation process, and this task must be carried out before the labelling step. By using the $k_i(x)$ instead, we avoid the estimation step of the Markov features because the $k_i(x)$ are computed directly from the image. Furthermore, the clustering of the pixels in the $B$ space corresponds to their clustering in the $K$ space when a small analysis window is used. The last element of the vector corresponds to the mean grey level of the pixels in the window around the pixel at hand. This adds another level of differentiation between textures in the clustering process, for instance when two different textures share the same MRF features: it has been shown that different textures can have the same MRF features and yet be easily discriminated by the human eye. Furthermore, this last feature allows the segmentation of both textured and non-textured images.
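The computation of $K(x)$ from equations (8) and (9) can be sketched as below. This is our own illustrative implementation: the window handling at image borders and the representation of the image as a list of rows are assumptions, and the equality test on grey levels presumes the quantised labels of the Ising model:

```python
def clique_features(img, r, c, w=8, p=4):
    """Feature vector K(x) of equations (8)-(9) for the pixel at (r, c).

    The first p = 4 elements sum the clique potentials (+1 for equal,
    -1 for different neighbours) over the horizontal, vertical and two
    diagonal pair cliques of the second order Ising model inside a
    w-by-w window; the (p+1)-th element is the mean grey level.
    """
    offsets = [(0, 1), (1, 0), (1, 1), (1, -1)]  # the 4 clique types (Fig. 1)
    h = w // 2
    rows, cols = len(img), len(img[0])
    feats = [0.0] * p
    total, count = 0.0, 0
    for i in range(max(0, r - h), min(rows, r + h)):
        for j in range(max(0, c - h), min(cols, c + h)):
            total += img[i][j]
            count += 1
            for t, (di, dj) in enumerate(offsets):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    feats[t] += 1.0 if img[i][j] == img[ni][nj] else -1.0
    feats.append(total / count)  # (p+1)-th element: average grey level
    return feats
```

Because these features are read directly off the image, no optimisation over the $B_i(x)$ couplings is needed before labelling, which is the point made in the paragraph above.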

[Fig. 2. Scheme of the method: the grey level image passes through feature extraction, producing $k_1(x), \ldots, k_{p+1}(x)$; each feature feeds a parallel EM step followed by a validation procedure; Bayesian classification then yields the label image.]

From the original image, the feature vector $K(x_i)$ is computed for each pixel $x_i$. The EM algorithm is then applied to each element $k_j(x_i)$ ($1 \le j \le p+1$) over the set of pixels. After each EM step and for every $k_j$, an estimate of the parameter set $\theta_{j,l} = \{\alpha_{j,l}, \mu_{j,l}, \sigma_{j,l}\}$ is computed for each class $l$. The number of classes $L$ is supposed known and represents the number of textures in the image. A validation procedure is also added to aid the EM process: the procedure is terminated when the difference between the estimated parameters of two consecutive EM steps falls below a fixed threshold $\varepsilon$.

After estimating the Gaussian parameters which correspond to each region, a labelling process is required in order to attribute a label to each pixel. This is carried out using a Bayesian classification method. To this end, we define a likelihood function as follows:

$$f(x_i \mid l) = \prod_{j=1}^{p+1} P_j(x_i \mid l) \qquad (10)$$

where:

$$P_j(x_i \mid l) = \frac{\alpha_{j,l}\, f_j(x_i \mid l)}{\sum_{l=1}^{L} \alpha_{j,l}\, f_j(x_i \mid l)} \qquad (11)$$

$$f_j(x_i \mid l) = \frac{1}{\sqrt{2\pi}\,\sigma_{j,l}} \exp\left(-\frac{\big(k_j(x_i) - \mu_{j,l}\big)^2}{2(\sigma_{j,l})^2}\right) \qquad (12)$$

The likelihood of region $l$ is given by (2). The label $L(x_i)$ is attributed to the class giving the maximum likelihood function:

$$L(x_i) = \arg\max_{l} \big( f(x_i \mid l) \big) \qquad (13)$$
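The labelling rule of equations (10)-(13) can be sketched as follows. The data layout (`params[l][j]` as a `(weight, mean, sigma)` triple per class and per feature) is our own assumption, and the product of (10) is computed as a sum of logs for numerical stability:

```python
import math

def classify(k_vec, params):
    """Bayesian labelling rule of equations (10)-(13).

    `k_vec` is the feature vector (k_1, ..., k_{p+1}) of one pixel;
    `params[l][j]` is a (weight, mean, sigma) triple for class l and
    feature j. Returns the class index l maximising f(x_i | l).
    """
    n_classes = len(params)

    def log_pdf(x, mu, sg):  # Gaussian density of eq. (12), in log form
        return -((x - mu) ** 2) / (2 * sg ** 2) - math.log(math.sqrt(2 * math.pi) * sg)

    best, best_score = 0, -math.inf
    for l in range(n_classes):
        score = 0.0
        for j, x in enumerate(k_vec):
            # eq. (11): weight-normalised per-feature likelihood
            num = params[l][j][0] * math.exp(log_pdf(x, params[l][j][1], params[l][j][2]))
            den = sum(params[c][j][0] * math.exp(log_pdf(x, params[c][j][1], params[c][j][2]))
                      for c in range(n_classes))
            score += math.log(max(num / den, 1e-300))  # eq. (10) as a sum of logs
        if score > best_score:
            best_score, best = score, l
    return best
```

Applying `classify` to every pixel's feature vector produces the label image of Fig. 2.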

4. RESULTS

Experiments have been carried out on grey level images. Due to its simplicity, the second order Ising model has been used for computing the clique functions; consequently, the number of clique types is p = 4. A preliminary analysis was carried out for different window sizes (w = 4, 5, ..., 32) and it was found that w = 8 provided the best trade-off between accuracy and computation time. The method has been tested on a large number of images containing synthetic and natural textures. The synthetic textures were generated using the Gibbs sampler [8].
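A rough sketch of how such synthetic textures can be generated with a Gibbs sampler is given below. The uniform coupling `beta`, the number of sweeps and the periodic boundary are illustrative choices of ours; the paper's exact coupling vector $B_i(x)$ is not reproduced:

```python
import math
import random

def gibbs_ising_texture(size=32, beta=1.0, labels=2, sweeps=30, seed=0):
    """Crude Gibbs sampler for a second order Ising/Potts field, in the
    spirit of the synthetic textures of [8]."""
    rng = random.Random(seed)
    img = [[rng.randrange(labels) for _ in range(size)] for _ in range(size)]
    # 8-neighbourhood of the second order model (periodic boundary).
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (1, 1), (-1, 1), (1, -1)]
    for _ in range(sweeps):
        for r in range(size):
            for c in range(size):
                # Conditional energy of each candidate label given the neighbours.
                energies = []
                for v in range(labels):
                    agree = sum(1 if img[(r + dr) % size][(c + dc) % size] == v else -1
                                for dr, dc in nbrs)
                    energies.append(beta * agree)
                mx = max(energies)
                w = [math.exp(e - mx) for e in energies]
                u, acc = rng.random() * sum(w), 0.0
                for v, wv in enumerate(w):
                    acc += wv
                    if u <= acc:
                        img[r][c] = v
                        break
    return img
```

Larger `beta` values yield smoother, more coherent texture patches; varying the coupling per clique type would give the directional textures used in the experiments.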

To gauge the effectiveness of the algorithm, a comparison with another MRF-based method has been performed. This method, called Selectionist Relaxation (SR) [10], is similar to the proposed method in that it uses a vector of features and the second order Ising model. It also operates by estimating the texture feature vectors through maximising a likelihood function, using an evolutionary approach based on a distributed genetic algorithm for both optimisation and label attribution. Although this method showed good results, it appeared not to be robust, owing to the random nature of the genetic algorithm.

Fig. 3 shows an example of a synthetic textured image containing 3 different textures generated by the Gibbs sampler for different values of the vector $B_i(x)$. The textures are spatially arranged according to a simple geometry. It can be seen that the different regions are clearly segmented using our technique, despite the interference and the thinness of some regions, whereas the SR method fails to do so. Fig. 4 shows another example with natural textures taken from the well-known Brodatz texture album. The proposed algorithm segmented the image with good accuracy after only 7 EM steps, while SR failed even with a large number of genetic generations.

Fig. 3. (a) Original textured image. (b) Image treated by the proposed algorithm. (c) Image treated by SR.

Fig. 4. (a) Original textured image. (b) Image treated by the proposed algorithm. (c) Image treated by SR.

5. CONCLUSION

A novel method has been presented for the unsupervised segmentation of textured images. The main contribution lies in applying a parallel EM algorithm to vector data and defining a likelihood function for a robust Bayesian classification. The method has shown good behaviour for both synthetic and natural textures; however, the complexity of some natural textures cannot be efficiently captured by the simple second order model. Better results could be obtained by using higher order models, such as the fifth order model, at the expense of computing 12 clique types and hence using a larger vector size.

6. REFERENCES

[1] R. C. Dubes and A. K. Jain, "Random Field Models in Image Analysis," J. Applied Statistics, vol. 16, no. 2, pp. 131-164, 1989.

[2] B. S. Manjunath and R. Chellappa, "Unsupervised Texture Segmentation Using Markov Random Field Models," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 13, no. 5, pp. 478-482, 1991.

[3] C. Bouman and B. Liu, "Multiple Resolution Segmentation of Textured Images," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 13, pp. 99-113, 1991.

[4] S. Sanjay-Gopal and T. J. Hebert, "Bayesian Pixel Classification Using Spatially Variant Finite Mixtures and the Generalized EM Algorithm," IEEE Trans. on Image Processing, vol. 7, pp. 1014-1028, 1998.

[5] J.-K. Fwu and P. M. Djurić, "EM Algorithm Initialised by a Tree Structure Scheme," IEEE Trans. on Image Processing, vol. 6, 1997.

[6] A. Dempster, N. Laird and D. Rubin, "Maximum Likelihood from Incomplete Data via the EM Algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, 1977.

[7] G. L. Gimel'farb and A. V. Zalesny, "Probabilistic Models of Digital Region Maps Based on Markov Random Fields with Short and Long-Range Interaction," Pattern Recognition Letters, vol. 14, pp. 789-797, 1993.

[8] S. Geman and D. Geman, "Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 6, pp. 721-741, 1984.

[9] H. Derin and H. Elliott, "Modeling and Segmentation of Noisy and Textured Images Using Gibbs Random Fields," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 9, pp. 39-55, 1987.

[10] P. Andrey and P. Tarroux, "Unsupervised Segmentation of Markov Random Field Modeled Textured Images Using Selectionist Relaxation," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, pp. 252-262, 1998.
