
PCA-BASED NOISE REMOVAL TECHNIQUES USING THE INFORMATION EXTRACTED FROM NOISY EXAMPLES


Luminita State (1), Catalina Cocianu (2), Panayiotis Vlamos (3), Viorica Stefanescu (4)

(1) Dept. of Computer Science, University of Pitesti, Pitesti, ROMANIA, radus@sunu.rnc.ro, Caderea Bastiliei #45, Bucuresti 1, ROMANIA
(2) Dept. of Computer Science, Academy of Economic Studies, Bucharest, ROMANIA, ccocianu@ase.ro, Calea Dorobantilor #15-17, Bucuresti 1, ROMANIA
(3) Ionian University, Corfu, GREECE, vlamos@vlamos.com
(4) Dept. of Mathematics, Academy of Economic Studies, Bucharest, ROMANIA, Calea Dorobantilor #15-17, Bucuresti 1, ROMANIA
Abstract
The research reported in this paper aims at the development of a methodology for noise removal in image restoration. The noise removal technique is based on transforming the information extracted from a series of noisy images. A variant of the CSPCA noise removal algorithm previously developed by the same authors is adapted, yielding an adaptive learning technique. Comments concerning the experimental results are presented in the final section of the paper.
Keywords: PCA, wide sense stationary stochastic process, noise removal, image compression, image
reconstruction
1. Introduction
An image may be degraded because the gray values of individual pixels may be altered, or it may be
distorted because the position of individual pixels may be shifted away from their correct position. The
second case is the subject of geometric restoration.
Noise is any undesired information that contaminates an image and appears in images from a variety
of sources. The digital image acquisition process, which converts an optical image into a continuous
electrical signal that is then sampled, is the primary process by which noise appears in digital images.
The advantages of using principal components stem mainly from the fact that the information conveyed by each band is maximal for the number of bits used, because the bands are uncorrelated and no information contained in one band can be predicted from the knowledge of the other bands.
The image restoration tasks mainly correspond to the process of finding an approximation to the
overall degradation process and finding the appropriate inverse process to estimate the original
unknown image.
Typically, the noise can be modeled by Gaussian, uniform or salt and pepper distribution. The
Gaussian model is most often used to model natural noise processes, such as those occurring from
electronic noise in the image acquisition system. The salt and pepper type noise is typically caused by malfunctioning pixel elements in the camera sensors, faulty memory locations, or timing errors in the digitization process.
Uniform noise is useful because it can be used to generate any other type of noise distribution, and it is often used to degrade images for the evaluation of image restoration algorithms because it provides the most unbiased or neutral noise model. In addition to the Gaussian, other noise models based on exponential distributions are useful for modeling noise in certain types of digital images; for instance, the Rayleigh distribution is commonly used to model noise-contaminated radar range and velocity images.
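For readers who wish to reproduce such a degradation setting, the following is a minimal NumPy sketch of the three noise models mentioned above; the function names and parameter defaults are ours and purely illustrative.

```python
import numpy as np

def add_gaussian_noise(img, sigma=0.1, seed=0):
    """Additive zero-mean Gaussian noise, the model assumed in Section 2."""
    rng = np.random.default_rng(seed)
    return img + rng.normal(0.0, sigma, img.shape)

def add_uniform_noise(img, amplitude=0.2, seed=0):
    """Additive zero-mean uniform noise on [-amplitude/2, amplitude/2]."""
    rng = np.random.default_rng(seed)
    return img + rng.uniform(-amplitude / 2, amplitude / 2, img.shape)

def add_salt_and_pepper(img, density=0.05, seed=0):
    """Set a fraction `density` of pixels to the extreme gray values."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < density / 2] = img.min()        # pepper
    noisy[mask > 1.0 - density / 2] = img.max()  # salt
    return noisy
```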
The research reported in this paper aims at the development of a methodology for noise removal in image restoration. The noise removal technique is based on transforming the information extracted from a series of noisy images. A variant of the CSPCA noise removal algorithm previously developed by the same authors is adapted, yielding an adaptive learning technique.
2. The CSPCA Noise Removal Algorithm
The working assumptions of our model can be briefly described as follows (State, 2005). The original image $X_0$ is modeled as a sample of a wide sense stationary stochastic process whose first and second order moments $\mu = E(X_0(t))$ and $\Sigma = \mathrm{Cov}(X_0(t), X_0(t))$ are known. We consider that the noise $\eta = (\eta_t, t \ge 0)$ is represented by a wide sense stationary stochastic process, where, for any $t \ge 0$, $\eta_t$ is distributed $N(0, \sigma^2 I_n)$. The resulting noisy image is

$$X = X_0 + \eta. \tag{1}$$
The experiments are performed on data represented by monochrome images decomposed into blocks of size $8 \times 8$ and then linearized. The data are preprocessed in order to obtain a normalized, centered representation; the centering and normalization carried out in the preprocessing step justify the hypothesis that $0 < \sigma^2 < 1$. The aim is to obtain an approximation of $X_0$ from the input $X$ using the knowledge of $\mu$, $\Sigma$ and $\sigma^2$.
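As an illustration of this preprocessing step, the sketch below decomposes a monochrome image into non-overlapping $8 \times 8$ blocks, linearizes each block into an $n = 64$ dimensional vector and centers the data; the helper names and the scaling to [0, 1] are our assumptions, not prescribed by the paper.

```python
import numpy as np

def image_to_blocks(img, block=8):
    """Split a monochrome image into non-overlapping block x block patches and
    linearize each patch into a row vector of dimension n = block * block."""
    h, w = img.shape
    h, w = h - h % block, w - w % block          # drop incomplete border blocks
    return (img[:h, :w]
            .reshape(h // block, block, w // block, block)
            .swapaxes(1, 2)
            .reshape(-1, block * block))

def normalize_and_center(patches):
    """Scale gray levels to [0, 1] and center the data, which makes the working
    hypothesis 0 < sigma^2 < 1 reasonable for typical noise levels."""
    patches = patches.astype(float)
    if patches.max() > 1.0:
        patches = patches / 255.0
    mu = patches.mean(axis=0)
    return patches - mu, mu
```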
For the centered data,

$$Y = X - E(X) = X_0 - \mu + \eta, \tag{2}$$

straightforward computations give

$$\mathrm{Cov}\!\left(Y, Y^T\right) = \Sigma + \sigma^2 I_n. \tag{3}$$

Let $A = \Phi \Lambda^{-\frac{1}{2}}$, where $\Phi$ is the orthonormal matrix having as columns the unit eigenvectors of $\Sigma$, and $\Lambda = \mathrm{diag}(\gamma_1, \gamma_2, \ldots, \gamma_n)$ is the diagonal matrix whose diagonal entries are given by

$$\gamma_i = 1 + \frac{\sigma^2}{\lambda_i}, \tag{4}$$

where $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of $\Sigma$. Then the columns of $A$ are eigenvectors of the matrix $\Sigma^{-1}(\Sigma + \sigma^2 I_n)$ and their corresponding eigenvalues are $\gamma_1, \gamma_2, \ldots, \gamma_n$.
Let $Z$ be given by the linear transform of matrix $A^T$. Then

$$Z = A^T Y = A^T (X_0 - \mu) + A^T \eta \tag{5}$$

and

$$\mathrm{Cov}\!\left(A^T \eta, (A^T \eta)^T\right) = \sigma^2 \Lambda^{-1}. \tag{6}$$

Since $\eta_t$ is distributed $N(0, \sigma^2 I_n)$, the vector $A^T \eta$ is distributed $N(0, \sigma^2 \Lambda^{-1})$ and consequently the components of the transformed noise $A^T \eta$ are independent.
Using the shrinkage function

$$g(u) = \mathrm{sign}(u)\,\max\!\left(0, |u| - \sigma^2\right), \tag{7}$$

applied componentwise to $Z$, we get a good approximation $\hat{Z}_0$ of

$$Z_0 = A^T (X_0 - \mu). \tag{8}$$

Because $A^T A = \Lambda^{-1}$, that is $A \Lambda A^T = I_n$,

$$\tilde{X}_0 = \mu + A \Lambda \hat{Z}_0 \tag{9}$$

can be taken as a restored version of $X_0$.
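The whole restoration scheme of this section fits in a few lines of NumPy. The sketch below assumes the moments $\mu$, $\Sigma$ and the noise variance $\sigma^2$ are known and follows equations (2)-(9) as reconstructed above; it is an illustration of our reading of the scheme, not the authors' original implementation.

```python
import numpy as np

def cspca_restore(X, mu, Sigma, sigma2):
    """Shrinkage-based restoration of a noisy linearized image X (one block),
    following equations (2)-(9)."""
    lam, Phi = np.linalg.eigh(Sigma)              # eigenvalues / unit eigenvectors of Sigma
    gamma = 1.0 + sigma2 / np.maximum(lam, 1e-8)  # diagonal of Lambda, eq. (4), guarded
    A = Phi / np.sqrt(gamma)                      # A = Phi Lambda^{-1/2}
    Y = X - mu                                    # centered data, eq. (2)
    Z = A.T @ Y                                   # eq. (5)
    Z0 = np.sign(Z) * np.maximum(0.0, np.abs(Z) - sigma2)   # shrinkage, eq. (7)
    return mu + A @ (gamma * Z0)                  # eq. (9): mu + A Lambda Z0_hat
```

Applied to each linearized block of a noisy image, the function returns the corresponding restored block; the shrinkage threshold $\sigma^2$ in eq. (7) is taken literally from our reconstruction and may need tuning in practice.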


We combine the above described restoration scheme with a compression/decompression scheme, where the noise removal process is developed in the feature space. Let $\psi_1, \psi_2, \ldots, \psi_n$ be the unit eigenvectors of $\Sigma$ and $\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_n$ their corresponding eigenvalues. For any $1 \le m \le n$, we denote by $\Phi^m = (\psi_1, \psi_2, \ldots, \psi_m)$ and let $\Lambda_m = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_m)$. The LMS optimal compression/decompression scheme is given by the following diagram:

$$Y \xrightarrow{\;(\Phi^m)^T\;} F = (\Phi^m)^T Y \xrightarrow{\;\Phi^m\;} \tilde{Y},$$

where $\tilde{Y} = \Phi^m (\Phi^m)^T Y$.
We include in the m-dimensional feature space of $F$ a noise removal module implementing the CSPCA algorithm, that is, we consider the processing steps described by the following diagrams.
1. compression step

$$X \longrightarrow Y = X - \mu \longrightarrow F = (\Lambda_m)^{-\frac{1}{2}} (\Phi^m)^T Y$$

2. noise removal step using CSPCA

$$F \xrightarrow{\;\text{CSPCA}\;} \hat{F}_0$$

3. decompression step

$$\hat{F}_0 \xrightarrow{\;\Phi^m (\Lambda_m)^{\frac{1}{2}}\;} \hat{X},$$

where $\hat{X}$ is the restored image,

$$\hat{X} = \mu + \Phi^m (\Lambda_m)^{\frac{1}{2}} \hat{F}_0. \tag{10}$$
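Under our reading of the diagrams above, the compressed-domain variant can be sketched as follows: the m-dimensional feature space is spanned by the m leading eigenvectors of $\Sigma$, and the shrinkage of eq. (7) is reused in that space. The function name and the exact scaling are assumptions consistent with eq. (10), not a verbatim transcription of the authors' code.

```python
import numpy as np

def cspca_compressed_restore(X, mu, Sigma, sigma2, m):
    """Compression, CSPCA noise removal in the m-dimensional feature space,
    and decompression according to eq. (10) as reconstructed here."""
    lam, Phi = np.linalg.eigh(Sigma)
    order = np.argsort(lam)[::-1]                 # lambda_1 >= ... >= lambda_m
    lam, Phi = lam[order][:m], Phi[:, order[:m]]
    Y = X - mu                                    # 1. compression step
    F = (Phi / np.sqrt(lam)).T @ Y                # F = Lambda_m^{-1/2} (Phi^m)^T Y
    F0 = np.sign(F) * np.maximum(0.0, np.abs(F) - sigma2)   # 2. CSPCA shrinkage
    return mu + Phi @ (np.sqrt(lam) * F0)         # 3. decompression, eq. (10)
```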

3. The Modified CSPCA to Provide Generalization Properties


The aim of this section is to propose a modified version of CSPCA based entirely on examples and to
investigate its generalization capacities.
Let $X_1, X_2, \ldots, X_N$ be a series of n-dimensional noisy images. The sample covariance matrix is

$$\hat{\Sigma}_N = \frac{1}{N-1} \sum_{i=1}^{N} (X_i - \hat{\mu}_N)(X_i - \hat{\mu}_N)^T, \quad \text{where } \hat{\mu}_N = \frac{1}{N} \sum_{i=1}^{N} X_i.$$

We denote by $\lambda_1^N \ge \lambda_2^N \ge \ldots \ge \lambda_n^N$ the eigenvalues and by $\psi_1^N, \ldots, \psi_n^N$ the corresponding orthonormal eigenvectors of $\hat{\Sigma}_N$.


If $X_{N+1}$ is a new noisy image, then, for the series $X_1, X_2, \ldots, X_N, X_{N+1}$, we get

$$\hat{\Sigma}_{N+1} = \hat{\Sigma}_N + \frac{1}{N+1} (X_{N+1} - \hat{\mu}_N)(X_{N+1} - \hat{\mu}_N)^T - \frac{1}{N} \hat{\Sigma}_N. \tag{11}$$
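Equation (11) translates directly into a recursive update of the system memory. The sketch below also updates the sample mean with the standard recursion (not written out in the text) and returns the perturbation $\Delta\hat{\Sigma}_N$ used by the lemma that follows; names are illustrative.

```python
import numpy as np

def update_moments(mu_N, Sigma_N, X_new, N):
    """Recursive update of the sample mean and covariance when the (N+1)-th
    noisy image X_new is added to the database, cf. eq. (11)."""
    d = X_new - mu_N
    mu_next = mu_N + d / (N + 1)                     # standard mean recursion
    delta = np.outer(d, d) / (N + 1) - Sigma_N / N   # Delta Sigma_N, cf. eq. (13)
    return mu_next, Sigma_N + delta, delta
```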

Lemma. In case the eigenvalues of $\hat{\Sigma}_N$ are pairwise distinct, the following first order approximations hold:

$$\lambda_i^{N+1} = \lambda_i^N + (\psi_i^N)^T \Delta\hat{\Sigma}_N\, \psi_i^N = (\psi_i^N)^T \hat{\Sigma}_{N+1}\, \psi_i^N, \qquad
\psi_i^{N+1} = \psi_i^N + \sum_{\substack{j=1 \\ j \ne i}}^{n} \frac{(\psi_j^N)^T \Delta\hat{\Sigma}_N\, \psi_i^N}{\lambda_i^N - \lambda_j^N}\, \psi_j^N, \tag{12}$$

where $\Delta\hat{\Sigma}_N = \hat{\Sigma}_{N+1} - \hat{\Sigma}_N$.

Proof. Using perturbation theory arguments, let $\Delta\hat{\Sigma}_N = \hat{\Sigma}_{N+1} - \hat{\Sigma}_N$ and, for any $1 \le i \le n$, write $\lambda_i^{N+1} = \lambda_i^N + \Delta\lambda_i^N$ and $\psi_i^{N+1} = \psi_i^N + \Delta\psi_i^N$. Then,

$$\Delta\hat{\Sigma}_N = \frac{1}{N+1} (X_{N+1} - \hat{\mu}_N)(X_{N+1} - \hat{\mu}_N)^T - \frac{1}{N} \hat{\Sigma}_N \tag{13}$$

$$(\hat{\Sigma}_N + \Delta\hat{\Sigma}_N)(\psi_i^N + \Delta\psi_i^N) = (\lambda_i^N + \Delta\lambda_i^N)(\psi_i^N + \Delta\psi_i^N). \tag{14}$$

Using first order approximations, from (14) we get

$$\hat{\Sigma}_N \psi_i^N + \Delta\hat{\Sigma}_N\, \psi_i^N + \hat{\Sigma}_N\, \Delta\psi_i^N \approx \lambda_i^N \psi_i^N + \Delta\lambda_i^N\, \psi_i^N + \lambda_i^N\, \Delta\psi_i^N, \tag{15}$$

hence, multiplying on the left by $(\psi_i^N)^T$,

$$(\psi_i^N)^T \Delta\hat{\Sigma}_N\, \psi_i^N + (\psi_i^N)^T \hat{\Sigma}_N\, \Delta\psi_i^N \approx \Delta\lambda_i^N + \lambda_i^N (\psi_i^N)^T \Delta\psi_i^N. \tag{16}$$

Using $(\psi_i^N)^T \hat{\Sigma}_N = \lambda_i^N (\psi_i^N)^T$, we obtain

$$\Delta\lambda_i^N \approx (\psi_i^N)^T \Delta\hat{\Sigma}_N\, \psi_i^N, \tag{17}$$

that is,

$$\lambda_i^{N+1} = \lambda_i^N + (\psi_i^N)^T \Delta\hat{\Sigma}_N\, \psi_i^N = (\psi_i^N)^T \hat{\Sigma}_{N+1}\, \psi_i^N. \tag{18}$$

The first order approximations of the orthonormal eigenvectors of $\hat{\Sigma}_{N+1}$ can be derived using the expansion of each vector $\Delta\psi_i^N$ in the basis represented by the orthonormal eigenvectors of $\hat{\Sigma}_N$,

$$\Delta\psi_i^N = \sum_{j=1}^{n} b_{i,j}\, \psi_j^N, \tag{19}$$

where

$$b_{i,j} = (\psi_j^N)^T \Delta\psi_i^N. \tag{20}$$

Using the orthonormality, we get

$$1 = \left\| \psi_i^N + \Delta\psi_i^N \right\|^2 \approx 1 + 2 (\psi_i^N)^T \Delta\psi_i^N, \quad \text{that is,} \quad b_{i,i} = (\psi_i^N)^T \Delta\psi_i^N \approx 0. \tag{21}$$

Using (14), the approximation

$$\hat{\Sigma}_N\, \Delta\psi_i^N + \Delta\hat{\Sigma}_N\, \psi_i^N \approx \lambda_i^N\, \Delta\psi_i^N + \Delta\lambda_i^N\, \psi_i^N \tag{22}$$

holds for each $1 \le i \le n$. For $1 \le j \ne i \le n$, multiplying (22) on the left by $(\psi_j^N)^T$ we obtain

$$(\psi_j^N)^T \hat{\Sigma}_N\, \Delta\psi_i^N + (\psi_j^N)^T \Delta\hat{\Sigma}_N\, \psi_i^N \approx \lambda_i^N (\psi_j^N)^T \Delta\psi_i^N + \Delta\lambda_i^N (\psi_j^N)^T \psi_i^N \tag{23}$$

and, since $(\psi_j^N)^T \hat{\Sigma}_N = \lambda_j^N (\psi_j^N)^T$ and $(\psi_j^N)^T \psi_i^N = 0$,

$$\lambda_j^N (\psi_j^N)^T \Delta\psi_i^N + (\psi_j^N)^T \Delta\hat{\Sigma}_N\, \psi_i^N \approx \lambda_i^N (\psi_j^N)^T \Delta\psi_i^N. \tag{24}$$

From (24) we get

$$b_{i,j} = (\psi_j^N)^T \Delta\psi_i^N \approx \frac{(\psi_j^N)^T \Delta\hat{\Sigma}_N\, \psi_i^N}{\lambda_i^N - \lambda_j^N}, \quad j \ne i. \tag{25}$$

Consequently, using (19), (21) and (25), the first order approximations of the eigenvectors of $\hat{\Sigma}_{N+1}$ are

$$\psi_i^{N+1} = \psi_i^N + \Delta\psi_i^N = \psi_i^N + \sum_{\substack{j=1 \\ j \ne i}}^{n} \frac{(\psi_j^N)^T \Delta\hat{\Sigma}_N\, \psi_i^N}{\lambda_i^N - \lambda_j^N}\, \psi_j^N. \tag{28}$$
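The first order approximations (12) and (28) can be sketched as follows; the routine assumes pairwise distinct eigenvalues, as required by the lemma, and takes the perturbation delta = $\hat{\Sigma}_{N+1} - \hat{\Sigma}_N$ produced by the covariance update above.

```python
import numpy as np

def update_eigensystem(lam, Psi, delta):
    """First order update of the eigenvalues lam and orthonormal eigenvectors
    Psi (stored as columns) of the sample covariance, eqs. (12) and (28)."""
    proj = Psi.T @ delta @ Psi                    # proj[j, i] = psi_j^T delta psi_i
    lam_next = lam + np.diag(proj)                # eq. (12), first order eigenvalues
    Psi_next = Psi.copy()
    for i in range(lam.size):
        diff = lam[i] - lam                       # lambda_i^N - lambda_j^N
        diff[i] = 1.0                             # placeholder; b_{i,i} = 0 by eq. (21)
        coeff = proj[:, i] / diff                 # b_{i,j}, eq. (25)
        coeff[i] = 0.0
        Psi_next[:, i] = Psi[:, i] + Psi @ coeff  # eq. (28)
    return lam_next, Psi_next
```

Since the update is only first order, the columns of Psi_next are only approximately orthonormal; re-orthonormalizing them occasionally, or recomputing a full eigendecomposition from time to time, is a natural safeguard.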

The resulting image restoration algorithm is described as follows.

Initialization step: A set of N noisy images, $X_1, X_2, \ldots, X_N$, is given.
Step 1. Compute $\hat{\mu}_N$, $\hat{\Sigma}_N$, $\lambda_i^N$, $\psi_i^N$, $1 \le i \le n$.
Step 2. Get a new noisy image $X_{N+1}$; add $X_{N+1}$ to the image database.
Step 3. Compute the new system memory $\hat{\mu}_{N+1}$, $\hat{\Sigma}_{N+1}$, $\lambda_i^{N+1}$, $\psi_i^{N+1}$ using the first order approximations.
Step 4. Compute the restored variant of $X_{N+1}$ using CSPCA in terms of $\hat{\mu}_{N+1}$, $\hat{\Sigma}_{N+1}$, $\lambda_i^{N+1}$, $\psi_i^{N+1}$.
Step 5. If there are more images, go to Step 2.
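Putting the pieces together, a minimal sketch of Steps 1-5 could look as follows. It reuses the helpers update_moments and update_eigensystem sketched earlier, assumes the noise variance $\sigma^2$ is known, and splits the input series into an initial database and a stream of new images purely for illustration.

```python
import numpy as np

def adaptive_cspca(images, sigma2, n_init=20):
    """Adaptive restoration loop (Steps 1-5): build the system memory from the
    first n_init images, then restore each new image with CSPCA and refresh the
    memory with the first order approximations instead of a full eigendecomposition."""
    X = np.asarray(images, dtype=float)
    N = min(n_init, X.shape[0] - 1)
    mu = X[:N].mean(axis=0)
    Sigma = np.cov(X[:N], rowvar=False)
    lam, Psi = np.linalg.eigh(Sigma)              # Step 1: initial system memory
    restored = []
    for X_new in X[N:]:                           # Step 2: a new noisy image arrives
        mu, Sigma, delta = update_moments(mu, Sigma, X_new, N)   # eq. (11)
        lam, Psi = update_eigensystem(lam, Psi, delta)           # Step 3
        gamma = 1.0 + sigma2 / np.maximum(lam, 1e-8)   # Step 4: CSPCA, guarded division
        A = Psi / np.sqrt(gamma)
        Z = A.T @ (X_new - mu)
        Z0 = np.sign(Z) * np.maximum(0.0, np.abs(Z) - sigma2)
        restored.append(mu + A @ (gamma * Z0))
        N += 1                                    # Step 5: repeat while images remain
    return np.array(restored)
```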
4. Comments and Experimental Results
The main aim of the tests was to investigate the generalization capacities of the algorithm proposed in Section 3. The tests were performed on data represented by monochrome images decomposed into blocks of size $8 \times 8$ and then linearized. The data were preprocessed in order to obtain normalized, centered representations.
Most of the tests were performed on samples of volume 20. The images of each series shared the same statistical properties. Some of the results are presented below. In Figure 1 only 12 of the input samples are presented, and their cleaned variants are depicted in Figure 2. New noisy images were tested against the memory represented by the components determined according to the above methodology. Figure 3 shows the initial noisy image and its cleaned version, respectively.
Further work will be directed toward improving the performance of the proposed algorithm. In addition, more tests will be performed on larger samples.

Figure 1. A set of noisy samples

Figure 2. The corresponding results obtained using the CSPCA

Figure 3. The image obtained using the proposed noise removal algorithm. The left part
represents the noisy image and the right one is its restored version
REFERENCES
Bannour, S., & Azimi-Sadjadi, M.R. (1995). Principal Component Extraction Using Recursive Least Squares Learning. IEEE Transactions on Neural Networks, vol. 6, no. 2
Baxes, G.A. (1994). Digital Image Processing: Principles and Applications. New York: Wiley
Castleman, K.R. (1996). Digital Image Processing. Englewood Cliffs, NJ: Prentice Hall
Chatterjee, C., Roychowdhury, V.P., & Chong, E.K.P. (1998). On Relative Convergence Properties of Principal Component Analysis Algorithms. IEEE Transactions on Neural Networks, vol. 9, no. 2
Cocianu, C., State, L., Stefanescu, V., & Vlamos, P. (2004). On the Efficiency of a Certain Class of Noise Removal Algorithms in Solving Image Processing Tasks. In: Proceedings of ICINCO 2004, Setubal, Portugal
Cocianu, C., State, L., Vlamos, P., & Stefanescu, V. (2005). PCA Based Shrinkage Attempt to Noise Removal. In: Proceedings of the 35th ICC&IE, Istanbul, Turkey
Diamantaras, K.I., & Kung, S.Y. (1996). Principal Component Neural Networks: Theory and Applications. John Wiley & Sons
Gonzalez, R., & Woods, R. (1993). Digital Image Processing. Addison-Wesley
Haykin, S. (1999). Neural Networks: A Comprehensive Foundation. Prentice Hall, Inc.
Hyvarinen, A., Karhunen, J., & Oja, E. (2001). Independent Component Analysis. John Wiley & Sons
Hyvarinen, A., Hoyer, P., & Oja, E. (1999). Image Denoising by Sparse Code Shrinkage. www.cis.hut.fi/projects/ica, November 1999
Karhunen, J., & Oja, E. (1982). New Methods for Stochastic Approximations of Truncated Karhunen-Loeve Expansions. In: Proceedings of the 6th International Conference on Pattern Recognition, Springer Verlag
Matsuoka, K., & Kawamoto, M. (1994). A Neural Network that Self-Organizes to Perform Three Operations Related to Principal Component Analysis. Neural Networks, vol. 7, no. 5
Moulin, P., & Liu, J. (1999). Analysis of Multiresolution Image Denoising Schemes Using Generalized-Gaussian and Complexity Priors. IEEE Transactions on Information Theory, Special Issue on Multiscale Analysis
Oja, E. (1992). Principal Components, Minor Components and Linear Neural Networks. Neural Networks, vol. 5
Pitas, I. (1993). Digital Image Processing Algorithms. Prentice Hall
Sonka, M., & Hlavac, V. (1997). Image Processing, Analysis and Machine Vision. Chapman & Hall Computing
State, L., Cocianu, C., & Vlamos, P. (2001). A Regressive Technique of Image Restorations. In: Proceedings of the 29th ICC&IE, Nov. 1-3, 2001, Montreal, Canada
State, L., Cocianu, C., Stefanescu, V., & Vlamos, P. (2005). Noise Removal Algorithm Based on Code Shrinkage Techniques. In: Proceedings of WMSCI 2005, Orlando, USA
