
Invention Journal of Research Technology in Engineering & Management (IJRTEM) ISSN: 2455-3689

www.ijrtem.com ǁ Volume 1 ǁ Issue 10 ǁ 2017 ǁ

PAN Sharpening of Remotely Sensed Images using Undecimated Multiresolution Decomposition

S. Hajeera Banu¹, Dr. B. Sathya Bama²
¹(Department of Electronics and Communication, Thiagarajar College of Engineering, India)
²(Department of Electronics and Communication, Thiagarajar College of Engineering, India)

ABSTRACT: Many applications use satellite images on the basis of resolution, and obtaining high resolution is one of the major challenges with remotely sensed imagery. In this paper, we propose a new pan-sharpening technique that enhances the resolution of a satellite image by injecting the high-frequency details of a High-Resolution Panchromatic (HRP) image into a Low-Resolution Multi-Spectral (LRMS) image using the Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT). The SWT is designed to overcome the lack of translation invariance of the DWT and is used to enhance edges at the intermediate stage while preserving spatial information; translation invariance is attained by eliminating the down-samplers and up-samplers present in the DWT. Results show that the proposed fusion method performs better than state-of-the-art methods in terms of visual quality and several frequently used metrics, such as the Correlation Coefficient, Peak Signal-to-Noise Ratio and Root Mean Square Error.
Keywords: Image Fusion, Pan-sharpening, Discrete Wavelet Transform, Stationary Wavelet Transform, Quality Metrics

INTRODUCTION
Image fusion [1], [2] is a widely adopted technique for improving the quality of remotely sensed images. Multi-spectral sensors, which provide rich spectral information, are often accompanied by panchromatic sensors that provide data of the same scene with high spatial resolution. Pan-sharpening is an image fusion method that integrates both kinds of information to obtain better spatial and spectral resolution while avoiding distortions. Over the decades many pan-sharpening methods have been developed, and the classical methods are reviewed here for a better insight into the technique. In the IHS (Intensity, Hue and Saturation) method, three bands of a multispectral (MS) image are taken as the three components (RGB) of a color image, the PAN image replaces the intensity component, and the pan-sharpened image is generated with the inverse IHS transform. This method maintains the spatial resolution of the panchromatic image but distorts the spectral (color) characteristics to varying degrees. The generalized IHS [3], [4] method was introduced to overcome the color distortion of the IHS method, which arises from the change of saturation during fusion. The Brovey Transform (BT) is one of the simplest image fusion techniques: each MS band is multiplied by the ratio of the PAN band to the sum of the MS bands. BT is limited to three bands and also suffers radiometric distortion due to the multiplicative factors. Thus the standard image fusion methods distort the spectral characteristics of the multispectral data while merging the panchromatic image with the multispectral image.

To overcome the problems of the classical methods, trade-off methods such as adaptive IHS [4], IHS + PCA [5], IHS + Wavelet [6], the wavelet decomposition method [7], and Multi-scale Geometric Analysis (MGA) based methods are used. Transforms such as the Curvelet [8], Bandlet [9] and Contourlet [10] are among the MGA tools used in the pan-sharpening process. Compared with the classical methods, these methods have the advantage of preserving spectral information; however, the MGA methods fail to balance the spatial resolution. Recently, sparse-fusion based pan-sharpening methods have become very popular. The distinguishing feature of the sparse-fusion based methods [11], [12] is that the detail information of the HR PAN image is not directly injected into the LRMS image. Instead, a relationship between the low- and high-resolution images is established by training a dictionary on a training set, and according to this relationship the information of the low-resolution image is used to obtain its high-resolution version. The sparse-fusion based methods thus offer a pan-sharpening process without an injection step. Good pan-sharpening results can be obtained for images with continuous boundaries, but for images with discontinuities detail loss always appears [13]; the sparse-based methods are therefore not suitable for images with discontinuous lines. A further problem with sparse-based fusion is that dictionary learning is complex [14] and the run time is high. To remove the dependence of the pan-sharpened result on dictionary training and the disadvantage in representing detail information, a new pan-sharpening algorithm using the SWT and DWT is proposed that reduces the run time and is less complex than the sparse-based fusion methods; the SWT considerably preserves the spatial information.

The remainder of the paper is structured as follows. Section 2 presents the proposed methodology. Section 3 reports the experimental results and parameter analysis. Finally, Section 4 gives a brief summary.

PROPOSED PAN SHARPENING METHOD


Various methods have been used for pan sharpening. The conventional methods reviewed in the literature suffer from high computational complexity and poor spectral characteristics. In this paper, we propose a new technique that combines two wavelet transforms (DWT and SWT) to enhance the resolution of an image.


[Fig. 1 shows the processing chain: the HRP and LRMS inputs are registered and the LRMS is resized to half the HRP size; the PAN image is histogram-matched. A first-level DWT of the matched PAN yields (Ar, Hr, Vr, Dr); an SWT of Ar yields (Ar1, Hr1, Vr1, Dr1), and an SWT of the resized LRMS yields (lAr, lHr, lVr, lDr). The details (Hr1, Vr1, Dr1) are injected into lAr; the inverse SWT gives Gh, and the inverse DWT of (Gh, Hr, Vr, Dr) gives the PAN-sharpened image.]

Fig.1 Graphical Representation of the Proposed Method

The discrete wavelet transform decomposes an input image into different subbands, with decimation at each decomposition level. The resulting loss of information is compensated with the stationary wavelet transform: the SWT is an inherently redundant scheme in which the output of each level contains the same number of samples as the input. The proposed method therefore overcomes the problem of the conventional methods and improves the visual quality of the image. The pan-sharpening algorithm is derived by exploiting the Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT); its flow chart is given in Fig. 1.

For the fusion to be efficient, the images should be well correlated. Histogram matching serves this purpose: the intensity component of the LRMS image is kept as a reference against which the histogram of the panchromatic image is matched. In pan-sharpening, the LRMS image is enhanced by injecting the high-frequency components. After resolution enhancement by interpolation, preserving edges is essential, because the major loss in an image is in its high-frequency components, due to the smoothing caused by interpolation; this is avoided by using the stationary wavelet transform. The method thus increases the spatial resolution of the multispectral image while simultaneously preserving its spectral information.

In this work, discrete and stationary wavelet transforms are employed to preserve the spatial information of the high-frequency components. A first-level DWT decomposes the histogram-matched PAN image into the subband images (Ar, Hr, Vr, Dr). A second-level SWT is applied to the low-frequency component (Ar), preserving the spatial information of the edge details (Hr1, Vr1, Dr1). Simultaneously, the SWT is applied to the resized LRMS image, preserving the spatial details of the spectral information. The spatially preserved edge details (Hr1, Vr1, Dr1) are injected into the spectral information (lAr). To maintain both spatial and spectral information, the inverse SWT is applied to the injected components to obtain Gh. The edge details extracted from the first-level decomposition are then injected into Gh, and the inverse DWT is taken to obtain the PAN-sharpened image.
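The histogram-matching step described above can be sketched as a quantile mapping, assuming the PAN image and the LRMS intensity component are given as 2-D NumPy arrays; the function name `match_histogram` is illustrative, not from the paper:

```python
import numpy as np

def match_histogram(pan, reference):
    """Map PAN grey levels so its histogram matches the reference
    (here, the intensity component of the LRMS image)."""
    pan_flat = pan.ravel()
    order = np.argsort(pan_flat)
    ref_sorted = np.sort(reference.ravel())
    # Each PAN pixel is replaced by the reference value at the same
    # cumulative rank; np.interp handles differing pixel counts.
    ranks = np.linspace(0.0, 1.0, pan_flat.size)
    ref_ranks = np.linspace(0.0, 1.0, ref_sorted.size)
    matched = np.empty_like(pan_flat, dtype=float)
    matched[order] = np.interp(ranks, ref_ranks, ref_sorted)
    return matched.reshape(pan.shape)
```

After matching, the PAN image spans the same grey-level range as the reference, which makes the later detail injection better behaved.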

The Proposed Pan Sharpening Algorithm comprises the following steps:


1. Obtain the High-Resolution Panchromatic (HRP) and Low-Resolution Multi-Spectral (LRMS) images.
2. Register the low-resolution multispectral image to the high-resolution panchromatic image, then perform histogram matching between the panchromatic image and the registered multi-spectral image.
3. Apply the Discrete Wavelet Transform (DWT) to the histogram-matched image to obtain the four components LL (Ar), LH (Hr), HL (Vr) and HH (Dr).
4. Apply the Stationary Wavelet Transform (SWT) to the LL component (Ar) and preserve its edge information (Hr1, Vr1, Dr1).
5. Apply the SWT to the resized LRMS image and preserve its approximation component (lAr).
6. Inject the details (Hr1, Vr1, Dr1) of the HRP into the preserved approximation component (lAr) of the LRMS.
7. Apply the inverse SWT (ISWT) to the result and inject the details (Hr, Vr, Dr) from the first-level wavelet decomposition.
8. Apply the inverse DWT (IDWT) to obtain the PAN-sharpened image.
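Steps 3-8 can be sketched with PyWavelets for a single LRMS band that is already registered, resized to half the HRP size, and for a histogram-matched PAN image; the variable names follow the paper's notation, but the wavelet choice (`haar`) is an assumption:

```python
import pywt

def pan_sharpen(pan_matched, lrms_resized, wavelet="haar"):
    """One-band sketch of the proposed DWT+SWT pan-sharpening pipeline."""
    # Step 3: first-level DWT of the histogram-matched PAN image.
    Ar, (Hr, Vr, Dr) = pywt.dwt2(pan_matched, wavelet)
    # Step 4: SWT of the PAN approximation band; keep its details.
    [(Ar1, (Hr1, Vr1, Dr1))] = pywt.swt2(Ar, wavelet, level=1)
    # Step 5: SWT of the resized LRMS band; keep its approximation lAr.
    [(lAr, (lHr, lVr, lDr))] = pywt.swt2(lrms_resized, wavelet, level=1)
    # Step 6: inject the PAN details into the LRMS approximation.
    injected = [(lAr, (Hr1, Vr1, Dr1))]
    # Step 7: inverse SWT, then re-attach the first-level PAN details.
    Gh = pywt.iswt2(injected, wavelet)
    # Step 8: inverse DWT yields the pan-sharpened band.
    return pywt.idwt2((Gh, (Hr, Vr, Dr)), wavelet)
```

For a multi-band MS image the function would be applied band by band; image sizes must be even so that the single-level SWT applies (e.g., a 2048×2048 PAN with the LRMS resized to 1024×1024, as in Fig. 1).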

EXPERIMENTAL RESULTS
To demonstrate the effectiveness of the proposed method, simulated groups of experiments were carried out with satellite sensor
data, i.e.,WorldView-2. The WorldView-2 data set covers building, farm land and so on areas of Madurai, Tamilnadu, India. The
corresponding latitude and longitude values are 9.99384052 and 78.14683769. The image consists of one Pan and eight
Multispectral bands with 0.46m and 1.84m spatial resolutions, respectively. The spectral ranges of the MS bands span from the
visible to the NIR, and include coastal (400–450 nm), blue (450–510 nm), green (510–580 nm), yellow (585–625 nm), red (630–690
nm), red edge (705–745 nm), near-infrared 1 (770–895 nm) and near-infrared 2 (860–1040 nm). The spectral range of the Pan
image covers the interval of 450–800 nm. These data were used as Multi-Spectral and PAN test images to evaluate and compare the
performance of the following fusion methods:
• Component substitute method (IHS, BROVEY)
• Multi resolution Analysis (A’Trous algorithm)

Since component substitution method like IHS, Gram Schmidt (GS), BROVEY transform deals with only R,G,B bands, it does not
able to account for local dissimilarities between PAN and MS image which in turn produces spectral distortions. In wavelet based
methods, A’Trous Wavelet Transform (ATWT) fails to preserve the radiometric signal of low resolution multispectral but maintains
the relative radiometric signature between LRM bands. Additive Wavelet Luminance Proportional (AWLP) is said to be an improved
method where it can be extended upto L Bands. Experiments have been carried out with the Worldview2 source panchromatic image
of size 2048X2048 and multispectral image of size 512X512. For comparison, several other methods like Gram Schmidt Adaptive
(GSA), Generalized Laplacian Pyramid (GLP), Indusion, Partial Replacement Adaptive Component Substitution (PRACS) are also
used to fuse the two images of LRMS and HRP. In this paper, data fusion experiment is carried out with different methods. By

| Volume 1 | Issue 10 | www.ijrtem.com | 24 |


PAN Sharpening of Remotely Sensed Images using Undecimated Multiresolution Decomposition
visual comparison of fused images with the original MS image, it can be seen that all the methods can effectively pan -sharpen the
LRMS image. Among the various fusion results, the most serious distortion in spectral characteristics exists in the fusion result of
the GS method, due to the modification of the low frequencies of the original MS image. Here, it can be observed that the fused
images of the AWLP method is blurred to some degree. The results of the proposed method exhibits rich, detailed spatial
information since the idea of no decimation saves the detail coefficients. It can be clearly seen that the proposed method have more
advantages than the other methods in maintaining the spectral information of the original MS image. To facilitate a comparison, a
detailed region is shown in the top left corner of each image. Thus proposed method acquires the best evaluation results. Six typical
quality metrics with reference and Seven typical quality metrics without reference were adopted to evaluate the performance
comparison of our proposed method with other conventional methods. It provides unique measures of the fusion performance for all
the MS bands. The proposed method was compared with commonly used image fusion methods, namely ATWT, AWLP, Band
Dependent (BDSD), BROVEY, GS, GSA, IHS, Indusion, GLP, PRACS.

Fig.2 Fusion results: (a) ATWT (b) AWLP (c) BDSD (d) BROVEY (e) GS (f) GSA (g) IHS (h) Indusion (i) GLP (j) PRACS (k) Proposed result (l) Highlighted region of the proposed result.

From the experiments, we evaluated the performance of the proposed method using several quality measures and found that the proposed fusion method gives good results. To quantify the behaviour of the image fusion methods, we computed quality metrics for the earlier methods and the proposed method. The quality metrics with reference [12] are as follows:

3.1 Correlation Coefficient (ρ):

The correlation coefficient measures the similarity of spectral features between the reference and fused images. A value of CC close to +1 indicates that the reference and fused images are the same [12]; the further CC falls below 1, the greater the variation.

$$\rho = \frac{\sum_{i,j}\left(X_{i,j}-\bar{x}\right)\left(\hat{X}_{i,j}-\bar{\hat{x}}\right)}{\sqrt{\sum_{i,j}\left(X_{i,j}-\bar{x}\right)^{2}}\,\sqrt{\sum_{i,j}\left(\hat{X}_{i,j}-\bar{\hat{x}}\right)^{2}}} \qquad (1)$$

where $\bar{x}$ and $\bar{\hat{x}}$ are the mean values of the original image $X_{i,j}$ and the pan-sharpened image $\hat{X}_{i,j}$, respectively.
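Equation (1) can be computed directly with NumPy; this is a straightforward sketch of the metric, not code from the paper:

```python
import numpy as np

def correlation_coefficient(ref, fused):
    """Correlation coefficient (eq. 1) between reference and fused images."""
    x = ref - ref.mean()
    y = fused - fused.mean()
    return (x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum())
```

The metric is invariant to any affine rescaling of either image, so identical images (or a brightness/contrast-shifted copy) give exactly 1.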

3.2 Peak Signal-to-Noise Ratio (PSNR):

PSNR is a widely used metric: the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. When the value is high, the fused and reference images are similar; a higher value indicates superior fusion.

$$\mathrm{PSNR} = 20\log_{10}\!\left(\frac{L^{2}}{\sqrt{\dfrac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(I_r(i,j)-I_f(i,j)\bigr)^{2}}}\right) \qquad (2)$$

where $I_r$ and $I_f$ are the reference and fused images, respectively, and $L$ is the maximum pixel value.

3.3 Degree of Distortion (DD):

DD is often used to compute the degree of distortion of the fused image. A smaller value of D indicates that the fused image has good quality [12].

$$D = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left|X_{i,j}-Y_{i,j}\right| \qquad (3)$$

where $X_{i,j}$ is the pixel value of the original image and $Y_{i,j}$ is the pixel value of the pan-sharpened image.

3.4 Root Mean Square Error (RMSE):

RMSE compares the reference and fused images by directly computing the variation in pixel values. The RMSE value is zero when the fused image equals the reference image; RMSE is a good indicator of the spectral quality of the fused image [12].

$$\mathrm{RMSE} = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(I_r(i,j)-I_f(i,j)\bigr)^{2}} \qquad (4)$$

where $I_r$ and $I_f$ are the reference and fused images, respectively.
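The difference-based metrics of eqs. (2)-(4) share the per-pixel error term and can be sketched together. Note that this sketch uses the conventional PSNR form 20·log10(L/RMSE), and the peak value L = 255 is an assumption for 8-bit data:

```python
import numpy as np

def rmse(ref, fused):
    """Root mean square error (eq. 4)."""
    return np.sqrt(np.mean((ref - fused) ** 2))

def degree_of_distortion(ref, fused):
    """Mean absolute difference (eq. 3)."""
    return np.mean(np.abs(ref - fused))

def psnr(ref, fused, peak=255.0):
    """Conventional PSNR in dB; infinite for identical images."""
    mse = np.mean((ref - fused) ** 2)
    return np.inf if mse == 0 else 20.0 * np.log10(peak / np.sqrt(mse))
```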

3.5 Structural Similarity Index (SSIM):

SSIM compares the local patterns of pixel intensities between the reference and fused images. The range varies between -1 and 1; a value of 1 indicates that the reference and fused images are identical.

$$\mathrm{SSIM} = \frac{\left(2\mu_{I_r}\mu_{I_f}+C_1\right)\left(2\sigma_{I_rI_f}+C_2\right)}{\left(\mu_{I_r}^{2}+\mu_{I_f}^{2}+C_1\right)\left(\sigma_{I_r}^{2}+\sigma_{I_f}^{2}+C_2\right)} \qquad (5)$$

where $\mu_{I_r}$ and $\mu_{I_f}$ are the mean values of the original and pan-sharpened images, $\sigma_{I_r}^{2}$ and $\sigma_{I_f}^{2}$ their variances, $\sigma_{I_rI_f}$ their covariance, and $C_1$, $C_2$ small constants that stabilize the division.
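A single-window (global) version of eq. (5) can be sketched as follows; the constants C1 = (0.01L)² and C2 = (0.03L)² are a common choice and an assumption here, and the standard SSIM averages this statistic over small sliding windows rather than computing it once globally:

```python
import numpy as np

def ssim_global(ref, fused, L=255.0):
    """Global (single-window) SSIM of eq. (5)."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_r, mu_f = ref.mean(), fused.mean()
    var_r, var_f = ref.var(), fused.var()
    cov = ((ref - mu_r) * (fused - mu_f)).mean()
    return ((2 * mu_r * mu_f + C1) * (2 * cov + C2)) / \
           ((mu_r ** 2 + mu_f ** 2 + C1) * (var_r + var_f + C2))
```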

TABLE 1. Computed Quality Assessment Values (With Reference) of Various Methods

Method            MSE     PSNR    RMSE    DD       CC       SSIM
ATWT              804.9   19.07   28.37   19.55    0.811    0.658
AWLP              651.1   19.99   25.52   17.73    0.848    0.718
BDSD              1305    16.97   36.12   26.51    0.736    0.530
BROVEY            1107    17.69   33.28   24.60    0.719    0.501
GS                962.9   18.29   31.03   22.84    0.756    0.553
GSA               823.4   18.97   28.69   21.12    0.827    0.677
IHS               669.5   19.87   25.87   19.46    0.833    0.670
Indusion          1333    16.88   36.52   28.25    0.741    0.411
GLP               582.1   20.48   24.13   16.53    0.868    0.753
PRACS             611.4   20.27   24.73   17.69    0.858    0.736
Proposed Method   129.7   27.00   11.39   7.7934   0.9880   0.9656

TABLE 1 shows the comparative quality assessment values of the several fusion methods and the proposed method. The proposed method gives the best MSE, Peak Signal-to-Noise Ratio (PSNR), Root Mean Square Error (RMSE), Degree of Distortion (DD), Correlation Coefficient (CC) and SSIM values when compared with the other fusion methods.

Some of the quality metrics without reference [15] are as follows:


3.6 Spectral Distortion Index:
The spectral distortion index is defined from the differences between the inter-band quality indices of the low-resolution MS image and of the fused image. The index, referred to as $D_\lambda$, is calculated as

$$D_\lambda = \sqrt[p]{\frac{1}{L(L-1)}\sum_{l=1}^{L}\sum_{\substack{r=1 \\ r\neq l}}^{L}\left|Q(F_l,F_r)-Q(F_l',F_r')\right|^{p}} \qquad (6)$$

where $F_l$ are the bands of the fused image, $F_l'$ those of the original MS image, $Q$ a band-to-band quality index, and $p$ a positive integer exponent chosen to highlight large spectral differences: as $p$ increases, large components are given more relevance [15]. The index is proportional to the p-norm of the difference matrix and equals 0 if and only if the two matrices are identical. If negative values, obtained for anti-correlated bands, are clipped to zero, then the index is always less than or equal to one.
3.7 Spatial Distortion Index:
A spatial distortion index $D_s$ can be calculated as

$$D_s = \sqrt[q]{\frac{1}{L}\sum_{l=1}^{L}\left|Q(F_l,P)-Q(F_l',P')\right|^{q}} \qquad (7)$$

in which $P$ is the PAN image and $P'$ is a spatial degradation of the PAN image, obtained by filtering with a lowpass filter having normalized frequency cutoff at the resolution ratio between MS and PAN, followed by decimation. $D_s$ is proportional to the q-norm of the difference vector, where $q$ may be chosen to emphasize larger difference values. When the two vectors are identical, $D_s$ attains its minimum (zero).
3.8 Jointly Spectral and Spatial Quality Index:
$D_\lambda$ and $D_s$ respectively measure the spectral and spatial changes occurring between the resampled original and the fused images, and either alone may not suffice to rank fusion methods. The Quality with No Reference (QNR) index [15] is therefore introduced as the product of the two distortion indices, each raised to a real-valued exponent that sets the relative weight of spectral and spatial distortion in the overall quality:

$$\mathrm{QNR} = \left(1-D_\lambda\right)^{\alpha}\left(1-D_s\right)^{\beta} \qquad (8)$$

Jointly, the two exponents determine the non-linearity of the response in the interval [0, 1], like a gamma correction, to achieve better discrimination among the compared fusion results: when both the spectral and spatial distortions are zero, QNR equals one. The main advantage of this index is that the global quality of a fusion product can be assessed at the full scale of the PAN image, in spite of the lack of a reference data set.
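The three no-reference indices of eqs. (6)-(8) can be sketched together, assuming Q is the global Universal Image Quality Index (Wang-Bovik) and that each image is passed as a list of 2-D band arrays; the defaults p = q = α = β = 1 are a common choice and an assumption here:

```python
import numpy as np

def uqi(x, y):
    """Universal Image Quality Index Q, computed globally."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((x.var() + y.var()) * (mx ** 2 + my ** 2))

def d_lambda(ms, fused, p=1):
    """Spectral distortion (eq. 6); ms and fused are lists of bands."""
    L = len(ms)
    s = sum(abs(uqi(fused[l], fused[r]) - uqi(ms[l], ms[r])) ** p
            for l in range(L) for r in range(L) if r != l)
    return (s / (L * (L - 1))) ** (1.0 / p)

def d_s(ms, fused, pan, pan_lowres, q=1):
    """Spatial distortion (eq. 7); pan_lowres is PAN degraded to MS scale."""
    L = len(ms)
    s = sum(abs(uqi(fused[l], pan) - uqi(ms[l], pan_lowres)) ** q
            for l in range(L))
    return (s / L) ** (1.0 / q)

def qnr(dl, ds, alpha=1.0, beta=1.0):
    """Quality with No Reference (eq. 8)."""
    return (1.0 - dl) ** alpha * (1.0 - ds) ** beta
```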

3.9 Spectral Angle Mapper (SAM):

SAM denotes the absolute value of the spectral angle between the two pixel vectors $a$ and $a'$. A value of zero means there is no spectral distortion, although radiometric distortion is still possible (the two vectors have different lengths but are parallel) [15]. SAM is measured in radians or degrees and is averaged over the entire image to obtain a global spectral distortion measure.

$$\mathrm{SAM} = \arccos\left(\frac{\langle a, a'\rangle}{\|a\|_{2}\,\|a'\|_{2}}\right) \qquad (9)$$

TABLE 2. Computed Quality Assessment Values (Without Reference) of Various Methods

Method            D_lambda   D_S      QNR      SAM
ATWT              0.0209     0.0301   0.9496   1.1018
AWLP              0.0207     0.0303   0.9496   1.1156
BDSD              0.0076     0.0320   0.9606   1.4978
BROVEY            0.0074     0.0707   0.9224   1.1173
GS                0.0242     0.0725   0.9051   1.3231
GSA               0.0470     0.0759   0.8807   1.1825
IHS               0.0242     0.0723   0.9052   1.2798
Indusion          0.0485     0.3593   0.6096   1.4460
GLP               0.0344     0.0420   0.9250   1.0865
PRACS             0.0283     0.0456   0.9274   1.2119
Proposed Method   0.0093     0.0048   0.9859   1.9973

TABLE 2 shows the comparative quality assessment values of the several fusion methods and the proposed method. The proposed method gives the best D_S and QNR values, and a D_lambda close to the best, when compared with the other fusion methods.

CONCLUSION
This proposed work owes its advantage in its simplicity and reducing the spectral and spatial distortions. This uses DWT that
decompose an image into four subbands which is then followed by SWT. The subbands having high frequency components (details)
are interpolated and ISWT have been taken for the interpolated image. ISWT is followed by IDWT to get a high resolution
multispectral image. SWT helps in preserving the edges. The proposed technique has been compared with some existing methods
and found to acheive superior visual results. Quality measures like Correlation Coefficient and PSNR seems to be higher for the
proposed method when compared to the conventional techniques. The best Dλ value is obtained by the proposed method, which
implies that this method can effectively preserve the spectral information in the fusion process. Overall, the quantitative assessment
results are consistent with the visual evaluation and is quite robust. It performs well and achieves the best fusion result. Therefore, it
can be concluded that the Proposed Pan Sharpening Method performs the best with respect to both the spatial and spectral
perspectives.


REFERENCES
[1] T. Taxt and A. H. Schistad-Solberg, "Data fusion in remote sensing," in Fifth Workshop on Data Analysis in Astronomy, V. Di Gesu and L. Scarsi, Eds., Erice, Italy, Nov. 1996.
[2] T. Ranchin, B. Aiazzi, L. Alparone, S. Baronti, and L. Wald, "Image fusion—The ARSIS concept and some successful implementation schemes," ISPRS J. Photogramm. Remote Sens., vol. 58, no. 1/2, pp. 4–18, Jun. 2003.
[3] T.-M. Tu, S.-C. Su, H.-C. Shyu, and P. S. Huang, "A new look at IHS-like image fusion methods," Inf. Fusion, vol. 2, no. 3, pp. 177–186, Sep. 2001.
[4] S. Rahmani, M. Strait, D. Merkurjev, M. Moeller, and T. Wittman, "An adaptive IHS pan-sharpening method," IEEE Geosci. Remote Sens. Lett., vol. 7, no. 4, pp. 746–750, Oct. 2010.
[5] M. González-Audícana, J. L. Saleta, R. G. Catalán, and R. García, "Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition," IEEE Trans. Geosci. Remote Sens., vol. 42, no. 6, pp. 1291–1299, Jun. 2004.
[6] C. Ballester, V. Caselles, L. Igual, J. Verdera, and B. Rouge, "A variational model for P+XS image fusion," Int. J. Comput. Vis., vol. 69, no. 1, pp. 43–58, Aug. 2006.
[7] J. Núñez, X. Otazu, O. Fors, A. Prades, V. Pala, and R. Arbiol, "Multiresolution-based image fusion with additive wavelet decomposition," IEEE Trans. Geosci. Remote Sens., vol. 37, no. 3, pp. 1204–1211, May 1999.
[8] A. Garzelli, F. Nencini, L. Alparone, and S. Baronti, "Multiresolution fusion of multispectral and panchromatic images through the curvelet transform," Proc. IEEE IGARSS, vol. 4, pp. 2838–2841, Jul. 2005.
[9] X. Qu, J. Yan, G. Xie, Z. Zhu, and B. Chen, "A novel image fusion algorithm based on bandelet transform," Chin. Opt. Lett., vol. 5, no. 10, pp. 569–572, Oct. 2007.
[10] S. Y. Yang, M. Wang, L. C. Jiao, R. Wu, and Z. Wang, "Image fusion based on a new contourlet packet," Inf. Fusion, vol. 11, no. 2, pp. 78–84, Apr. 2010.
[11] C. Jiang, H. Zhang, H. Shen, and L. Zhang, "Two-step sparse coding for the pan-sharpening of remote sensing images," IEEE Trans. Geosci. Remote Sens., vol. 5, no. 7, p. 1792, May 2014.
[12] X. X. Zhu and R. Bamler, "A sparse image fusion algorithm with application to pan-sharpening," IEEE Trans. Geosci. Remote Sens., vol. 5, no. 7, pp. 2827–2836, May 2013.
[13] D. Liu and P. T. Boufounos, "Dictionary learning based pan-sharpening," IEEE Trans. Geosci. Remote Sens., vol. 5, no. 7, pp. 2397–2400, Jan. 2012.
[14] M. Guo et al., "An online coupled dictionary learning approach for remote sensing image fusion," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 7, no. 4, Apr. 2014.
[15] L. Alparone, B. Aiazzi, et al., "Multispectral and panchromatic data fusion assessment without reference," Photogramm. Eng. Remote Sens., 2008.
