ABSTRACT: In many applications satellite images are used on the basis of resolution, and high resolution is one of the major issues in remotely sensed imagery. In this paper, we propose a new pan-sharpening technique that enhances the resolution of a satellite image by injecting the high-frequency details of a High-Resolution Panchromatic (HRP) image into a Low-Resolution Multi-Spectral (LRMS) image using the Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT). The SWT algorithm is designed to overcome the lack of translation invariance of the DWT and is used to enhance edges at the intermediate stage while preserving spatial information. Translation invariance is attained by eliminating the down-samplers and up-samplers present in the DWT. Results show that the proposed fusion method performs better than state-of-the-art methods in terms of visual quality and several other frequently used metrics, such as the Correlation Coefficient, Peak Signal-to-Noise Ratio and Root Mean Square Error.
Keywords: Image Fusion, Pan-sharpening, Discrete Wavelet Transform, Stationary Wavelet Transform, Quality Metrics
INTRODUCTION
Image fusion [1], [2] is a widely adopted technique for improving the quality of remotely sensed images. Multi-spectral sensors that provide rich spectral information are often accompanied by panchromatic sensors that provide data of the same scene at high spatial resolution. Pan-sharpening is an image fusion method that integrates both kinds of information to obtain better spatial and spectral resolution while avoiding distortions. Many pan-sharpening methods have been developed over the decades, and the classical methods are reviewed here for better insight into the technique. In the IHS (Intensity, Hue and Saturation) method, three bands of a multispectral (MS) image are taken as the three components (RGB) of a color image. The PAN image replaces the intensity component, and the pan-sharpened image is generated by the inverse IHS transform. This method maintains the same spatial resolution as the panchromatic image but distorts the spectral (color) characteristics to varying degrees. The Generalized IHS method [3], [4] was introduced to overcome the color distortion of the IHS method, which arises from the change of saturation during fusion. The Brovey Transform (BT) is one of the simplest image fusion techniques: each MS band is multiplied by the ratio of the PAN band to the sum of the MS bands. BT is limited to three bands and also suffers radiometric distortions due to the multiplicative factors. Thus, the standard image fusion methods distort the spectral characteristics of the multispectral data when merging the panchromatic image with the multispectral image. To overcome the problems of the classical methods, trade-off methods such as adaptive IHS [4], IHS + PCA [5], IHS + Wavelet [6], the Wavelet Decomposition method [7], and Multi-scale Geometric Analysis (MGA) based methods are used. Transforms such as the Curvelet [8], Bandlet [9] and Contourlet [10] are a few of the MGA tools used in the pan-sharpening process. Compared with the classical methods, these methods have the advantage of preserving spectral information; however, the MGA methods cannot fully balance the spatial resolution. Recently, sparse-fusion-based pan-sharpening methods [11], [12] have become very popular. Their distinguishing feature is that the LRMS image is not directly injected into the HR PAN image. Instead, a relationship between the low- and high-resolution images is established by training a dictionary on a training set, and the information of the low-resolution image is then used, according to this relationship, to obtain its high-resolution version. The sparse-fusion-based approach thus offers a pan-sharpening process that improves resolution without an injection step. A good pan-sharpening result can be obtained for images with continuous boundaries, but for images with discontinuities, detail loss always appears [13]; the sparse-based method is therefore not suitable for images with discontinuous lines. A further problem of sparse-based fusion methods is that dictionary learning is complex [14] and the run time is high. To remove the dependence of the pan-sharpened result on dictionary training and the difficulty in representing detail information, a new pan-sharpening algorithm using the SWT and DWT is proposed; it reduces the run time and is less complex than the sparse-based fusion methods, and the SWT considerably preserves the spatial information. The remainder of the paper is structured as follows. Section 2 presents the proposed methodology. Experimental results and parameter analysis are given in Section 3. Finally, a brief summary is given in Section 4.
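As a concrete illustration of the classical Brovey baseline described above, the transform can be sketched in a few lines. This is an illustrative sketch of our own (the function name and arrays are not from the paper), assuming the MS bands have already been resampled to the PAN grid:

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-12):
    """Brovey Transform pan-sharpening (illustrative sketch).

    ms  : (3, H, W) array of R, G, B multispectral bands, assumed
          already resampled to the PAN grid.
    pan : (H, W) panchromatic band.
    Each MS band is multiplied by the ratio of the PAN band to the
    sum of the MS bands, as described in the text.
    """
    ms = np.asarray(ms, dtype=float)
    pan = np.asarray(pan, dtype=float)
    ratio = pan / (ms.sum(axis=0) + eps)  # per-pixel multiplicative factor
    return ms * ratio                     # broadcast over the three bands
```

Because every output band carries the same per-pixel multiplicative factor, any error in the PAN-to-MS ratio propagates to all three bands at once, which is the source of the radiometric distortion noted above.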
[Fig. 1 flowchart: the LRMS image is resized and registered to the size of the HRP image; a DWT decomposes the input into approximation and detail subbands (A, H, V, D); an SWT is applied to the detail subbands for edge enhancement; the ISWT and then the IDWT reconstruct the PAN-sharpened image.]
Fig. 1 Graphical Representation of the Proposed Method
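The transform chain of Fig. 1 can be sketched with single-level Haar filters. This is a minimal sketch under our own assumptions (the paper does not name the wavelet or the exact injection rule); it shows the forward and inverse DWT and SWT stages between which subbands can be exchanged:

```python
import numpy as np

S2 = np.sqrt(2.0)

def dwt2_haar(x):
    """Single-level 2-D Haar DWT: returns (A, H, V, D) at half size."""
    a = (x[:, 0::2] + x[:, 1::2]) / S2      # row lowpass (decimated)
    d = (x[:, 0::2] - x[:, 1::2]) / S2      # row highpass
    A = (a[0::2, :] + a[1::2, :]) / S2      # approximation
    V = (a[0::2, :] - a[1::2, :]) / S2      # vertical details
    H = (d[0::2, :] + d[1::2, :]) / S2      # horizontal details
    D = (d[0::2, :] - d[1::2, :]) / S2      # diagonal details
    return A, H, V, D

def idwt2_haar(A, H, V, D):
    """Inverse of dwt2_haar (perfect reconstruction)."""
    h, w = A.shape
    a = np.empty((2 * h, w)); d = np.empty((2 * h, w))
    a[0::2], a[1::2] = (A + V) / S2, (A - V) / S2
    d[0::2], d[1::2] = (H + D) / S2, (H - D) / S2
    x = np.empty((2 * h, 2 * w))
    x[:, 0::2], x[:, 1::2] = (a + d) / S2, (a - d) / S2
    return x

def swt2_haar(x):
    """Single-level undecimated (stationary) Haar transform.
    No down-sampling, so all subbands keep the input size; this is
    what gives the SWT its translation invariance."""
    def split(z, axis):
        zs = np.roll(z, -1, axis=axis)      # periodic extension
        return (z + zs) / S2, (z - zs) / S2
    a, d = split(x, 1)
    A, V = split(a, 0)
    H, D = split(d, 0)
    return A, H, V, D

def iswt2_haar(A, H, V, D):
    """Inverse SWT: average the reconstructions of the two shifts."""
    def merge(a, d, axis):
        r0 = (a + d) / S2
        r1 = np.roll((a - d) / S2, 1, axis=axis)
        return (r0 + r1) / 2.0
    a = merge(A, V, 0)
    d = merge(H, D, 0)
    return merge(a, d, 1)
```

Both round trips, `idwt2_haar(*dwt2_haar(x))` and `iswt2_haar(*swt2_haar(x))`, recover `x` exactly, so detail subbands can be enhanced or injected between the forward and inverse stages of Fig. 1 without adding reconstruction error of their own.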
EXPERIMENTAL RESULTS
To demonstrate the effectiveness of the proposed method, simulated groups of experiments were carried out with satellite sensor data, i.e., WorldView-2. The WorldView-2 data set covers buildings, farmland and other areas of Madurai, Tamilnadu, India; the corresponding latitude and longitude are 9.99384052 and 78.14683769. The image consists of one PAN band and eight multispectral bands with 0.46 m and 1.84 m spatial resolutions, respectively. The spectral ranges of the MS bands span from the visible to the NIR and include coastal (400–450 nm), blue (450–510 nm), green (510–580 nm), yellow (585–625 nm), red (630–690 nm), red edge (705–745 nm), near-infrared 1 (770–895 nm) and near-infrared 2 (860–1040 nm). The spectral range of the PAN image covers the interval 450–800 nm. These data were used as multi-spectral and PAN test images to evaluate and compare the performance of the following fusion methods:
• Component substitution methods (IHS, Brovey)
• Multi-resolution analysis (à trous algorithm)
Since component substitution methods such as IHS, Gram-Schmidt (GS) and the Brovey transform deal with only the R, G, B bands, they are unable to account for local dissimilarities between the PAN and MS images, which in turn produces spectral distortions. Among the wavelet-based methods, the À Trous Wavelet Transform (ATWT) fails to preserve the radiometric signal of the low-resolution multispectral image but maintains the relative radiometric signature between the LRMS bands. Additive Wavelet Luminance Proportional (AWLP) is an improved method that can be extended up to L bands. Experiments were carried out with a WorldView-2 source panchromatic image of size 2048×2048 and a multispectral image of size 512×512. For comparison, several other methods, namely Gram-Schmidt Adaptive (GSA), Generalized Laplacian Pyramid (GLP), Indusion and Partial Replacement Adaptive Component Substitution (PRACS), were also used to fuse the LRMS and HRP images. The data fusion experiment is thus carried out with all of these methods.
The Correlation Coefficient between the original and pan-sharpened images is

CC = \frac{\sum_{i,j}(X_{i,j}-\bar{x})(\hat{X}_{i,j}-\bar{\hat{x}})}{\sqrt{\sum_{i,j}(X_{i,j}-\bar{x})^{2}\,\sum_{i,j}(\hat{X}_{i,j}-\bar{\hat{x}})^{2}}}

where \bar{x} and \bar{\hat{x}} are the mean values of the original image X_{i,j} and the pan-sharpened image \hat{X}_{i,j}, respectively.

The Root Mean Square Error is

RMSE = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(I_{r}(i,j)-I_{f}(i,j)\bigr)^{2}}

where I_{r} and I_{f} are the reference and fused images, respectively; the Peak Signal-to-Noise Ratio follows as PSNR = 20\log_{10}(L_{max}/RMSE), with L_{max} the peak pixel value.

The Degree of Distortion is

DD = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left|X_{i,j}-Y_{i,j}\right|

where X_{i,j} is the pixel value of the original image and Y_{i,j} is the pixel value of the pan-sharpened image.

The Structural Similarity Index is

SSIM = \frac{(2\mu_{I_r}\mu_{I_f}+C_1)(2\sigma_{I_r I_f}+C_2)}{(\mu_{I_r}^{2}+\mu_{I_f}^{2}+C_1)(\sigma_{I_r}^{2}+\sigma_{I_f}^{2}+C_2)}

where \mu_{I_r} and \mu_{I_f} are the mean values of the original and pan-sharpened images, respectively, and \sigma_{I_r}^{2} and \sigma_{I_f}^{2} are their variances.
TABLE 1 shows the comparative quality assessment values of several fusion methods and the proposed method. The Correlation Coefficient (CC), Degree of Distortion (DD), Peak Signal-to-Noise Ratio (PSNR), Root Mean Square Error (RMSE) and SSIM of the proposed method show better results than those of the other fusion methods.
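The full-reference metrics of TABLE 1 are straightforward to compute. The sketch below uses our own function names and assumes an 8-bit peak value for PSNR (the paper does not state the radiometric range used):

```python
import numpy as np

def cc(x, y):
    """Correlation Coefficient between reference x and fused y."""
    x, y = np.asarray(x, float).ravel(), np.asarray(y, float).ravel()
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum())

def rmse(x, y):
    """Root Mean Square Error."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sqrt(np.mean((x - y) ** 2))

def psnr(x, y, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB (peak=255 assumes 8-bit data)."""
    return 20.0 * np.log10(peak / rmse(x, y))

def dd(x, y):
    """Degree of Distortion: mean absolute pixel difference."""
    return np.mean(np.abs(np.asarray(x, float) - np.asarray(y, float)))
```

CC is invariant to affine intensity changes, whereas RMSE, PSNR and DD penalize any pixelwise difference; reporting both families, as TABLE 1 does, separates spectral shape from absolute radiometric error.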
3.6 Spectral Distortion Index:
A spectral distortion index D_\lambda can be calculated as:

D_\lambda = \sqrt[p]{\frac{1}{L(L-1)}\sum_{l=1}^{L}\sum_{\substack{r=1 \\ r\neq l}}^{L}\left|Q(F_l,F_r)-Q(M_l,M_r)\right|^{p}}   (6)

where F_l and M_l denote the l-th bands of the fused and original MS images, Q is the universal image quality index, and p is a positive integer exponent chosen to highlight large spectral differences: as p increases, large components are given more relevance [15]. The index is proportional to the p-norm of the difference matrix and equals 0 if and only if the two matrices are identical. If the negative values obtained from anti-correlated bands are clipped below zero, the index is always less than or equal to one.
3.7 Spatial Distortion Index:
A spatial distortion index Ds can be calculated as:
D_s = \sqrt[q]{\frac{1}{L}\sum_{l=1}^{L}\left|Q(F_l,P)-Q(F_l',P')\right|^{q}}   (7)

in which P is the PAN image and P' is a spatial degradation of the PAN image, obtained by filtering with a lowpass filter having normalized frequency cutoff at the resolution ratio between MS and PAN, followed by decimation; F_l' denotes the l-th band of the original low-resolution MS image. D_s is proportional to the q-norm of the difference vector, where q may be chosen so as to emphasize higher difference values. When the two vectors are identical, D_s attains its minimum (equal to zero).
3.8 Jointly Spectral and Spatial Quality Index:
D_\lambda and D_s respectively measure the spectral and spatial changes occurring between the resampled original and the fused images. Either index alone may not be sufficient to rank the performance of fusion methods; instead, the Quality with No Reference (QNR) index is introduced:

QNR = (1-D_\lambda)^{\alpha}\,(1-D_s)^{\beta}   (8)

Jointly, the two exponents \alpha and \beta determine the non-linearity of the response in the interval [0, 1], like a gamma correction, to achieve better discrimination among the compared fusion results: when both the spectral and spatial distortions are zero, QNR equals one. The main advantage of this index is that the global quality of a fusion product can be assessed at the full scale of the PAN image, in spite of the lack of a reference data set.
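Equations (6)-(8) can be sketched as follows, using a global (single-window) version of the universal image quality index Q; the block-wise Q of [15] and the function names here are simplifying assumptions of our own:

```python
import numpy as np

def uiqi(x, y):
    """Universal Image Quality Index Q (global version, single window)."""
    x, y = np.asarray(x, float).ravel(), np.asarray(y, float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return 4.0 * cxy * mx * my / ((vx + vy) * (mx**2 + my**2))

def d_lambda(fused, ms, p=1):
    """Eq. (6): inter-band Q differences between fused and original MS bands."""
    L = len(fused); s = 0.0
    for l in range(L):
        for r in range(L):
            if r != l:
                s += abs(uiqi(fused[l], fused[r]) - uiqi(ms[l], ms[r])) ** p
    return (s / (L * (L - 1))) ** (1.0 / p)

def d_s(fused, ms, pan, pan_lr, q=1):
    """Eq. (7): fused bands vs PAN, original MS bands vs degraded PAN.
    fused bands must match pan in size; ms bands must match pan_lr."""
    L = len(fused); s = 0.0
    for l in range(L):
        s += abs(uiqi(fused[l], pan) - uiqi(ms[l], pan_lr)) ** q
    return (s / L) ** (1.0 / q)

def qnr(dl, ds, alpha=1.0, beta=1.0):
    """Eq. (8): jointly spectral and spatial quality index."""
    return (1.0 - dl) ** alpha * (1.0 - ds) ** beta
```

As a sanity check, feeding identical fused and original MS bands (with matching PAN inputs) drives both distortion indices to zero and QNR to its maximum of one, as the text states.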
TABLE 2 shows the comparative quality assessment values of several fusion methods and the proposed method. The D_\lambda, D_s and QNR values of the proposed method are better than those of the other fusion methods.
CONCLUSION
The proposed work owes its advantage to its simplicity and to its reduction of both spectral and spatial distortions. It uses a DWT that decomposes an image into four subbands, followed by an SWT. The subbands containing the high-frequency components (details) are interpolated, and the ISWT is taken for the interpolated image; the ISWT is followed by the IDWT to obtain a high-resolution multispectral image. The SWT helps preserve the edges. The proposed technique has been compared with several existing methods and found to achieve superior visual results. Quality measures such as the Correlation Coefficient and PSNR are higher for the proposed method than for the conventional techniques. The best D_\lambda value is also obtained by the proposed method, which implies that it can effectively preserve the spectral information in the fusion process. Overall, the quantitative assessment results are consistent with the visual evaluation and are quite robust. It can therefore be concluded that the proposed pan-sharpening method performs best from both the spatial and the spectral perspectives.
REFERENCES
[1] T. Taxt and A. H. Schistad-Solberg, "Data fusion in remote sensing," in Fifth Workshop on Data Analysis in Astronomy, V. Di Gesu and L. Scarsi, Eds., Erice, Italy, Nov. 1996.
[2] T. Ranchin, B. Aiazzi, L. Alparone, S. Baronti, and L. Wald, "Image fusion—The ARSIS concept and some successful implementation schemes," ISPRS J. Photogramm. Remote Sens., vol. 58, no. 1/2, pp. 4–18, Jun. 2003.
[3] T.-M. Tu, S.-C. Su, H.-C. Shyu, and P. S. Huang, "A new look at IHS-like image fusion methods," Inf. Fusion, vol. 2, no. 3, pp. 177–186, Sep. 2001.
[4] S. Rahmani, M. Strait, D. Merkurjev, M. Moeller, and T. Wittman, "An adaptive IHS pan-sharpening method," IEEE Geosci. Remote Sens. Lett., vol. 7, no. 4, pp. 746–750, Oct. 2010.
[5] M. González-Audícana, J. L. Saleta, R. G. Catalán, and R. García, "Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition," IEEE Trans. Geosci. Remote Sens., vol. 42, no. 6, pp. 1291–1299, Jun. 2004.
[6] C. Ballester, V. Caselles, L. Igual, J. Verdera, and B. Rougé, "A variational model for P+XS image fusion," Int. J. Comput. Vis., vol. 69, no. 1, pp. 43–58, Aug. 2006.
[7] J. Núñez, X. Otazu, O. Fors, A. Prades, V. Palà, and R. Arbiol, "Multiresolution-based image fusion with additive wavelet decomposition," IEEE Trans. Geosci. Remote Sens., vol. 37, no. 3, pp. 1204–1211, May 1999.
[8] A. Garzelli, F. Nencini, L. Alparone, and S. Baronti, "Multiresolution fusion of multispectral and panchromatic images through the curvelet transform," in Proc. IEEE IGARSS, vol. 4, pp. 2838–2841, Jul. 2005.
[9] X. Qu, J. Yan, G. Xie, Z. Zhu, and B. Chen, "A novel image fusion algorithm based on bandelet transform," Chin. Opt. Lett., vol. 5, no. 10, pp. 569–572, Oct. 2007.
[10] S. Y. Yang, M. Wang, L. C. Jiao, R. Wu, and Z. Wang, "Image fusion based on a new contourlet packet," Inf. Fusion, vol. 11, no. 2, pp. 78–84, Apr. 2010.
[11] C. Jiang, H. Zhang, H. Shen, and L. Zhang, "Two-step sparse coding for the pan-sharpening of remote sensing images," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 7, no. 5, pp. 1792–1805, May 2014.
[12] X. X. Zhu and R. Bamler, "A sparse image fusion algorithm with application to pan-sharpening," IEEE Trans. Geosci. Remote Sens., vol. 51, no. 5, pp. 2827–2836, May 2013.
[13] D. Liu and P. T. Boufounos, "Dictionary learning based pan-sharpening," in Proc. IEEE ICASSP, pp. 2397–2400, Mar. 2012.
[14] M. Guo et al., "An online coupled dictionary learning approach for remote sensing image fusion," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 7, no. 4, Apr. 2014.
[15] L. Alparone, B. Aiazzi, et al., "Multispectral and panchromatic data fusion assessment without reference," Photogramm. Eng. Remote Sens., 2008.