
CHAPTER 1

INTRODUCTION

Image fusion combines data from multiple source images using advanced fusion techniques, including fusion frameworks, schemes and algorithms. The main purpose is the integration of disparate and complementary data to enhance the information apparent in the images and to increase the reliability of the interpretation, as shown in Figure 1.1.

1.2 Motivation:

Fusion leads to more accurate data [1] and increased utility, and it can also improve the quality and widen the application of these data. Higher spatial information in one band can be combined with higher spectral information in another dataset to create a single, more informative image.

Implement of hybrid image fusion technique for feature enhancement in medical diagnosis

With advancements in technology, it is now possible to obtain information from multi-source images and produce a high-quality fused image with both spatial and spectral information. The main aims of image fusion are to:

1) Retain the important information.

2) Create a new image that is more suitable for human/machine perception or for further processing tasks.

Fusion provides increased confidence, reduced ambiguity, improved reliability and improved classification [2], and the fused image is more suitable for visual perception and for digital processing.

Image fusion is generally applied to digital imagery in the following applications that are valuable in human life:

1) Medical imaging

2) Microscopic imaging

3) Remote sensing

4) Robotics

5) Battle field surveillance

6) Automated target recognition

7) Guidance and control of autonomous vehicles.

Our project is related to multimodal medical image fusion. Generally, for a physician to analyze the condition of a patient, in most cases he needs to study different images like MRI, CT, PET and SPECT. This is a time-consuming process, so our idea is to fuse all these images into a single image to provide a better diagnosis.


1.3 Objective:

1) General objective: To study image fusion techniques like the wavelet, curvelet and contourlet transforms.

2) Specific objective: To compute and analyze performance metrics of the fused images obtained from the proposed methods (the wavelet-contourlet and curvelet-contourlet transforms) and compare them with the existing method in terms of PSNR, MSE and entropy, in order to arrive at the fusion technique that gives the better diagnosis.

In recent years, multimodal image fusion algorithms and devices have evolved into a powerful tool in the clinical applications of medical imaging techniques. They have shown significant achievements in improving the clinical accuracy of diagnosis based on medical images. The main motivation is to bring the most relevant information from different sources into a single output, which plays a crucial role in medical diagnosis.

Medical imaging has gained significant attention due to its predominant role in health care. Some of the imaging modalities used nowadays are X-ray, computed tomography (CT), magnetic resonance imaging (MRI) and magnetic resonance angiography (MRA). These techniques are used for extracting clinical information; although that information is complementary most of the time, some of it is unique to the specific imaging modality used.

For example,


CT: Used to provide accurate information about calcium deposits, air and dense structures like bone with less distortion, as well as acute bleeds and tumours; however, it cannot detect physiological changes.

MRI: Under a strong magnetic field and radio-wave energy, information about the nervous system and structural abnormalities of soft tissues and muscles can be better visualized.

Functional imaging modalities such as PET and SPECT allow relative changes over time to be monitored as a disease process evolves, or in response to a specific stimulus, by looking at blood flow, metabolism, neurotransmitters and radio-labelled drugs.

PET: Provides metabolic information; it helps to diagnose and stage a cancer.

fMRI: A procedure using MRI technology that measures brain activity by detecting changes associated with blood flow.

Hence, we can see that none of these modalities is able to carry all the relevant information in a single image, so anatomical and functional medical images need to be combined for a concise view. For this purpose, multimodal medical image fusion has been identified as the approach with the better potential. It aims to integrate information from multiple modalities to obtain a more complete and accurate description of the same object, which facilitates more precise diagnosis and better treatment. The fused image provides higher accuracy and reliability by removing redundant information.

The applications of image fusion are found in radiology, molecular and brain

imaging, oncology, diagnosis of cardiac diseases, neuro-radiology and ultrasound.

Multimodal medical image fusion helps in diagnosing diseases and is also cost effective, since only a single fused image needs to be stored instead of multiple source images.

Many fusion techniques are available to perform medical image fusion, but so far none provides fully satisfactory results. Fusion techniques are broadly classified into two groups: the spatial and the spectral domain. Spatial domain techniques deal directly with the pixels of an image, which leads to spatial distortion in the fused image; they do not give directional information and also lead to spectral distortion, while arithmetic combination loses original details as a result of the low contrast of the fused image. This becomes a negative factor when we go on to further processing of the fused image, such as a classification problem.

Transform domain techniques decompose the source image into sub-bands, which are then selectively processed using an appropriate fusion algorithm. In frequency domain methods the image is first transferred into the frequency domain, i.e. the Fourier transform of the image is computed first. All fusion operations are performed on the Fourier transforms of the images, and then the inverse Fourier transform is performed to get the resultant image.
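As an illustration of this pipeline, the sketch below (Python/NumPy here, rather than the MATLAB used for the project itself; the max-magnitude coefficient rule is only an assumed example of a fusion rule) computes the Fourier transforms, fuses the coefficients, and inverts the result:

```python
import numpy as np

def frequency_domain_fuse(img_a, img_b):
    """Fuse two equal-sized grayscale images in the frequency domain."""
    # 1. Transfer both images into the frequency domain (Fourier transform).
    fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    # 2. Apply a fusion rule to the coefficients: here, keep the
    #    coefficient with the larger magnitude at each frequency.
    fused = np.where(np.abs(fa) >= np.abs(fb), fa, fb)
    # 3. The inverse Fourier transform yields the fused spatial-domain image.
    return np.real(np.fft.ifft2(fused))

a = np.random.rand(64, 64)
b = np.random.rand(64, 64)
print(frequency_domain_fuse(a, b).shape)  # (64, 64)
```

Wavelet, curvelet or contourlet based methods follow the same transform / fuse / inverse-transform pattern, only with a different decomposition in step 1.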

So far we have the wavelet, curvelet and contourlet transforms, but the image fused using any one of these transforms alone is not of good quality. The wavelet transform provides a multi-resolution fused image but fails to capture curved edge information. This can be overcome by the curvelet transform, but it has its own limitations and does not provide multi-resolution analysis as good as the wavelet. For that reason, our idea is a hybrid fusion method for medical image fusion based on combinations of the wavelet, curvelet and contourlet transforms.

For each pair of source images it is proposed to apply combinations of the wavelet, curvelet and contourlet transforms, taking two transforms at a time. For the fused images obtained in each case, performance metrics such as entropy, peak signal-to-noise ratio (PSNR) and mean square error (MSE) are computed, so as to identify the combination of transforms that yields the most informative fused image and therefore the better medical diagnosis. The proposed method will be simulated using the MATLAB tool.
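These three metrics have standard definitions; the sketch below is a minimal Python/NumPy version (the project itself uses MATLAB), assuming 8-bit images with a peak value of 255 for PSNR and a 256-bin histogram for entropy:

```python
import numpy as np

def mse(ref, fused):
    """Mean square error between a reference image and the fused image."""
    return np.mean((ref.astype(float) - fused.astype(float)) ** 2)

def psnr(ref, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    m = mse(ref, fused)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def entropy(img, bins=256):
    """Shannon entropy (bits/pixel) of an 8-bit image's histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins (0*log 0 = 0)
    return -np.sum(p * np.log2(p))
```

Higher entropy and PSNR, and lower MSE, indicate a more informative and less distorted fused image.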

The rest of this report is organized as follows. Chapter 2 deals with the types of image fusion, the levels of image fusion and fusion techniques. Chapter 3 deals with the existing method, the wavelet-curvelet image fusion technique, and its limitations. Chapter 4 deals with our proposed methods, the wavelet-contourlet and curvelet-contourlet image fusion techniques. Chapter 5 deals with results and analysis. Chapter 6 presents the conclusion and future scope of our project.


CHAPTER 2

LITERATURE SURVEY

Based on the input data of the fusion process and also based on the purpose of

fusion, fusion can be classified into the following types:


Fusion of images coming from different sensors (CT, MRI, visible, infrared, ultraviolet, etc.) is called multi-modal image fusion. It is used to decrease the amount of data and to emphasize band-specific information. In our project we focus on this type of image fusion.

For a physician to analyze the condition of a patient, he needs to study different images like MRI, CT, PET and SPECT, which is a time-consuming process. So our idea is to fuse all these images into a single image to provide a better diagnosis. The fusion of NMR and PET is considered, as shown in Figure 2.1.

Because a camera can be focused at only one distance at a time, subjects at other distances are not sharply focused. A possible way to solve this problem is image fusion: one can acquire a series of pictures with different focus settings and fuse them to produce a single image with extended depth of field. The fusion of multifocal images is shown in Figure 2.2.


(a)Near focused image (b) Far focused image (c) Fused image

Multi-view image fusion is defined as the fusion of images from the same modality, taken at the same time but from different viewpoints. A non-blind, shift-invariant image processing technique that fuses multi-view three-dimensional image data sets into a single, high-quality three-dimensional image is presented in Figure 2.3.

Figure 2.3: Detection results of the motion-based tracker of the first run of the subject

“Alba”, for all camera views

Multi-temporal image fusion combines images of the same scene taken at different times. It is effective for detecting changes between them, or for synthesizing realistic images of objects which were not photographed at a desired time. It is explained clearly in Figure 2.4.


The levels of image fusion are shown in Figure 2.5, which illustrates the various categories of fusion techniques implemented at the appropriate level of abstraction:

1) Pixel level

2) Feature level

3) Decision level


Pixel level fusion generates a fused image in which the information associated with each pixel is selected from the corresponding set of pixels in the source images; this is also called pixel level image fusion.

Feature level fusion operates on features extracted from the source images, for example by an object segmentation routine employed on only one of the input sensors (denoted by sensor A). The general block diagram of feature based fusion is shown in Figure 2.6.

The object segmentation routine is used only to bootstrap the feature selection

process, and hence any method that provides even rough, incomplete object

segmentation can be employed at this stage. Feature fusion techniques are used to

increase the accuracy of the feature measurement. Data fusion techniques at the

feature level rely on feature attribute combination techniques such as Kalman

filtering.


At the decision level, information is combined at the highest level of abstraction. A common type of symbol level fusion is decision fusion: symbol level or decision level fusion is used to increase the probability of a symbol representing a decision. Data fusion techniques at the symbol level rely on logical and statistical inference techniques such as Bayesian analysis, Dempster-Shafer evidential reasoning, and fuzzy set theory.

Techniques are required to effectively fuse symbolic data from multiple sensors for the purpose of identification. The general block diagram of decision level fusion is given in Figure 2.7. Fusion can be difficult when the sensors provide complementary information or different levels of information. The various parameters of the levels of fusion (pixel level, feature level and decision level) are listed in tabular form in Table 2.1 below.


Table 2.1: Comparison of the levels of fusion

Parameter                  Pixel level   Feature level   Decision level
Information loss           Minimum       Medium          Maximum
Dependence on the sensor   Maximum       Medium          Minimum
Immunity                   The worst     Medium          The best
Detection performance      The best      Medium          The worst

Image registration is one of the pre-processing techniques; it aligns the data sets in an image using a feature based algorithm. Before performing fusion, the images have to go through a pre-processing stage.

The system level considerations required to implement image fusion are shown in Figure 2.8. They comprise the following stages:

Image registration

Image pre-processing

Image post-processing


Without proper registration it is not possible to obtain good fusion results. Initially the acquired image is sent to pre-processing; image registration is one of the pre-processing techniques, which aligns the data sets in an image using a feature based algorithm.

For example, if the sizes of the images vary, then before fusion the images need to be resized so that both are of the same size. This is done by interpolating the smaller image by row and column duplication.
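Row and column duplication is simply nearest-neighbour upsampling; a minimal NumPy sketch (assuming, for illustration, that the target size is an integer multiple of the source size) is:

```python
import numpy as np

def resize_by_duplication(img, target_shape):
    """Enlarge a smaller image to target_shape by duplicating its rows and
    columns (nearest-neighbour interpolation), so both inputs match in size
    before fusion. Assumes target dimensions are integer multiples."""
    rows = np.repeat(img, target_shape[0] // img.shape[0], axis=0)
    return np.repeat(rows, target_shape[1] // img.shape[1], axis=1)

small = np.array([[1, 2],
                  [3, 4]])
# Each pixel becomes a 2x2 block: [[1 1 2 2], [1 1 2 2], [3 3 4 4], [3 3 4 4]]
print(resize_by_duplication(small, (4, 4)))
```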

The post-processing stage depends on the type of display, the fusion system being used, and the preferences of the human operator.


The quality of alignment of the input images can significantly affect the fusion results. Image registration can also be called image alignment: the input images are aligned as perfectly as possible in order to produce the best fusion results. If the input image datasets are not aligned to each other, it is impossible to obtain good fusion results even when the fusion framework, scheme and algorithm are optimal. Therefore, it is necessary to align or register the input images as accurately as possible prior to the main fusion process.

The general requirement of an image fusion process is to preserve all valid and useful information from the source images while not introducing any distortion into the resultant fused image. Various methods have been developed to perform image fusion, and they can be divided into two types: spatial domain methods and frequency domain methods.

Image fusion is applied in medical imaging, microscopic imaging, remote sensing, computer vision and robotics. Several approaches to image fusion can be distinguished, depending on whether the images are fused in the spatial domain or the spectral domain; the actual fusion process can take place at different levels of information representation. These approaches can be divided into two types:

1) Spatial domain fusion method.

2) Transform domain fusion method.


Spatial domain methods work by combining the pixel values of two or more images in a linear or nonlinear way. The simplest form is the averaging method, in which the resultant image is obtained by averaging every pair of corresponding pixels of the input images.

In frequency domain methods, the input images are first decomposed into multi-scale coefficients. Various fusion rules are used in the selection or manipulation of these coefficients, and the fused image is synthesized via the inverse transform. The fusion techniques are classified as given in Figure 2.9.

Spatial domain fusion methods directly deal with the image pixels, manipulating the pixel values to achieve the desired results. The main spatial domain methods are:


1) Averaging

2) Select maximum.

3) Weighted average method.

4) Intensity-hue-saturation (IHS) transforms.

5) Brovey transform.

6) Principal component analysis (PCA).

1) Averaging: The resultant fused image is obtained by averaging every pair of corresponding pixels in the input images.

Advantages:

1. It is one of the simplest methods, easy to understand and implement.

2. Averaging works well when the images to be fused come from the same type of sensor and contain additive noise.

3. The method proves good for particular cases in which the input images have an overall high brightness and high contrast.

Disadvantages:

1. With this method some noise is easily introduced into the fused image, which consequently reduces the resultant image quality.


2) Select maximum: The greater the pixel value, the more in focus the image. This algorithm therefore chooses the in-focus regions from each input image by selecting the greatest value for each pixel, resulting in a highly focused output. The value of the pixel p(i, j) of each image is compared with the others, and the greatest pixel value is assigned to the corresponding output pixel:

p(i, j) = max{ A(i, j), B(i, j) }    (2.2)

Advantages:

1) It results in a highly focused output image compared to the averaging method.

Disadvantages:

1) This pixel level method is affected by a blurring effect which directly affects the contrast of the image.

3) Weighted Average Method: In this method the resultant fused image is obtained

by taking the weighted average intensity of corresponding pixels from both the input

images.

p(i, j) = w·A(i, j) + (1 − w)·B(i, j),  for i = 0, ..., m and j = 0, ..., n    (2.3)
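The simple spatial rules described above (averaging, select maximum, weighted averaging) each reduce to one line; a minimal NumPy sketch follows, with the weight w chosen arbitrarily for illustration:

```python
import numpy as np

def fuse_average(a, b):
    # Average every pair of corresponding pixels.
    return (a + b) / 2.0

def fuse_select_max(a, b):
    # Keep the greater (more in-focus) pixel value at each position.
    return np.maximum(a, b)

def fuse_weighted(a, b, w=0.6):
    # Weighted average: p = w*A + (1 - w)*B.
    return w * a + (1.0 - w) * b

A = np.array([[10.0, 200.0], [50.0, 0.0]])
B = np.array([[30.0, 100.0], [40.0, 90.0]])
print(fuse_average(A, B))     # [[ 20. 150.] [ 45.  45.]]
print(fuse_select_max(A, B))  # [[ 30. 200.] [ 50.  90.]]
```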

4) Intensity-Hue-Saturation (IHS) transform: Methods based on the IHS transform are probably the most popular approaches used for enhancing the spatial resolution of multi-sensor images. The IHS method is capable of quickly merging massive volumes of data: it transforms the colour space from Red (R), Green (G), Blue (B) to Hue (H), Saturation (S) and Intensity (I).

The IHS colour transformation effectively separates spatial (I) and spectral (H,

S) information from a standard RGB image. It relates to the human colour perception

parameters. The mathematical context is expressed by Eq. 2.4. I relates to the

intensity, while ‘v1’ and ‘v2’ represent intermediate variables which are needed in the

transformation. H and S stand for Hue and Saturation.

[ I  ]   [ 1/√3    1/√3    1/√3 ] [ R ]
[ v1 ] = [ 1/√6    1/√6   −2/√6 ] [ G ]
[ v2 ]   [ 1/√2   −1/√2     0   ] [ B ]

H = tan⁻¹(v2 / v1)

S = √(v1² + v2²)    (2.4)

There are two ways of applying the IHS technique in image fusion: direct and

substitutional. The first refers to the transformation of three image channels assigned

to I, H and S. The second transforms three channels of the data set representing RGB into the IHS colour space, which separates the colour aspects into average brightness (intensity), dominant wavelength contribution (hue) and purity (saturation). Both the hue and the saturation in this case are related to the surface reflectivity or composition. The schematic diagram of IHS is shown in Figure 2.10. Then, one of the

components is replaced by a fourth image channel which is to be integrated. In many

published studies the channel that replaced one of the IHS components is contrast

stretched to match the latter. A reverse transformation from IHS to RGB as presented

in Eq. 2.5 converts the data into its original image space to obtain the fused image.

The IHS technique has become a standard procedure in image analysis. It serves

colour enhancement of highly correlated data, feature enhancement, the improvement

of spatial resolution and the fusion of disparate data sets.


[ R ]   [ 1/√3    1/√6    1/√2 ] [ I  ]
[ G ] = [ 1/√3    1/√6   −1/√2 ] [ v1 ]
[ B ]   [ 1/√3   −2/√6     0   ] [ v2 ]    (2.5)

The use of IHS technique in image fusion is manifold, but based on one

principle: the replacement of one of the three components (I, H or S) of one data set

with another image. Most commonly the intensity channel is substituted. Replacing

the intensity (sum of the bands) by a higher spatial resolution value and reversing the

IHS transformation leads to composite bands. These are linear combinations of the

original (re-sampled) multispectral bands and the higher resolution panchromatic

band.

A variation of the IHS fusion method applies a stretch to the hue saturation

components before they are combined and transformed back to RGB. This is called

colour contrast stretching. The IHS transformation can be performed either in one or

in two steps. The two step approach includes the possibility of contrast stretching the

individual I, H and S channels. It has the advantage of resulting in colour enhanced

fused imagery. A closely related colour system to IHS is the HSV: hue, saturation and

value.

1) Perform image registration (IR) on the PAN and MS (multispectral) images, and resample the MS image.

2) Convert MS from RGB space into IHS space.

3) Match the histogram of PAN to the histogram of the I component.

4) Replace I component with PAN.

5) Convert the fused MS back to RGB space.
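The five steps above can be sketched as follows (Python/NumPy rather than MATLAB; for simplicity, step 3's histogram matching is approximated here by matching only the mean and standard deviation, which is an assumption rather than the project's exact procedure):

```python
import numpy as np

# Forward IHS transform matrix (Eq. 2.4); it is orthonormal, so the
# inverse (Eq. 2.5) is obtained directly by matrix inversion.
FWD = np.array([[1/np.sqrt(3),  1/np.sqrt(3),  1/np.sqrt(3)],
                [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)],
                [1/np.sqrt(2), -1/np.sqrt(2),  0.0]])
INV = np.linalg.inv(FWD)

def ihs_fuse(ms_rgb, pan):
    """IHS fusion: ms_rgb is an (H, W, 3) registered/resampled MS image,
    pan is the (H, W) panchromatic image."""
    iv = ms_rgb @ FWD.T                 # Step 2: RGB -> (I, v1, v2)
    intensity = iv[..., 0]
    # Step 3 (simplified): match PAN statistics to the I component.
    pan_m = (pan - pan.mean()) / (pan.std() + 1e-12)
    pan_m = pan_m * intensity.std() + intensity.mean()
    iv[..., 0] = pan_m                  # Step 4: replace I with matched PAN
    return iv @ INV.T                   # Step 5: back to RGB

fused = ihs_fuse(np.random.rand(32, 32, 3), np.random.rand(32, 32))
print(fused.shape)  # (32, 32, 3)
```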


Advantages:

1) It is a simple method to merge the image attributes.

2) It provides a better visual effect.

3) It gives the best result for the fusion of remote sensing images.

Disadvantages:

1) It suffers from artefacts and noise, which tend toward higher contrast.

2) The major limitation is that only three bands are involved.

5) Brovey Transform: The Brovey transform is based on the chromaticity transform and the concept of intensity modulation. It is a simple method to merge data from different sensors: it preserves the relative spectral contribution of each pixel but replaces its overall brightness with that of the high spatial resolution image. It is a combination of arithmetic operations that normalizes the spectral bands before they are multiplied with the panchromatic image. It retains the corresponding spectral feature of each pixel and transforms all the luminance information into a high-resolution multi-sensor image. The formula used for the Brovey transform can be described as follows.
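The Brovey formula itself is not reproduced in the text above; the standard form, given here as general background rather than as the exact variant the source intended, scales each multispectral band by the ratio of the panchromatic image to the sum of the bands:

```python
import numpy as np

def brovey_fuse(ms_rgb, pan, eps=1e-12):
    """Standard Brovey transform: band_fused = band / (R + G + B) * PAN.
    ms_rgb: (H, W, 3) multispectral image; pan: (H, W) panchromatic image."""
    total = ms_rgb.sum(axis=2, keepdims=True) + eps  # eps avoids divide-by-zero
    return ms_rgb / total * pan[..., None]

out = brovey_fuse(np.random.rand(16, 16, 3), np.random.rand(16, 16))
print(out.shape)  # (16, 16, 3)
```

Note that the band ratios (the relative spectral combination) are preserved, while the per-pixel brightness comes entirely from the PAN image.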


Spatial domain methods provide high spatial resolution and are easy to perform, but they suffer from image blurring and their outputs are less informative; spatial distortion becomes a negative factor.

Advantages:

1) It increases the contrast in the low and high ends of the image histogram.

2) It is a simple method to merge data from different sensors.

3) The method is simple and fast.

4) It provides a superior visual, high-resolution multispectral image.

5) It is very useful for visual interpretation.

Disadvantages:

1) It should not be used if preserving the original scene radiometry is important.

2) It ignores the requirement of high quality synthesis of spectral information and produces spectral distortion.

6) Principal component analysis (PCA): PCA is a general statistical technique that transforms multivariate data with correlated variables into data with uncorrelated variables called principal components. These new variables are obtained as linear combinations of the original variables. PCA has been widely used in image encoding, image data compression, image enhancement and image fusion. In the fusion process, the PCA method generates uncorrelated images (PC1, PC2, ..., PCn, where n is the number of input multispectral bands).


PCA based fusion is very suitable for merging MS and PAN images. Compared to IHS fusion, PCA fusion has the advantage that it does not have the three-band limitation and can be applied to any number of bands at a time, as shown in Figure 2.11.

The PCA is useful for image encoding, image data compression, image

enhancement, digital change detection, multi-temporal dimensionality and image

fusion. It is a statistical technique that transforms a multivariate data set of inter-

correlated variables into a data set of new un-correlated linear combinations of the

original variables. It generates a new set of axes which are orthogonal. The approach

for the computation of the principal components (PCs) comprises the calculation of:

1) The covariance (or correlation) matrix

2) Eigenvalues and eigenvectors

3) The PCs

An inverse PCA transforms the combined data back to the original image

space. The use of the correlation matrix implies a scaling of the axes so that the

features receive a unit variance. It prevents certain features from dominating the

image because of their large digital numbers. The signal-to-noise ratio (SNR) is significantly improved by applying the standardized PCA. Better results are obtained if

the statistics are derived from the whole study area rather than from a subset area.

Two types of PCA can be performed: selective or standard. The latter uses all

available bands of the input image and the selective PCA uses only a selection of

bands which are chosen based on a priori knowledge or application purposes.

Two ways of applying PCA in fusion can be distinguished:

1) PCA of the multispectral bands, substituting the first principal component with a different image (Principal Component Substitution, PCS).

2) PCA of all multi-image data channels.

The first version follows the idea of increasing the spatial resolution of a

multichannel image by introducing an image with a higher resolution. The channel

which will replace PC1 is stretched to the variance and average of PC1. The higher

resolution image replaces PC1 since it contains the information which is common to

all bands while the spectral information is unique for each band.

The second version combines all the data in one image: the image channels of the different sensors are combined into one image file and a PCA is calculated from all the channels. The flow diagram is shown in Figure 2.12.

1. Perform image registration on the PAN and MS images, and resample the MS image.

2. Convert the MS bands into PC1, PC2, PC3, ..., by the PCA transform.

3. Match the histogram of PAN to the histogram of PC1.

4. Replace PC1 with PAN.

5. Apply the inverse PCA transform to convert the fused data back to the original band space and obtain the fused image.
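The PCS steps above can be sketched as follows (Python/NumPy rather than MATLAB; histogram matching is again approximated by mean/std matching, and an eigen-decomposition of the covariance matrix implements the PCA). The example uses four bands to illustrate that, unlike IHS, PCA is not limited to three:

```python
import numpy as np

def pca_fuse(ms, pan):
    """Principal Component Substitution: ms is an (H, W, N) registered MS
    image with any number of bands N, pan is the (H, W) PAN image."""
    h, w, n = ms.shape
    x = ms.reshape(-1, n).astype(float)
    mean = x.mean(axis=0)
    xc = x - mean
    # PCA: eigen-decomposition of the covariance matrix of the bands.
    vals, vecs = np.linalg.eigh(np.cov(xc, rowvar=False))
    order = np.argsort(vals)[::-1]      # sort PCs by decreasing variance
    vecs = vecs[:, order]
    pcs = xc @ vecs                     # project pixels onto the PCs
    # Match PAN statistics to PC1, then substitute it for PC1.
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p
    # Inverse PCA transform back to the original band space.
    return (pcs @ vecs.T + mean).reshape(h, w, n)

fused = pca_fuse(np.random.rand(32, 32, 4), np.random.rand(32, 32))
print(fused.shape)  # (32, 32, 4)
```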


PCA is popular because it is a simple, non-parametric method of extracting relevant information from confusing data sets. This technique is applied to the multispectral bands.

Advantages:

1) This method is very simple to use, and the images fused by it have high spatial quality.

2) It prevents certain features from dominating the image because of their large digital numbers.


Disadvantages:

1) This method is highly criticized because of the distortion of the spectral characteristics between the fused images and the original low resolution images.

In high pass filtering based fusion, the high frequency details are injected into an up-sampled version of the MS images. The disadvantage of spatial domain approaches is that they produce spatial distortion in the fused image: they do not give directional information and also lead to spectral distortion, while arithmetic combination loses original details as a result of the low contrast of the fused image. This becomes a negative factor when we go on to further processing of the fused image, such as a classification problem.

These drawbacks have led to transform domain approaches to image fusion. Multi-resolution analysis has become a very useful tool for analyzing remote sensing images, and the discrete wavelet transform in particular has become a very useful tool for fusion. Other fusion methods also exist, such as pyramid based and curvelet transform based methods. These methods show better performance in the spatial and spectral quality of the fused image than the spatial methods of fusion.


Transform domain fusion methods can be categorized into two groups:

1) Multi-scale decomposition

2) Multi-scale geometric analysis

Wavelet and pyramid methods come under multi-scale decomposition, while the curvelet and contourlet methods come under multi-scale geometric analysis. Our proposed idea is based on transform domain methods. The general block diagram of spectral domain based image fusion is shown in Figure 2.13 below.


For example, if the sizes of the images vary, then before fusion the images need to be resized so that both are of the same size; this is done by interpolating the smaller image by row and column duplication. In the spectral domain approach, the original image is first transferred into the frequency domain: it is transformed using one of our proposed techniques, an appropriate fusion rule is applied to the transform coefficients, and finally the inverse transformation is performed to represent the fused image in the spatial domain.

The transform domain fusion methods considered are:

(a) High pass filtering methods.

(b) Pyramid methods: (i) Gaussian pyramid (ii) Laplacian pyramid (iii) Morphological pyramid (iv) Gradient pyramid (v) Ratio of low pass pyramid.

(c) Wavelet transforms: (i) Discrete wavelet transform (DWT) (ii) Stationary wavelet transform (iii) Multi-wavelet transform.

Multi-geometric image fusion:

(a) Curvelet transform.

(b) Contourlet transform.

(a) High Pass Filtering Methods: High pass filtering methods are used for image sharpening in the frequency domain. Because edges and other abrupt changes in intensity are associated with high frequency components, image sharpening can be achieved in the frequency domain by high pass filtering, which attenuates low frequency components without disturbing the high frequency information in the Fourier transform. Popular frequency filtering methods for image sharpening include the High-Pass Filter Additive (HPFA) and High Frequency Modulation (HFM).

(b) Pyramid Methods: The pyramid offers a useful image representation for a number of tasks. It is efficient to compute: pyramid filtering is faster than the equivalent filtering done with a fast Fourier transform. The information is also available in a format that is convenient to use, since the nodes in each level represent information that is localized in both space and spatial frequency.

The pyramid transforms considered in the project are the following:

1) Gradient pyramid

2) Laplacian pyramid

3) Ratio pyramid

4) Morphological pyramid

Pyramid based fusion consists of three phases:

1) Decomposition

2) Formation of the initial image for recomposition

3) Recomposition

Decomposition is performed at each level of the fusion. The depth of fusion, or number of levels of fusion, is decided in advance based on the size of the input images. The recomposition process, in turn, forms the finally fused image level by level, by merging the pyramids formed at each level with the decimated input images.

Decomposition phase basically consists of the following steps. These steps are

performed l number of times, l being the number of levels to which the fusion will be

performed.

Low-pass filtering: Each pyramidal method has a pre-defined filter with which the input images are convolved/filtered.

Formation of the pyramid for the level from the filtered/convolved input images, using Burt's method or Li's method.

The input images are decimated to half their size; these act as the input image matrices for the next level of decomposition.
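The three decomposition steps above can be sketched as follows; using Burt's 5-tap kernel [1, 4, 6, 4, 1]/16 as the pre-defined filter and reflect padding are implementation assumptions made for this sketch:

```python
import numpy as np

BURT = np.array([1., 4., 6., 4., 1.]) / 16.0   # Burt's 5-tap generating kernel

def lowpass(img):
    """Separable convolution with the Burt kernel (reflect-padded)."""
    rows, cols = img.shape
    pad = np.pad(img, 2, mode="reflect")
    tmp = np.zeros_like(img)
    for i, w in enumerate(BURT):               # filter along the vertical axis
        tmp += w * pad[i:i + rows, 2:2 + cols]
    pad = np.pad(tmp, 2, mode="reflect")
    out = np.zeros_like(img)
    for j, w in enumerate(BURT):               # filter along the horizontal axis
        out += w * pad[2:2 + rows, j:j + cols]
    return out

def decompose_level(img):
    """One decomposition step: low-pass filter, then decimate to half size."""
    smooth = lowpass(img)
    return smooth[::2, ::2]                    # decimated input for the next level
```

Calling `decompose_level` l times produces the l decimated inputs the text describes; the pyramid level itself is formed from the filtered image before decimation.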

Merging the input images is performed after the decomposition process. This

resultant image matrix would act as the initial input to the recomposition process.

The recomposition phase forms the fused image from the pyramids developed at each level of decomposition. The various steps involved in the recomposition phase are discussed below. These steps are performed l number of times, as in the decomposition process, as shown in Figure 2.14.

The undecimated matrix is convolved/filtered with the transpose of the filter vector used in the decomposition process.

The filtered matrix is then merged, by pixel intensity value addition, with the pyramid formed at the respective level of decomposition.

The newly formed image matrix acts as the input to the next level of recomposition.

The merged image at the final level of recomposition is the resultant fused image. The flow of pyramid-based image fusion can be explained by the following example.

The repeated filtering is equivalent to convolving the image with a set of Gaussian-like weighting functions, one for each of the successive pyramid levels. The convolution acts as a low-pass filter, with the band limit reduced correspondingly by one octave with each level. Because of this resemblance to the Gaussian density function we refer to the pyramid of low-pass images as the "Gaussian pyramid."

The Laplacian pyramid is derived from the Gaussian pyramid representation. The set of difference images between sequential Gaussian pyramid levels, along with the final (most down-sampled) level of the Gaussian pyramid, is known as the Laplacian pyramid of the image. The difference levels are commonly referred to as the detail levels, and the additional level as the approximation level. The Laplacian pyramid transform is specifically designed for capturing image details over multiple scales. Since the Laplacian pyramid represents the edge detail of the image at every level, comparing the corresponding Laplacian pyramid levels of two images makes it possible to obtain a fused image that merges their respective salient detail and retains as much information as possible. The source image is decomposed into a series of resolution spaces, and the choice of integration factor and fusion rule directly affects the final quality of the fused image.
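A toy sketch of Laplacian pyramid analysis and exact reconstruction; block-mean downsampling and nearest-neighbour expansion stand in for the Gaussian filtering purely for brevity, so this is an illustration of the difference-pyramid structure, not the exact Burt-Adelson filters:

```python
import numpy as np

def down(img):                       # 2x2 block mean: simple low-pass + decimate
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up(img):                         # nearest-neighbour expansion to double size
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    """Detail images G_k - up(G_{k+1}), plus the coarsest approximation level."""
    pyr, g = [], img
    for _ in range(levels):
        g_next = down(g)
        pyr.append(g - up(g_next))   # detail (edge) level
        g = g_next
    pyr.append(g)                    # approximation level
    return pyr

def reconstruct(pyr):
    """Invert each decomposition step: G_k = L_k + up(G_{k+1})."""
    g = pyr[-1]
    for detail in reversed(pyr[:-1]):
        g = detail + up(g)
    return g
```

Because each detail level stores exactly what the down/up round trip loses, reconstruction recovers the input exactly; a fusion rule would operate on the corresponding detail levels of two such pyramids before reconstruction.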

Morphological operations are used to improve the spatial arrangement of the pixels, or to distort them to extract useful features from the subset of spatially localized pixel features. Filters designed with morphological operators have been successfully applied in the diagnosis of brain conditions, to analyze and identify tumours. Morphological operators are used for fusing images from multiple modalities such as CT and MR, with a varied degree of success. The success of these operators depends on the size and design of the structuring element that controls the opening and closing operations in morphological filtering. Among many, the major operators used for fusion are averaging, morphology towers, K-L transforms, and morphology pyramids.

The pyramid is produced by low-pass filtering the image and then sampling to generate the next lower resolution level of the hierarchy. The basis for a morphological pyramid requires a morphological sampling theorem. The overall fusion strategy is shown in Figure 2.15. According to this strategy, a morphological pyramid is first produced for each of the input images. Then a morphological difference pyramid is constructed for each of the above pyramids, by taking the differences between the morphological images residing at successive levels in the original pyramid. An intermediate pyramid is constructed by combining information from the two difference pyramids at each level. Finally, reconstruction of the intermediate pyramid, using appropriate morphological operations, produces the required fused image.

The process which generates each image from its predecessor is called a Pyramid Construction (PC) operation, which is known as a reducing operation, since both resolution and sampling density are decreased. After the PC operation is applied, the new morphological image is sampled to generate the next level of the pyramid. This process is repeated to construct two pyramids, one for the MR data and one for the CT data. As well as being flat, the structuring element, K, is also symmetric, and it is used at each level during the pyramid construction process. St is the sampling lattice corresponding to level t of the pyramid.

The wavelet transform can also be applied to fuse image data, following the concept of multi-resolution analysis (MRA). Another application is the automatic geometric registration of images, one of the pre-requisites to pixel-based image fusion. The wavelet transform creates a summation of elementary functions (wavelets) from arbitrary functions of finite energy. The weights assigned to the wavelets are the wavelet coefficients, which play an important role in determining structure characteristics at a certain scale in a certain location. The interpretation of structures or image details depends on the image scale, which is hierarchically compiled in a pyramid produced during the MRA.

The wavelet coefficients describe the differences between successive images provided by the MRA. Once the wavelet coefficients are determined for the two images of different spatial resolution, it is possible to create a synthetic image from the lower resolution image at the higher spatial resolution. This image contains the preserved spectral information at the higher resolution, hence showing more spatial detail.

The wavelet transform is an image decomposition tool that provides a variety of channels representing the image features in different frequency sub-bands at multiple scales. It is a well-known technique for analysing signals.

The 2-D Discrete Wavelet Transform (DWT) converts the image from the spatial domain to the frequency domain. The image is divided by vertical and horizontal lines at the first order of the DWT, separating it into four parts: LL, LH, HL and HH.

First the samples are passed through a low-pass filter with impulse response g, resulting in a convolution of the two:

y[n] = (x ∗ g)[n] = ∑_{k=−∞}^{∞} x[k] g[n − k]        (2.6)

The signal is simultaneously decomposed using a high-pass filter h, the two outputs giving the detail coefficients (from the high-pass filter) and the approximation coefficients (from the low-pass filter). It is important that the two filters are related to each other; together they are known as a quadrature mirror filter pair.
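The pairing can be illustrated for the Haar filters using the common alternating-flip construction h[n] = (−1)^n g[L−1−n]; treating this particular construction as the intended relation is an assumption made for the sketch:

```python
import numpy as np

def qmf(g):
    """High-pass filter paired with low-pass g: h[n] = (-1)^n * g[L-1-n]."""
    L = len(g)
    return np.array([(-1) ** n * g[L - 1 - n] for n in range(L)])

g = np.array([1.0, 1.0]) / np.sqrt(2.0)   # Haar low-pass filter
h = qmf(g)                                 # paired Haar high-pass filter
```

The resulting pair is orthogonal (their dot product is zero), which is what allows the analysis filter bank to be inverted exactly at synthesis.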

However, since half the frequencies of the signal have now been removed, half the samples can be discarded according to Nyquist's rule. The filter outputs are then sub-sampled by 2 (note that in Mallat's and the common notation the convention is the opposite: g denotes the high-pass and h the low-pass filter):

y_low[n] = ∑_{k=−∞}^{∞} x[k] g[2n − k]        (2.7)

y_high[n] = ∑_{k=−∞}^{∞} x[k] h[2n − k]        (2.8)

This decomposition has halved the time resolution, since only half of each filter output characterizes the signal. However, each output has half the frequency band of the input, so the frequency resolution has been doubled. The 2-D multi-resolution wavelet decomposition is shown in Figure 2.16.
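Equations (2.7) and (2.8) can be sketched directly with NumPy's convolve; note that the first and last samples of a linear convolution are boundary effects, so the clean values sit in the interior:

```python
import numpy as np

def analyse(x, g, h):
    """One analysis step: low[n] = sum_k x[k] g[2n-k]  (eq. 2.7) and
    high[n] = sum_k x[k] h[2n-k]  (eq. 2.8): full convolution, keep
    every second (even-indexed) sample."""
    return np.convolve(x, g)[::2], np.convolve(x, h)[::2]

# Haar pair, following the text's convention: g low-pass, h high-pass.
g = np.array([1.0, 1.0]) / np.sqrt(2.0)
h = np.array([1.0, -1.0]) / np.sqrt(2.0)
low, high = analyse(np.array([1.0, 1.0, 1.0, 1.0]), g, h)
```

For a constant signal the interior detail coefficients are zero and the interior approximation coefficients carry the (scaled) signal, showing the halved sample rate described above.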

However, computing the full convolution y_low = (x ∗ g) and then discarding half the samples by down-sampling would waste computation time. The lifting scheme is an optimization where these two computations are interleaved.

To increase the frequency resolution further, the approximation coefficients are decomposed with high- and low-pass filters and then down-sampled. The three-level filter bank is shown in Figure 2.17.

At each level in the above diagram the signal is decomposed into low and high frequencies. Due to the decomposition process, the input signal length must be a multiple of 2^n, where n is the number of levels. The general framework for the DWT is shown in Figure 2.18.

DWT_p(i, j) = ∑ ∑ (DWT(i, j))²        (2.11)

Step 1: Implement the DWT on both input images to create the wavelet lower decomposition.

Step 3: Carry out the inverse discrete wavelet transform on the fused decomposed level to reconstruct the image; the reconstructed image is the fused image.

1) Curvelet transform.

2) Contourlet transform.

The curvelet transform provides a multi-scale object representation and is also a multi-resolution decomposition technique. The 2-D FFT is applied to the images to obtain the Fourier samples. The Fourier samples are wrapped around the origin. Finally the image is reconstructed by performing the inverse FFT.

Curvelets represent the image as superimposed functions of various lengths and widths. The curvelet transform, like the wavelet transform, is a multi-scale transform but, unlike wavelets, it contains directional elements. Curvelets are based on multi-scale ridgelets combined with band-pass filtering to separate the image into disjoint scales. The side length of the localizing windows is doubled at every other dyadic sub-band. The steps followed by the curvelet transform process are explained with the help of the flow diagram shown below in Figure 2.19. However, it gives limited directionality, i.e., 0°, 90°, 180° and 270°.

2) Contourlet-based Fusion:

The contourlet decomposition is controlled by the parameter n-levels given at the DFB stage, where n-levels is a one-dimensional vector. The parameter n-levels stores the DFB decomposition level for each level of the pyramid. If the DFB decomposition level parameter is 0, the DFB uses the wavelet to process the sub-image of the pyramid. If the parameter is lj, the decomposition level of the DFB is 2^lj, which means that the sub-image is divided into 2^lj directions. Corresponding to the vector parameter n-levels, the coefficient Y of the contourlet decomposition is a vector too. The length of Y is equal to length(n-levels) + 1. Y{1} is the low-frequency sub-image. Y{i} (i = 2, ..., Len) is the directional sub-image obtained by DFB decomposition, where i denotes the i-th level of pyramid decomposition.
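The n-levels bookkeeping above can be illustrated with a tiny helper; the function and label names are hypothetical, chosen only to mirror the counting rules in the text:

```python
def contourlet_layout(n_levels):
    """Sub-band bookkeeping for an n-levels vector: the coefficient vector Y
    has len(n_levels) + 1 entries, the first being the low-pass sub-image.
    A pyramid level with parameter lj > 0 is split by the DFB into 2**lj
    directional sub-bands; lj == 0 means plain wavelet processing."""
    layout = ["lowpass"]
    for lj in n_levels:
        layout.append(2 ** lj if lj > 0 else "wavelet")
    return layout
```

For example, n-levels = [0, 2, 3] yields four entries: the low-pass image, a wavelet-processed level, and levels with 4 and 8 directional sub-bands respectively.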

Contourlet-based fusion combines the coefficients of two or more source images using a certain fusion algorithm. Then, the inverse transform is performed on the combined coefficients, resulting in the fused image, as shown in Figure 2.20, where Image 1 and Image 2 denote the input images, CT represents the contourlet transform, and Image F is the final fused image.

In the next chapter, the existing hybrid method, the Wavelet-Curvelet transform, is presented.

CHAPTER 3

Every individual fusion technique has drawbacks at one point or the other. Therefore there exists the need to develop a method which takes into consideration the advantages of the various fusion rules. Thus hybrid image fusion is used. It processes the image with different fusion rules and then integrates the results together to obtain a single image. The results of the various fusion techniques are extracted and then fused again by implementing a hybrid method, yielding better quality results. A single method may not effectively remove the ringing artifacts and the noise in the source images. These inadequacies motivate fusion rules which follow a hybrid algorithm and improve the visual quality of the image to a great extent. Hybrid image fusion therefore leads to a minimum Mean Square Error value and a maximum Signal-to-Noise (S/N) Ratio value.

The existing method is a hybrid of two methods: wavelet-based image fusion and curvelet-based image fusion (a hybrid of wavelet and curvelet fusion rules). Curvelet-based image fusion efficiently deals with curved shapes, therefore its application in medical fields results in better fusion than the wavelet transform alone.

The wavelet transform works efficiently with multi-focus and multispectral images as compared to other fusion rules. It increases the frequency resolution of the image by repeatedly decomposing it into various bands until different frequencies and resolutions are obtained. Thus a hybrid of wavelet and curvelet leads to better results than the previously existing methods. The flow diagram of the existing method, which is the combination of the wavelet and curvelet transformations, is shown in Figure 3.1.

The flow diagram shows the procedure of combining image 1 and image 2 into a single set of fused wavelet coefficients. The bands obtained are then passed through the curvelet transform, which segments them into various additive components, each of which is a sub-band of the image. These bands are then passed through a tiling operation which divides each band into overlapping tiles.

A hybrid of wavelet and curvelet integrates various pixel-level rules in a single fused image. Pixel-based rules operate on individual pixels in the image but ignore some important details such as edges and boundaries. A wavelet-based rule alone may reduce the contrast in some images and cannot effectively remove the ringing effects and noise appearing in the source images. The curvelet method works well with edges, boundaries and curved portions of the images using ridgelet transforms.

In the hybrid method, the input images are first decomposed up to level N by passing them through a series of low- and high-pass filters. The low- and high-pass bands are then subjected to the curvelet transform by decomposing them further into small tiles, and are then fused using the wavelet transform and inverse wavelet transform to get full-size images. This takes into account the drawbacks of the wavelet transform and effectively removes them using the curvelet transform, so the visual quality of the image is improved. The wavelet transform of an image is taken up to level N until different resolutions are obtained, giving various frequency bands. Chapter 4 discusses the operation of the wavelet and curvelet transforms in detail and explains how the medical images are fused using these transformations.

Step 2: These images are subjected to pre-processing, which includes RGB to gray-scale conversion, and image alignment is ensured.

Step 3: The images obtained from step 2 are first decomposed using the Discrete Wavelet Transform (DWT).

Step 4: We get the fused image in the wavelet domain using the following fusion rules.

2) Fuse the detail coefficients of the source images using the maxima method.

Step 6: Apply the contourlet transform on the fused image obtained in step 5.

Step 8: To get the final hybrid fused image in the spatial domain, apply the inverse contourlet transform.

The existing method gives better fusion results compared to the spatial domain transformation techniques already discussed in chapter 2. It efficiently captures curved information and also provides multi-resolution, but it fails to provide directionality and anisotropy information, which is very important in medical diagnosis. To overcome this drawback, chapter 4 proposes two hybrid multi-modal medical image fusion methods using different combinations of the curvelet, wavelet, and contourlet transformations, in order to arrive at the combination which yields the best fused image and provides better diagnosis.

In the next chapter, the implementation of the proposed methods, the Wavelet-Contourlet and Curvelet-Contourlet transforms, is presented.

CHAPTER 4

PROPOSED METHODOLOGIES

The first proposed method is a hybrid of the Wavelet transform and the Contourlet transform, where the limitation of directionality in the wavelet transform is overcome by the contourlet transform.

Image registration is performed prior to the main fusion process in order to produce the best fusion results. The image registration process can also be called image alignment. If the datasets of input images are not aligned to each other, it is impossible to yield the best fusion results even though the fusion framework, scheme and algorithm are optimum.

After pre-processing, these source images are first decomposed using the Wavelet Transform (WT) in order to realize multi-scale sub-band decompositions with no redundancy. These sub-band coefficients are predominantly the low- and high-frequency sub-bands of the image. The obtained approximation and detail coefficients, after application of an appropriate fusion rule, are reconstructed using the inverse DWT. The entire process carried out in this stage serves to provide significant localization, leading to a better preservation of features in the fused image.

In the second stage, fusion is carried out in the Contourlet domain. The significance of such an approach is to overcome the limitation of directionality in wavelets (in stage 1). The Contourlet transform (CT) is applied in order to achieve angular decompositions.

After applying sub-band decomposition using the CT, a set of coefficients is obtained for both images. These frequency coefficients are fused together based on certain fusion algorithms and reconstructed using the inverse CT. The schematic diagram of our proposed methodology, hybrid image fusion using the wavelet-contourlet transform, is shown in Figure 4.1.

The wavelet transform works efficiently with multi-focus, multispectral images as compared to other fusion rules. It increases the frequency resolution of the image by repeatedly decomposing it into various bands until different frequencies and resolutions are obtained. Thus a hybrid of wavelet and contourlet leads to better results for medical diagnosis when compared with the existing methods, i.e. the individual results of the wavelet and contourlet transforms.

The DWT maps the wavelet transform to the digital world. Filter banks are used to approximate the behaviour of the continuous wavelet transform; the coefficients of these filters are computed using mathematical analysis. The general block diagram is shown below in Figure 4.2. The DWT is a multi-scale decomposition implemented with a two-channel filter bank. The wavelet transform is used to identify local features in an image, and also to decompose two-dimensional (2-D) signals, such as 2-D gray-scale images, for multi-resolution analysis. The filter banks decompose the image into two different components, i.e. high- and low-frequency.

When decomposition is carried out, the approximation and detail components can be separated. The 2-D Discrete Wavelet Transform (DWT) converts the image from the spatial domain to the transform domain. The image is divided by vertical and horizontal lines at the first order of the DWT, separating it into four parts: LL1, LH1, HL1 and HH1.

The wavelet transform separately filters and down-samples the 2-D data (image) in the vertical and horizontal directions (separable filter bank). The input (source) image I(x, y) is filtered by a low-pass filter L and a high-pass filter H in the horizontal direction and then down-sampled by a factor of two (keeping the alternate samples) to create the coefficient matrices IL(x, y) and IH(x, y).

The coefficient matrices IL(x, y) and IH(x, y) are both low-pass and high-pass filtered in the vertical direction and down-sampled by a factor of two to create the sub-bands (sub-images) ILL(x, y), ILH(x, y), IHL(x, y) and IHH(x, y). Wavelet decomposition can be implemented by the two-channel filter bank shown in Figure 4.3.
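The separable row/column filtering into the ILL, ILH, IHL and IHH sub-bands can be sketched with the Haar filters; the particular LH/HL naming convention below is an assumption, since texts differ on which index denotes the horizontal pass:

```python
import numpy as np

R = np.sqrt(2.0)

def haar_pair(img):
    """Filter + down-sample by two along the horizontal direction."""
    a = (img[:, 0::2] + img[:, 1::2]) / R   # low-pass (L)
    d = (img[:, 0::2] - img[:, 1::2]) / R   # high-pass (H)
    return a, d

def dwt2(img):
    """One 2-D DWT level: horizontal split into IL/IH, then vertical split
    of each (done by transposing), giving the four sub-bands."""
    il, ih = haar_pair(img)
    ill, ilh = haar_pair(il.T)
    ihl, ihh = haar_pair(ih.T)
    return ill.T, ilh.T, ihl.T, ihh.T        # LL, LH, HL, HH
```

Each sub-band is a quarter of the input size, and for a smooth (here, constant) image all the energy lands in LL, matching the description of LL as the smoothed, sub-sampled version of the original.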

The Discrete Wavelet Transform has the property that the spatial resolution is small in low-frequency bands but large in high-frequency bands. This is because the scaling function is treated as a low-pass filter and the mother wavelet as a high-pass filter in the DWT implementation. The wavelet transform decomposition and reconstruction take place column- and row-wise: first row-by-row decomposition is performed and then column-by-column, as shown in Figure 4.4.

The ILL(x, y) sub-band is the original image at the coarser resolution level, which can be considered as a smoothed and sub-sampled version of the original image. Most information of the source images is kept in the low-frequency sub-band. It usually contains the slowly varying grey-value information in an image, and is therefore called the approximation.

The ILH(x, y), IHL(x, y) and IHH(x, y) sub-bands contain the detail coefficients of an image; their large absolute values usually correspond to sharp intensity changes and preserve the salient information in the image.

(In Figure 4.5, the labels 1, 2, 3, ... denote the decomposition levels.)

There are different levels of decomposition, as shown in Figure 4.5. After one level of decomposition there are four frequency bands, as listed above. By recursively applying the same scheme to the LL sub-band, a multi-resolution decomposition of the desired level can be achieved.

The schematic diagram of wavelet-based image fusion is shown in Figure 4.6. In the wavelet image fusion scheme, the source images I1(x, y) and I2(x, y) are decomposed into approximation and detail coefficients at the required level using the DWT. The two registered images I1(X1, X2) and I2(X1, X2) are wavelet transformed, and the coefficients are combined using the fusion rule. The IDWT is then applied to the fused wavelet coefficients to obtain the fused image If(X1, X2), given by

If(X1, X2) = W⁻¹(φ(W(I1(X1, X2)), W(I2(X1, X2))))

where W⁻¹ and φ are the inverse discrete wavelet transform operator and the fusion operator, respectively. There are several wavelet fusion rules that can be used for the selection of the wavelet coefficients from the wavelet transforms of the images to be fused. The most frequently used rule is the maximum frequency rule, which selects the coefficients that have the maximum absolute values. The wavelet transform concentrates on representing the image at multiple scales and is appropriate for representing linear edges. The multi-level image fusion using the DWT is shown in Figure 4.6.

The fusion rule used in this work simply averages the approximation coefficients and picks, in each detail sub-band, the coefficient with the largest magnitude. An N-level decomposition finally has 3N+1 different frequency bands, which include 3N high-frequency bands and just one LL frequency band. This decomposition is carried out until the desired resolution is reached, which depends upon the ratio of spatial resolutions of the images.
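The fusion rule just described — average the approximation coefficients, keep the largest-magnitude detail coefficient — can be sketched as follows (the function name and the tie-breaking toward the first image are illustrative choices):

```python
import numpy as np

def fuse_bands(approx1, approx2, details1, details2):
    """Average the approximation sub-bands; for each pair of detail sub-bands
    keep, pixel by pixel, the coefficient with the larger absolute value
    (the maxima rule)."""
    approx_f = (approx1 + approx2) / 2.0
    details_f = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
                 for d1, d2 in zip(details1, details2)]
    return approx_f, details_f
```

Averaging preserves the common low-frequency content of the two modalities, while the maxima rule keeps whichever image has the stronger edge response at each location.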

Step 1: The images to be fused must be registered to assure that the corresponding

pixels are aligned.

Step 2: These images are decomposed into wavelet-transformed images based on the wavelet transformation. The transformed images with K-level decomposition finally have 3K+1 different frequency bands, which include one low-frequency portion (ILL) and 3K high-frequency portions (low-high, high-low, and high-high bands).

Step 3: The transform coefficients of the different portions or bands are combined with a certain fusion rule.

b) Fuse the detail coefficients of the source images using the maxima method.

Step 4: The fused image is obtained by performing the inverse wavelet transform on the combined transform coefficients from step 3.

Among the merits of the DWT: 2) it provides multi-scale decomposition. Among its limitations: 2) limited orientation (vertical, horizontal and diagonal).

The Contourlet transform provides multi-scale, directional image representations. It has the following properties:

1) Multi-resolution: The representation should allow images to be successively approximated, from coarse to fine resolutions.

2) Localization: The basic elements in the representation should be localized in both the spatial and the frequency domains.

3) Critical sampling: For some applications (e.g., compression), the representation should form a basis, or a frame with small redundancy.

4) Directionality: The representation should contain basis elements oriented at a variety of directions, many more than the few directions offered by separable wavelets.

5) Anisotropy: To capture smooth contours in images, the representation should contain basis elements using a variety of elongated shapes with different aspect ratios.

A. Need for the contourlet transform: Among these desiderata, the first three are successfully provided by separable wavelets, while the last two require new constructions. Moreover, a major challenge in capturing geometry and directionality in images comes from the discrete nature of the data. For this reason we construct a multi-resolution and multi-direction image expansion using non-separable filter banks.

The result is a multi-resolution, directional image expansion using contour segments, and it is thus named the contourlet transform. It is of interest to study the limit behaviour when such schemes are iterated over scale and/or direction, which has been analyzed in the connection between filter banks, their iteration, and the associated wavelet construction. The general block diagram is shown below in Figure 4.7. The contours of original images can be captured effectively with few coefficients by using the contourlet transform.

The contourlet transform is a double filter bank decomposition. It consists of two stages:

1) Laplacian Pyramid

2) Directional Filter Bank

The overall result is an image expansion using basic elements such as contour segments, hence the name contourlet. In particular, contourlets have elongated supports at various scales, directions and aspect ratios, which allows them to efficiently approximate a smooth contour at multiple resolutions.

4.3.1 Laplacian Pyramid: In the first stage the image is decomposed into four sub-images, and the point discontinuities in those images are captured. Figure 4.8 shows a general representation of the Laplacian pyramid, where the image is decomposed into four sub-images. It separates the low-frequency components and the high-frequency components. The LP decomposition at each level generates a down-sampled low-pass version of the original and the difference between the original and the prediction, resulting in a band-pass image, as shown in Figure 4.9.

The image is low-pass filtered and down-sampled to obtain a low-pass version of the original image (a blurred image); the obtained low-pass image is then up-sampled and subtracted from the original image to obtain the band-pass image (the high-frequency components). This band-pass image is applied to the directional filter bank to obtain directionality information. The decomposition process in the Laplacian pyramid is implemented as shown below in Figure 4.10.

In this way the Laplacian pyramid separates the low-frequency and high-frequency components. The obtained low-frequency (scaled) sub-band image is further decomposed to get the desired fused image.

4.3.2 Directional Filter Bank:

1) The high-frequency components are given to the directional filter bank, which links point discontinuities into linear structures.

2) The high-pass sub-band images are applied to the directional filter bank to further decompose the frequency spectrum using an n-level iterated tree-structured filter bank, as shown in Figure 4.11.

3) By doing this we capture smooth contours and edges at any orientation. Finally we combine the scaled information by scaled multiplication. Since the directional filter bank (DFB) was designed to capture the high frequencies (representing directionality) of the input image, the low-frequency content is poorly handled.

4) In fact, with this frequency partition, low frequencies would "leak" into several directional sub-bands, hence the DFB alone does not provide a sparse representation for images. This fact provides another reason to combine the DFB with a multi-scale decomposition, as shown in Figure 4.12, where the low frequencies of the input image are removed before applying the DFB.

The fusion framework used in the experiments is shown in Figure 4.13. First,

source images are decomposed into multi-scale and multi-directional components

using contourlet transform, and these components are fused together based on a

certain fusion scheme. Next, inverse contourlet transform is performed in order to

obtain a final fused image.

The source images are fused according to the fusion scheme and fusion rule that are

described as follows in Figure 4.14.

The source images are decomposed to obtain multi-scale and multi-directional frequency coefficients. For each decomposition level K, one approximation sub-band and 3K detail sub-bands are produced. In our experiments a decomposition level of 3 was used, since levels beyond 3 significantly degraded the fusion performance.

Once the source images are decomposed, high-frequency components are selected from the PAN source image and then injected into the detail sub-bands of the MS source image via the maximum frequency fusion rule, which compares and selects the frequency coefficient with the highest absolute value at each pixel.

Step 2: These images are subjected to pre-processing, which includes RGB to gray-scale conversion, and image alignment is ensured.

Step 3: The images obtained from step 2 are first decomposed using the Discrete Wavelet Transform (DWT).

Step 4: We get fused image in wavelet domain using following fusion rules.

2) Fuse detail coefficients of source image using Maxima method.

Step 6: Apply the contourlet transform to the fused image obtained in step-5.

Step 8: To get the final hybrid fused image in the spatial domain, we apply the inverse contourlet transform.
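The decompose-fuse-reconstruct flow of these steps can be sketched with a one-level 2-D Haar DWT standing in for the full DWT + contourlet machinery (a toy illustration: the averaging rule for approximation coefficients and all helper names are our assumptions, not the thesis code):

```python
def haar2(x):
    """One-level 2-D Haar DWT: returns (LL, LH, HL, HH) sub-bands."""
    h2, w2 = len(x) // 2, len(x[0]) // 2
    LL, LH, HL, HH = ([[0.0] * w2 for _ in range(h2)] for _ in range(4))
    for i in range(h2):
        for j in range(w2):
            a, b = x[2 * i][2 * j], x[2 * i][2 * j + 1]
            c, d = x[2 * i + 1][2 * j], x[2 * i + 1][2 * j + 1]
            LL[i][j] = (a + b + c + d) / 2   # approximation
            LH[i][j] = (a - b + c - d) / 2   # detail sub-bands
            HL[i][j] = (a + b - c - d) / 2
            HH[i][j] = (a - b - c + d) / 2
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Inverse of haar2: exact reconstruction."""
    h2, w2 = len(LL), len(LL[0])
    x = [[0.0] * (2 * w2) for _ in range(2 * h2)]
    for i in range(h2):
        for j in range(w2):
            ll, lh, hl, hh = LL[i][j], LH[i][j], HL[i][j], HH[i][j]
            x[2 * i][2 * j] = (ll + lh + hl + hh) / 2
            x[2 * i][2 * j + 1] = (ll - lh + hl - hh) / 2
            x[2 * i + 1][2 * j] = (ll + lh - hl - hh) / 2
            x[2 * i + 1][2 * j + 1] = (ll - lh - hl + hh) / 2
    return x

def fuse(img1, img2):
    """Average the approximation sub-bands, take the max-abs detail
    coefficients, then reconstruct (the step list in miniature)."""
    d1, d2 = haar2(img1), haar2(img2)
    LL = [[(p + q) / 2 for p, q in zip(r1, r2)]
          for r1, r2 in zip(d1[0], d2[0])]
    details = [[[p if abs(p) >= abs(q) else q for p, q in zip(r1, r2)]
                for r1, r2 in zip(s1, s2)]
               for s1, s2 in zip(d1[1:], d2[1:])]
    return ihaar2(LL, *details)

X = [[52.0, 55.0], [61.0, 59.0]]
print(fuse(X, X))   # fusing an image with itself returns it unchanged
```

The real pipeline replaces the Haar step with a multi-level DWT and follows it with a contourlet decomposition, but the fuse-then-invert structure is the same.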

The limitation of directionality in the wavelet transform is overcome by the contourlet transform.

to the main fusion process in order to produce the best fusion results. The image registration process can also be called image alignment. If the datasets of input images are not aligned to each other, it is impossible to yield the best fusion results, even when the fusion framework, scheme and algorithm are optimal.

After pre-processing, these source images are first decomposed using the Curvelet Transform in order to isolate the different frequency components of the image into different planes, without down-sampling as in the traditional wavelet transform. It deals efficiently with curved shapes; therefore its application in medical fields yields better fusion results than the wavelet transform alone. The curvelet method works well with edges, boundaries and curved portions of images by using the Ridgelet transform. The approximation and detail coefficients obtained after applying the appropriate fusion rule are then reconstructed using the inverse curvelet transform. The entire process carried out in this stage provides significant localization, leading to better preservation of features in the fused image.

Contourlet domain. The significance of such an approach is to overcome the limitation of directionality of the curvelet (in stage-1). The contourlet transform (CT) is applied in order to achieve angular decompositions. After applying sub-band decomposition using CT, a set of coefficients is obtained (for both images).

algorithms are reconstructed using the inverse CT. The schematic diagram of our proposed methodology, hybrid image fusion using the wavelet-contourlet transform, is shown in Figure 4.16.

The Contourlet Transform (CT) works with a two-dimensional multi-scale and directional filter bank (DFB). In addition, CT uses an iterated filter bank, which makes it computationally efficient. The DFB creates a perfect directional basis for discrete signals, which addresses a major drawback of the wavelet transform.

Curvelet transformation. Both algorithms have their own features and limitations. The contourlet offers a high degree of directionality and anisotropy, whereas the curvelet transformation is effective for images with bounded curves, as it provides smoothing of curves. So a hybrid of the contourlet and curvelet transformations leads to better results for the fusion of medical images when compared with the previous methods, i.e. wavelet-curvelet and proposed method-1.

applications, edge detection and image de-noising.

Need for Curvelet: The wavelet transform concentrates on representing the image at multiple scales and is appropriate for representing linear edges. For curved edges, the accuracy of edge localization in the wavelet transform is low. So there is a need for an alternative approach with high accuracy of curve localization, such as the curvelet transform.

4.8.1 Image Fusion by Discrete Curvelet Transform Method:

1) Sub-band filtering

2) Tiling

3) Ridgelet transform

1. Sub-band filtering: The purpose of this step is to decompose the image into additive components, each of which is a sub-band of that image. This step isolates the different frequency components of the image into different planes without down-sampling, as in the traditional wavelet transform.

filter and P1, P2, P3, etc. are the band-pass (high-pass) filters. So the original image f can be reconstructed from the sub-bands by the following equation:

f = P0(f) + Σs Ps(f)
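This additive reconstruction property can be illustrated with a minimal 1-D sketch (box filters stand in for the actual curvelet sub-band filters, and all helper names are hypothetical):

```python
def smooth(f, width):
    """Box (moving-average) low-pass filter with edge clamping."""
    n = len(f)
    out = []
    for i in range(n):
        lo, hi = max(0, i - width), min(n, i + width + 1)
        out.append(sum(f[lo:hi]) / (hi - lo))
    return out

def subband_decompose(f, levels=3):
    """Additive decomposition: f = P0(f) + sum of band-pass planes Ps(f)."""
    bands, current = [], list(f)
    for s in range(levels):
        low = smooth(current, 2 ** s)                        # coarser low-pass
        bands.append([c - l for c, l in zip(current, low)])  # band-pass plane
        current = low
    return current, bands   # (P0(f), [P1(f), P2(f), P3(f)])

signal = [1.0, 4.0, 2.0, 8.0, 5.0, 7.0]
low, bands = subband_decompose(signal)
rec = [l + sum(b[i] for b in bands) for i, l in enumerate(low)]
# rec equals the original signal (up to float rounding): the sum telescopes
```

Because each band-pass plane is the difference between two successive low-pass versions, summing all planes with the final low-pass exactly restores the input.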

2. Tiling: Tiling is the process by which the image is divided into overlapping tiles. Following the sub-band decomposition, each sub-band filtered image is partitioned into blocks of N x N (N blocks in the horizontal direction and N blocks in the vertical direction). These tiles are small in dimension, so that curved lines are transformed into small straight lines in the sub-bands P1 and P2. The tiling improves the ability of the curvelet transform to handle curved edges, as shown in Figure 4.18.
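The overlapping tiling can be sketched as follows (a pure-Python illustration with assumed helper names; real curvelet implementations also apply smooth windows to the tiles):

```python
def tile_image(img, n, overlap):
    """Partition a 2-D image (list of lists) into overlapping n x n tiles.
    Adjacent tiles share `overlap` rows/columns."""
    step = n - overlap
    tiles = []
    h, w = len(img), len(img[0])
    for top in range(0, h - overlap, step):
        for left in range(0, w - overlap, step):
            tiles.append([row[left:left + n]
                          for row in img[top:top + n]])
    return tiles

# 4x4 image, 2x2 tiles with 1-pixel overlap -> 3 x 3 = 9 tiles
img = [[r * 4 + c for c in range(4)] for r in range(4)]
print(len(tile_image(img, 2, 1)))  # 9
```

Within each such tile, a curved edge is locally close to a straight ridge, which is what the subsequent Ridgelet step exploits.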

3. Renormalization: Renormalization is the centring of each dyadic square to the unit square [0, 1] x [0, 1].

4. Ridgelet Analysis: Before the Ridgelet transform we need to perform the Ridgelet tiling. The renormalized ridges have an aspect ratio of width = length². These ridges can then be encoded efficiently using the Ridgelet transform.

This transform is primarily a tool for ridge detection, or shape detection of the objects in an image. The Ridgelet transform divides the frequency domain into dyadic coronae. It samples the s-th corona at least 2^s times in the angular direction, whereas in the radial direction it samples using local wavelets, as shown in Figure 4.19.

2) Divide the FFT into a collection of tiles.

3) For each tile:

a) Translate the tile to the origin.

b) Wrap the parallelogram-shaped support of the tile around a rectangle centred at the origin, as shown in Figure 4.20.

c) Take the inverse FFT of the wrapped tile.

d) Add the curvelet array to the collection of curvelet coefficients.

Merit: Captures curved edges more efficiently than the DWT, as shown in Figure 4.21.

2) Shift variant

smooth contours at different orientations.

The curvelet transform was developed initially in the continuous domain via multi-scale filtering followed by a block Ridgelet transform on each band-pass image. Later, the second-generation curvelet transform was proposed, defined directly via frequency partitioning without using the Ridgelet transform. Both curvelet constructions require a rotation operation and correspond to a 2-D frequency partition based on polar coordinates. This makes the curvelet construction simple in the continuous domain but causes difficulty in implementation for discrete images, which is why we go for the contourlet transform.

Step 2: These images are subjected to pre-processing, which includes RGB to grey-scale conversion; image alignment is also ensured.

Step 3: The images obtained from step-2 are first decomposed using the Curvelet Transform to estimate the coefficients. The curvelet transformation has four stages:

1. Sub-band decomposition

2. Smooth Partitioning

3. Renormalization

4. Ridgelet Analysis

Step 4: We obtain the fused image in the curvelet domain using the following fusion rules.

2) Fuse the detail coefficients of the source images using the maxima method.

Step 6: Apply the contourlet transform to the fused image obtained in step-5.

Step 8: To get the final hybrid fused image in the spatial domain, we apply the inverse contourlet transform.

integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. MATLAB stands for matrix laboratory, and was written originally to provide easy access to matrix software developed by the LINPACK (linear system package) and EISPACK (Eigen system package) projects. MATLAB is therefore built on a foundation of sophisticated matrix software in which the basic element is an array that does not require pre-dimensioning, allowing many technical computing problems, especially those with matrix and vector formulations, to be solved in a fraction of the time.

Very important to most users of MATLAB, toolboxes allow learning and applying specialized technology. These are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, curvelets, contourlets, simulation and many others.

The basic building block of MATLAB is the matrix. The fundamental data type is the array. Vectors, scalars, real matrices and complex matrices are handled as specific classes of this basic data type. The built-in functions are optimized for vector operations. No dimension statements are required for vectors or arrays.

window, Workspace window, Current directory window, Command history window,

Editor Window, Graphics window and Online-help window.

Command Window: The command window is where the user types MATLAB

commands and expressions at the prompt (>>) and where the output of those

commands is displayed. It is opened when the application program is launched. All

commands including user-written programs are typed in this window at MATLAB

prompt for execution.

Work Space Window: MATLAB defines the workspace as the set of variables that

the user creates in a work session. The workspace browser shows these variables and

some information about them. Double clicking on a variable in the workspace

browser launches the Array Editor, which can be used to obtain information.

Current Directory Window: The Current Directory tab shows the contents of the current directory, whose path is shown in the Current Directory window. For example,

in the windows operating system the path might be as follows: C:\MATLAB\Work,

indicating that directory “work” is a subdirectory of the main directory “MATLAB”;

which is installed in drive C. Clicking on the arrow in the current directory window

shows a list of recently used paths.

MATLAB uses a search path to find M-files and other MATLAB related files.

Any file run in MATLAB must reside in the current directory or in a directory that is on the search path.

the commands a user has entered in the command window, including both current and previous sessions. A command or sequence of commands can be selected and re-executed from the command history window by right clicking on it. This is useful for selecting various options in addition to executing the commands, and is a useful feature when experimenting with various commands in a work session.

Editor Window: The MATLAB editor is both a text editor specialized for creating

M-files and a graphical MATLAB debugger. The editor can appear in a window by

itself, or it can be a sub window in the desktop. In this window one can write, edit,

create and save programs in files called M-files.

MATLAB editor window has numerous pull-down menus for tasks such as

saving, viewing, and debugging files. Because it performs some simple checks and

also uses color to differentiate between various elements of code, this text editor is

recommended as the tool of choice for writing and editing M-functions.

Graphics or Figure Window: The output of all graphic commands typed in the

command window is seen in this window.

Online Help Window: MATLAB provides online help for all its built-in functions and programming language constructs. The principal way to get help online is to use the MATLAB help browser, opened as a separate window either by clicking on the question mark symbol (?) on the desktop toolbar, or by typing helpbrowser at the prompt in the command window.

The Help Browser is a web browser integrated into the MATLAB desktop that displays Hypertext Mark-up Language (HTML) documents. The Help Browser consists of two panes: the help navigator pane, used to find information, and the display pane, used to view the information. Self-explanatory tabs other than the navigator pane are used to perform a search.

Clc: This command clears the command window.

Syntax: clc

Uigetfile: Displays a modal dialog box that lists files in the current directory and

enables the user to select or type the name of a file to be opened. If the filename is

valid and if the file exists, uigetfile returns the filename when the user clicks Open.

Syntax: uigetfile(FilterSpec, DialogTitle, DefaultName)

Imread: This command reads the image from the file specified by filename, with the standard file extension indicated by the file type as given below:

Imshow: This command displays an image, which may be a greyscale, RGB (true colour), or binary image. For binary images, imshow displays pixels with the value 0 (zero) as black and 1 as white.

Imresize: This command resizes an image of any type using the specified interpolation method.

Figure: This command creates a new figure window.

Syntax: figure(I)

Im2double: This command converts an image to double precision.

Syntax: im2double(image)

Zeros: This command creates an n x n array of zeros.

Syntax: zeros(n)

Ones: This command creates an n x n array of ones.

Syntax: ones(n)

Size: This command returns the size of each dimension of array x in a vector d of length ndims(x).

Syntax: d = size(x)

Rgb2gray: This command converts an RGB image to a greyscale image.

Syntax: i = rgb2gray(i)

Imfuse: This command creates a composite of the two images A and B.

Syntax: imfuse(A, B)

Contourlet transforms were explained. In the next chapter, simulation results and comparisons of the performance metrics of the various proposed methods with the existing method are presented.

CHAPTER 5

5.1 Simulation Results of Wavelet – Contourlet Transform: In this section we discuss the simulation results of the hybrid Wavelet Transform and Contourlet Transform for various medical images obtained from different modalities.

Example 1: Fusion of CT and MRI: Image Size [256 X 256] having tumour in

brain

In Figure 5.1 we considered CT and MRI images of a brain with a tumour. The first is the input source image, obtained from a CT scan of the brain. CT provides more accurate information about calcium deposits, air, bones, and any blockages. The second input image, which is considered as a reference, is obtained from an MRI scan of the brain. MRI provides information about the nervous system, soft tissues and muscles. We have applied the Discrete Wavelet Transform (DWT) to the images, followed by the contourlet transform.

We fuse these images using the appropriate fusion rules, which we have already discussed in previous chapters. The result obtained from the contourlet transform is in the frequency domain. We need to reconstruct the final hybrid fused image in the spatial domain; for that we apply the inverse wavelet transform followed by the inverse contourlet transform.

Example 2: Fusion of MRI and PET: Image Size [256 x 256] for FTD (Neuron Degeneration)

In Figure 5.2 we considered MRI and PET images of a brain with neuron degeneration, which can be seen in aged people. The first is the input source image, obtained from an MRI scan of the brain. The second input image, which is considered as a reference, is obtained from a PET scan of the brain. PET can be used to provide better information on blood flow and metabolic activity, with low spatial resolution. As a result, the anatomical and functional medical images need to be combined for a comprehensive view.

then followed by the contourlet transform. We fuse these images using the appropriate fusion rules, which we have already discussed in previous chapters. The result obtained from the contourlet transform is in the frequency domain. We need to reconstruct the final hybrid fused image in the spatial domain; for that we apply the inverse wavelet transform followed by the inverse contourlet transform.

Example 3: Fusion of CT and MRI: Image Size [512 x 512] of skull from top view

In Figure 5.3 we considered CT and MRI images of the skull. The first is the input source image, obtained from a CT scan of the skull. The second input image, which is considered as a reference, is obtained from an MRI scan of the skull. We have applied the Discrete Wavelet Transform (DWT) to the images, followed by the contourlet transform.

We fuse these images using the appropriate fusion rules. The result obtained from the contourlet transform is in the frequency domain. We need to reconstruct the final hybrid fused image in the spatial domain; for that we apply the inverse wavelet transform followed by the inverse contourlet transform.

Example 4: Fusion of MRI-T1 and MRI-T2: Image Size [256 x 256] of brain (cholesterol)

In Figure 5.4 we considered MRI-T1 and MRI-T2 images of the brain, which show the cholesterol level. The first is the input source image, obtained from an MRI-T1 scan of the brain. The second input image, which is considered as a reference, is obtained from an MRI-T2 scan of the brain. We have applied the Discrete Wavelet Transform (DWT) to the images, followed by the contourlet transform. We fuse these images using the appropriate fusion rules. The result obtained from the contourlet transform is in the frequency domain. We need to reconstruct the final hybrid fused image in the spatial domain; for that we apply the inverse wavelet transform followed by the inverse contourlet transform.

In Figure 5.5 we considered CT and MRI images of the head. The first is the input source image, obtained from a CT scan of the head. The second input image, which is considered as a reference, is obtained from an MRI scan of the brain. We have applied the Discrete Wavelet Transform (DWT) to the images, followed by the contourlet transform.

We fuse these images using the appropriate fusion rules. The result obtained from the contourlet transform is in the frequency domain. We need to reconstruct the final hybrid fused image in the spatial domain; for that we apply the inverse wavelet transform followed by the inverse contourlet transform.

5.2 Simulation Results of Curvelet – Contourlet Transform: In this section we discuss the simulation results of the hybrid Curvelet Transform and Contourlet Transform for various medical images obtained from different modalities.

Example 1: Fusion of CT and MRI: Image Size [256 x 256] having tumour in brain

In Figure 5.6 we considered CT and MRI images of a brain with a tumour. The first is the input source image, obtained from a CT scan of the brain. CT provides more accurate information about calcium deposits, air, bones, and any blockages. The second input image, which is considered as a reference, is obtained from an MRI scan of the brain. MRI provides information about the nervous system, soft tissues and muscles.

contourlet transform. We fuse these images using the appropriate fusion rules, which we have already discussed in previous chapters. The result obtained from the contourlet transform is in the frequency domain. We need to reconstruct the final hybrid fused image in the spatial domain; for that we apply the inverse contourlet transform. Here there is no need for the inverse curvelet transform, as it is performed internally.

Example 2: Fusion of MRI and PET: Image Size [256 x 256] for FTD (Neuron Degeneration)

In Figure 5.7 we considered MRI and PET images of a brain with neuron degeneration, which can be seen in aged people. The first is the input source image, obtained from an MRI scan of the brain. The second input image, which is considered as a reference, is obtained from a PET scan of the brain. PET can be used to provide better information on blood flow and metabolic activity, with low spatial resolution. As a result, the anatomical and functional medical images need to be combined for a comprehensive view.

contourlet transform. We fuse these images using the appropriate fusion rules, which we have already discussed in previous chapters. The result obtained from the contourlet transform is in the frequency domain. We need to reconstruct the final hybrid fused image in the spatial domain; for that we apply the inverse contourlet transform.

In Figure 5.8 we considered CT and MRI images of the skull. The first is the input source image, obtained from a CT scan of the skull. The second input image, which is considered as a reference, is obtained from an MRI scan of the skull. We have applied the curvelet transform to the images, followed by the contourlet transform.

We fuse these images using the appropriate fusion rules. The result obtained from the contourlet transform is in the frequency domain. We need to reconstruct the final hybrid fused image in the spatial domain; for that we apply the inverse contourlet transform.

Example 4: Fusion of MRI-T1 and MRI-T2: Image Size [256 x 256] of brain (cholesterol)

In Figure 5.9 we considered two images obtained from MRI scans: one is MRI_T1 and the second is MRI_T2. MRI_T1-weighted imaging is used to differentiate anatomical structures mainly on the basis of T1 values.

Tissues with high fat content (e.g. white matter) appear bright and compartments filled with water (e.g. CSF) appear dark. This is good for demonstrating anatomy. MRI_T2 is the reverse. We have applied the curvelet transform to the images, followed by the contourlet transform. We fuse these images using the appropriate fusion rules. The result obtained from the contourlet transform is in the frequency domain. We need to reconstruct the final hybrid fused image in the spatial domain; for that we apply the inverse contourlet transform.

In Figure 5.10 we considered CT and MRI images of the head. The first is the input source image, obtained from a CT scan of the head. The second input image, which is considered as a reference, is obtained from an MRI scan of the brain.

contourlet transform. We fuse these images using the appropriate fusion rules. The result obtained from the contourlet transform is in the frequency domain. We need to reconstruct the final hybrid fused image in the spatial domain; for that we apply the inverse contourlet transform.

In this section we check the effectiveness of the proposed schemes, that is, the hybrid DWT-Contourlet and Curvelet-Contourlet transformations. Various parameters such as Entropy, PSNR, and MSE are used to evaluate the effectiveness and to compare the performance metrics of the proposed methods with the existing one. We assume the source images to be in perfect registration. Here we consider different source images such as CT, MRI, and PET of brain tumour, skull, MRI_T1, MRI_T2, and Alzheimer's disease (which is widely seen in aged people).

(a) CT-Skull (b) MRI-Skull (c) DWT-Curvelet (d) DWT-Contourlet (e) Curvelet-Contourlet

(f) CT-Tumour (g) MRI-Tumour (h) DWT-Curvelet (i) DWT-Contourlet (j) Curvelet-Contourlet

(p) MRI_T1 (q) MRI_T2 (r) DWT-Curvelet (s) DWT-Contourlet (t) Curvelet-Contourlet

The images shown in Figure 5.11 give a comparison of the results obtained from the different techniques. Figures (a) and (b) give the bone and tissue information of the skull; by fusing these images we obtained the complete information using the appropriate hybrid technique. Figures (c), (d) and (e) represent the existing and proposed methods. Similarly, Figures (f) and (g) give the bone and tissue information of a brain having a tumour; by fusing these images we obtained the exact location of the tumour using the appropriate hybrid technique. Figures (h), (i) and (j) represent the existing and proposed methods. Similarly, Figure (k) gives the soft-tissue information and Figure (l) the PET information on the blood flow and metabolic activity of the brain; by fusing these images we obtained a comprehensive view using the appropriate hybrid technique. Figures (m), (n) and (o) represent the existing and proposed methods.

Similarly, Figures (p) and (q) show MRI_T1- and MRI_T2-weighted imaging, which is used to differentiate anatomical structures mainly on the basis of T1 values. Tissues with high fat content (e.g. white matter) appear bright and compartments filled with water (e.g. CSF) appear dark, which is good for demonstrating anatomy; MRI_T2 is the reverse. By fusing these images we obtained the complete information using the appropriate hybrid technique. Figures (r), (s) and (t) represent the existing and proposed methods. Figures (u) and (v) give the bone and tissue information of the head; by fusing these images we obtained the complete information using the appropriate hybrid technique. Figures (w), (x) and (y) represent the existing and proposed methods. From these results we can observe that the output of the Curvelet-Contourlet method is the most appropriate one for better diagnosis, as its features are more visible than those of the remaining methods.

The drawback of the existing methods such as wavelet and curvelet is that they do not provide directionality; this is achieved by our proposed methods. Even though we obtained better results with proposed method-1 (Wavelet-Contourlet), it provides directionality but does not capture curved information efficiently. This drawback is overcome by the curvelet-contourlet transform, which provides both directionality and efficient capture of curved information. Hence the curvelet-contourlet method has emerged as the better method, producing good results; the numerical calculation of the performance metrics is explained below.

Evaluation of image fusion in the transform domain can be done by the following metrics.

A. Entropy: A higher entropy of the fused image indicates the presence of more information and an improvement in the fused image. If L indicates the total number of grey levels and p = {p0, p1, ..., pL-1} is the probability distribution of the levels, entropy is defined as

E = − Σ_{i=0}^{L−1} P_i log(P_i)        (5.1)

B. Mean Square Error (MSE): MSE is the cumulative squared difference between the fused and reference images divided by the number of elements in the image. If i and j are the pixel row and column indices, and M and N are the number of rows and columns, MSE is defined by

MSE = (1 / (M N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [X_ij − Y_ij]²        (5.2)

C. Peak Signal to Noise Ratio (PSNR): PSNR is the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. For B-bit images,

PSNR = 20 log10((2^B − 1) / √MSE)        (5.3)
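These three metrics can be sketched in pure Python for 8-bit grey-scale images stored as nested lists (the helper names are ours, and this version of entropy uses log base 2):

```python
import math

def mse(x, y):
    """Mean square error between two equal-size images (eq. 5.2)."""
    m, n = len(x), len(x[0])
    return sum((x[i][j] - y[i][j]) ** 2
               for i in range(m) for j in range(n)) / (m * n)

def psnr(x, y, bits=8):
    """Peak signal-to-noise ratio in dB (eq. 5.3)."""
    return 20 * math.log10((2 ** bits - 1) / math.sqrt(mse(x, y)))

def entropy(img, levels=256):
    """Shannon entropy of the grey-level histogram (eq. 5.1), in bits."""
    hist = [0] * levels
    total = 0
    for row in img:
        for v in row:
            hist[v] += 1
            total += 1
    return -sum((c / total) * math.log2(c / total) for c in hist if c)

ref = [[0, 255], [255, 0]]
fused = [[0, 250], [255, 5]]
print(round(mse(ref, fused), 2), round(psnr(ref, fused), 2))  # 12.5 37.16
print(entropy(ref))   # two equally likely grey levels -> 1.0 bit
```

Note that psnr diverges when the two images are identical (MSE = 0), so it is only computed between a fused image and a distinct reference.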

The following tables and bar charts compare the performance metrics of the proposed methods with the existing method:

Table 5.1: Performance metrics for fusion of CT and MRI images of the skull

Figure 5.12: Comparison of PSNR, MSE and Entropy in terms of bar charts for CT

and MRI of skull

Table 5.2: Performance metrics for fusion of CT and MRI images of a brain having a tumour

Figure 5.13: Comparison of PSNR, MSE and Entropy in terms of bar charts for CT and MRI of a brain having a tumour

Table 5.3: Performance metrics for fusion of MRI and PET images for FTD (Neuron Degeneration)

Figure 5.14: Comparison of PSNR, MSE and Entropy in terms of bar charts for MRI and PET for FTD (Neuron Degeneration)

Table 5.4: Performance metrics for fusion of CT and MRI images of the head

Figure 5.15: Comparison of PSNR, MSE and Entropy in terms of bar charts for CT

and MRI of Head

Table 5.5: Performance metrics for fusion of MRI_T1 and MRI_T2 images [256 x 256] of the brain (cholesterol)

Figure 5.16: Comparison of PSNR, MSE and Entropy in terms of bar charts for MRI_T1 and MRI_T2 of the brain (cholesterol)

These bar charts and tabulated values give the experimental results for the different images discussed previously. From them we can observe that the curvelet-contourlet transform gives better results: its PSNR and Entropy values are higher than those of the other transformation techniques, and its MSE is lower, satisfying the conditions for the better image quality obtained after fusion.

CHAPTER 6

CONCLUSION

In this project, a hybrid technique for image fusion using combinations of the wavelet, curvelet and contourlet transforms has been simulated. The simulated results for the different hybrid combinations of the above-mentioned transforms were tested and compared for various medical image combinations, such as CT, MRI and PET, and also for various input image sizes. In all cases, the curvelet-contourlet based hybrid technique is observed to outperform the others, providing a better quality fused image than the other two combinations in terms of PSNR, MSE and Entropy. Hence the curvelet-contourlet based hybrid technique suits medical diagnosis best.

Future scope

In this project we applied fusion to noise-free images; this can be extended so that we can fuse and extract the features of noisy images. Also, we performed fusion of only two images, which can be further extended to fusing multiple images.

REFERENCES

[1] Jyoti Agarwal, Sarabjeet Singh Bedi, "Implementation of hybrid image fusion technique for feature enhancement in medical diagnosis", Human-centric Computing and Information Sciences, Springer, vol. 5, no. 3, pp. 1-17, 2015.

[2] S. Yang, M. Wang, L. Jiao, R. Wu, and Z. Wang, "Image fusion based on a new contourlet packet", Information Fusion, vol. 11, no. 2, pp. 78-84, 2010.

[3] J. Nunez, X. Otazu, O. Fors, A. Prades, V. Pala and R. Arbiol, "Multiresolution-based image fusion with additive wavelet decomposition", IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 3, pp. 1204-1211, 1999.

[4] Sweta Mehta, Bijith Mara, "CT and MRI image fusion using curvelet transform", ISSN: 0975-6779, vol. 02, no. 02, pp. 848-852, Nov 2012 - Oct 2013.

[5] Navneet Kaur, Madhu Bahl, Harsimran Kaur, "Review On: Image Fusion Using Wavelet and Curvelet Transform", IJCSIT, vol. 5, no. 2, pp. 2467-2470, 2014.

[6] Deron Ray Rodrigues, "Curvelet Based Image Fusion Techniques for Medical Images", IJRASET, vol. 3, no. 3, pp. 902-908, March 2015.

[7] R. J. Sapkal, S. M. Kulkarni, "Image Fusion based on Wavelet Transform for Medical Application", IJERA, vol. 2, no. 5, pp. 624-627, September-October 2012.

[8] S. Ibrahim and M. Wirth, "Visible and IR Data Fusion Technique Using the Contourlet Transform", International Conference on Computational Science and Engineering (CSE '09), IEEE, vol. 2, pp. 42-47, 2009.

[9] M. N. Do and M. Vetterli, "The contourlet transform: An efficient directional multiresolution image representation", IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091-2106, 2005.

[10] Miao Qiguang, Wang Baoshu, "A Novel Image Fusion Method Using Contourlet Transform", IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 6, pp. 1391-1402, June 2005.

[11] Huang, Junfeng Gao, Zhongsheng Qian, "Multi-focus Image Fusion Using an Effective Discrete Wavelet Transform Based Algorithm", Measurement Science Review, vol. 14, no. 2, pp. 102-108, 2014.
