NATIONAL INSTITUTE OF TECHNOLOGY, AGARTALA
Brain Tumor Detection
from MRI
(USING WAVELET TRANSFORM)
GROUP MEMBERS:
ABHISHEK MAURYA (15UEC052)
DIPANKAR DAS (15UEC033)
RESHMI PODDER (15UEC029)
SUBHANKAR DHAR (15UEC023)
LONA CHAKMA (15UEC040)
DEDICATION
To our family and friends for providing all sorts of support to us over
the years. To our respected Professor and teaching assistants for
inspiring, encouraging and giving the strength of knowledge to excel in
our life.
REPORT APPROVAL SHEET
The project report entitled “BRAIN TUMOR DETECTION FROM
MRI IMAGES USING MATLAB” by Abhishek Maurya (15UEC052),
Reshmi Podder (15UEC029), Lona Chakma (15UEC040), Dipankar Das
(15UEC034), and Subhankar Dhar (15UEC023) is approved for the degree
of Bachelor of Technology in Electronics and Communication
Engineering.
ABBREVIATION
CERTIFICATE
It is certified that the project work entitled ‘BRAIN TUMOR
DETECTION FROM MRI IMAGES USING MATLAB’ is carried out
by Abhishek Maurya (15UEC052), Reshmi Podder (15UEC029), Lona
Chakma (15UEC040), Dipankar Das (15UEC034), and Subhankar Dhar
(15UEC023) in partial fulfillment of the requirements for the award of the
degree of Bachelor of Technology in Electronics and Communication
Engineering from the National Institute of Technology Agartala during the
year 2015-19. The project work has been carried out under our supervision,
and this work has not been submitted anywhere else for the award of any degree.
Signature of Supervisor
NIT Agartala
April, 2019
DECLARATION
We declare that this written submission represents our ideas in our own
words, and that where other ideas or words have been included, we have
adequately cited and referenced the original sources. We also declare
that we have adhered to all the principles of academic honesty and
integrity and have not misrepresented, fabricated, or falsified any idea,
data, or source in our submission. We understand that any violation
of the above will be cause for disciplinary action by the institute, and
can also invoke penal action from the sources which have not been
properly cited or from whom proper permission has not been taken
when needed.
Abhishek Maurya
Dipankar Das
Subhankar Dhar
Reshmi Podder
Lona Chakma
ACKNOWLEDGEMENT
Standing at this point of time, at the end of our final year, we have
realized the contributions of our respected faculty members in training
us to become successful engineers. As per the requirements of the B.Tech.
programme, we carried out our final-year project under Mr. Kamalesh
Debnath, Assistant Professor, NIT Agartala, who continuously mentored
us throughout the project.
Through the myriad ways in which each group member, together with
our supervisor, contributed to the work, we were ultimately successful
in completing the project. We heartily thank him for his immense
effort and guidance. At the same time, we thank the college authorities
for making all the necessary equipment available to us for the successful
completion of the project. Our special thanks go to the Head of the
Department of Electronics and Communication Engineering, Dr. Atanu
Choudhury.
Lona Chakma
Dipankar Das
Subhankar Dhar
ABSTRACT
TABLE OF CONTENTS
CERTIFICATE
ACKNOWLEDGEMENT
ABSTRACT
INTRODUCTION
CHAPTER 1 - PREPROCESSING
1.1 FEATURE EXTRACTION
1.2 DISCRETE WAVELET TRANSFORM
1.3 2D DWT
1.4 FEATURE REDUCTION
CHAPTER 2 - KERNEL SVM
2.1 MOTIVATION
2.2 PRINCIPLES OF LINEAR SVMs
2.3 KERNEL SVMs
CHAPTER 3 - K-FOLD STRATIFIED CROSS VALIDATION
CHAPTER 4 - EXPERIMENT
4.1 DATABASE
4.2 FEATURE EXTRACTION
4.3 FEATURE REDUCTION
4.4 CLASSIFICATION ACCURACY
4.5 TIME ANALYSIS
CHAPTER 5 - CONCLUSIONS AND DISCUSSIONS
REFERENCES
FIGURES
1. Methodology of our proposed algorithm.
INTRODUCTION
Magnetic resonance imaging (MRI) is an imaging technique that
produces high-quality images of the anatomical structures of the human
body, especially the brain, and provides rich information for clinical
diagnosis and biomedical research [1-5]. The diagnostic value of MRI
is greatly magnified by the automated and accurate classification of
MRI images [6-8]. The wavelet transform is an effective tool for feature
extraction from MR brain images, because its multi-resolution analytic
property allows images to be analysed at various levels of resolution.
However, this technique requires large storage and is computationally
expensive [9]. In order to reduce the feature vector dimensions and
increase the discriminative power, principal component analysis (PCA)
was used [10]. PCA is appealing since it effectively reduces the
dimensionality of the data and therefore reduces the computational cost
of analysing new data [11]. Then the problem arises of how to classify
the input data.
In recent years, researchers have proposed many approaches for
this goal, which fall into two categories. One category is supervised
classification, including the support vector machine (SVM) [12] and
k-nearest neighbours (k-NN) [13]. The other category is unsupervised
classification [14], including the self-organizing feature map (SOFM)
[12] and fuzzy c-means [15]. While all these methods achieved good
results, the supervised classifiers perform better than unsupervised
classifiers in terms of classification accuracy (successful classification
rate). However, the classification accuracies of most existing methods
were lower than 95%, so the goal of this project is to find a more
accurate method. Among supervised classification methods, SVMs are
state-of-the-art classifiers based on machine learning theory [16-18].
Compared with other methods such as artificial neural networks,
decision trees, and Bayesian networks, SVMs have the significant
advantages of high accuracy, elegant mathematical tractability, and
direct geometric interpretation. Besides, they do not need a large number
of training samples to avoid overfitting [19]. Original SVMs are linear
classifiers. In this project, we introduce kernel SVMs (KSVMs), which
extend the original linear SVMs to nonlinear SVM classifiers by applying
a kernel function to replace the dot product in the original SVMs [20].
KSVMs allow us to fit the maximum-margin hyperplane in a transformed
feature space. The transformation may be nonlinear and the transformed
space high-dimensional; thus, although the classifier is a hyperplane in
the high-dimensional feature space, it may be nonlinear in the original
input space.
The structure of the rest of this report is organized as follows.
Chapter 1 gives the detailed procedures of pre-processing, including the
discrete wavelet transform (DWT) and principal component analysis
(PCA). Chapter 2 first introduces the motivation and principles of the
linear SVM, and then turns to the kernel SVM. Chapter 3 introduces
K-fold cross validation, which protects the classifier from overfitting.
The experiments in Chapter 4 use a total of 160 images as the dataset,
showing the results of feature extraction and reduction. Afterwards, we
compare our method, with different kernels, to methods reported in the
last decade.
CHAPTER 1 - PREPROCESSING
In total, our method consists of three stages:
Step 1. Pre-processing (including feature extraction and feature
reduction);
Step 2. Training the kernel SVM;
Step 3. Submitting new MRI brain images to the trained kernel SVM
and outputting the prediction.
As shown in Fig. 1, this flowchart is a canonical and standard
classification pipeline that has proven highly effective [22]. We explain
the detailed procedures of the pre-processing in the following subsections.
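The three steps above can be sketched as a toy pipeline in Python (purely illustrative stand-ins: the stage bodies here are hypothetical placeholders, while the report's actual implementation uses DWT, PCA and a kernel SVM in MATLAB):

```python
# Skeleton of the three-stage pipeline: preprocess -> train -> predict.
# The bodies are toy stand-ins for DWT+PCA (stage 1) and the KSVM (stages 2-3).
def preprocess(image):
    # Feature extraction + reduction would go here (DWT, then PCA);
    # here we just reduce the image to its mean intensity.
    return [float(sum(image)) / len(image)]

def train(features, labels):
    # Stand-in "classifier": threshold at the midpoint of the class means.
    pos = [f[0] for f, l in zip(features, labels) if l == 1]
    neg = [f[0] for f, l in zip(features, labels) if l == -1]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2.0

def predict(model, image):
    # Stage 3: submit a new image and output the prediction.
    return 1 if preprocess(image)[0] >= model else -1

imgs = [[9, 9, 9], [8, 8, 8], [1, 1, 1], [2, 2, 2]]
labels = [1, 1, -1, -1]
model = train([preprocess(i) for i in imgs], labels)
```

The point of the sketch is only the data flow: features are extracted once, the classifier is trained on them, and new inputs pass through the same preprocessing before prediction.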
1.1 FEATURE EXTRACTION
1.2 DISCRETE WAVELET TRANSFORM
Here, the wavelet ψa,b(t) is calculated from the mother wavelet ψ(t)
by translation and dilation: a is the dilation factor and b the translation
parameter (both real positive numbers). Several different kinds of
wavelets have gained popularity throughout the development of wavelet
analysis. The most important is the Haar wavelet, which is the simplest
one and is often the preferred wavelet in many applications [25-27].
Equation (1) can be discretized by restraining a and b to a discrete
lattice (a = 2^b and a > 0) to give the DWT, which can be expressed
as follows.
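As a concrete illustration of the Haar case, one level of the 1-D DWT can be sketched in Python (illustrative only; the report's own implementation is the MATLAB code later in this document):

```python
# One level of the 1-D discrete Haar wavelet transform.
# Input length must be even; the 1/sqrt(2) factor keeps the transform orthonormal.
import math

def haar_dwt_1d(signal):
    s = 1.0 / math.sqrt(2.0)
    approx = [(signal[2*i] + signal[2*i + 1]) * s for i in range(len(signal) // 2)]
    detail = [(signal[2*i] - signal[2*i + 1]) * s for i in range(len(signal) // 2)]
    return approx, detail

def haar_idwt_1d(approx, detail):
    # Inverse transform: perfect reconstruction from approximation + detail.
    s = 1.0 / math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) * s)
        out.append((a - d) * s)
    return out

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
cA, cD = haar_dwt_1d(x)
```

The approximation coefficients cA are scaled pairwise averages (a smoothed, half-length signal) and the detail coefficients cD are scaled pairwise differences; applying the transform again to cA gives the next decomposition level.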
1.3 2D DWT
In the case of 2D images, the DWT is applied to each dimension
separately. Fig. 4 illustrates the schematic diagram of the 2D DWT. As a
result, there are four sub-band images (LL, LH, HL, and HH) at each
scale. The LL sub-band is used for the next level of 2D DWT.
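The row-then-column application can be sketched as follows (Haar for simplicity; the report's MATLAB code uses the 'db4' wavelet, and sub-band naming conventions vary between implementations):

```python
# One level of 2-D DWT: filter rows, then columns, giving the
# LL (approximation) and LH/HL/HH (detail) sub-bands.
import math

def haar_pair(seq):
    s = 1.0 / math.sqrt(2.0)
    low = [(seq[2*i] + seq[2*i + 1]) * s for i in range(len(seq) // 2)]
    high = [(seq[2*i] - seq[2*i + 1]) * s for i in range(len(seq) // 2)]
    return low, high

def dwt2_haar(img):
    # Rows first: each row splits into low- and high-pass halves.
    rows_L, rows_H = zip(*(haar_pair(r) for r in img))
    # Columns next, applied to each half independently.
    def cols(block):
        t = list(zip(*block))                      # transpose: columns as rows
        lo, hi = zip(*(haar_pair(c) for c in t))
        return [list(r) for r in zip(*lo)], [list(r) for r in zip(*hi)]
    LL, LH = cols(rows_L)
    HL, HH = cols(rows_H)
    return LL, LH, HL, HH

img = [[1.0, 1.0, 2.0, 2.0],
       [1.0, 1.0, 2.0, 2.0],
       [3.0, 3.0, 4.0, 4.0],
       [3.0, 3.0, 4.0, 4.0]]
LL, LH, HL, HH = dwt2_haar(img)
# On this piecewise-constant image all detail sub-bands vanish.
```

Each sub-band is half the size of the input in both dimensions, which is why repeated decomposition shrinks the feature count so quickly.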
1.4 FEATURE REDUCTION
Excessive features increase computation time and storage memory.
Furthermore, they sometimes make classification more complicated,
which is called the curse of dimensionality. It is therefore necessary to
reduce the number of features.
PCA is an efficient tool to reduce the dimension of a data set
consisting of a large number of interrelated variables while retaining
most of the variation. This is achieved by transforming the data set to a
new set of ordered variables according to their variance or importance.
The technique has three effects: it orthogonalizes the components of
the input vectors so that they are uncorrelated with each other, it orders
the resulting orthogonal components so that those with the largest
variation come first, and it eliminates the components contributing
least to the variation in the data set.
It should be noted that the input vectors should be normalized to have
zero mean and unit variance before performing PCA. This normalization
is a standard procedure. Details about PCA can be found in Ref. [10].
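The normalization step mentioned above can be sketched in a few lines of Python (illustrative; it assumes no feature is constant, since a constant column has zero standard deviation):

```python
# Standardize each feature to zero mean and unit variance before PCA.
import math

def standardize(vectors):
    n = len(vectors)
    dim = len(vectors[0])
    means = [sum(v[j] for v in vectors) / n for j in range(dim)]
    # Population standard deviation per feature (assumes no constant feature).
    stds = [math.sqrt(sum((v[j] - means[j]) ** 2 for v in vectors) / n)
            for j in range(dim)]
    return [[(v[j] - means[j]) / stds[j] for j in range(dim)] for v in vectors]

data = [[2.0, 10.0], [4.0, 20.0], [6.0, 30.0]]
z = standardize(data)
# Each column of z now has mean 0 and (population) variance 1.
```

Without this step, features with large numeric ranges would dominate the covariance matrix and hence the principal components.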
CHAPTER 2: KERNEL SVM
2.1 MOTIVATION
Suppose some given data points each belong to one of two classes, and
the goal is to classify which class a new data point will be located in. Here a
data point is viewed as a p-dimensional vector, and our task is to create a
(p − 1)-dimensional hyperplane. There are many possible hyperplanes that
might classify the data successfully. One reasonable choice for the best
hyperplane is the one that represents the largest separation, or margin,
between the two classes, since we can expect better behaviour in response
to data unseen during training, i.e., better generalization performance.
Therefore, we choose the hyperplane so that the distance from it to the
nearest data point on each side is maximized [32]. Fig. 5 shows the
geometric interpretation of linear SVMs: here H1, H2 and H3 are three
hyperplanes which can classify the two classes successfully; however, H2
and H3 do not have the largest margin, so they will not perform well on
new test data. H1 has the maximum margin to the support vectors (S11,
S12, S13, S21, S22, and S23), so it is chosen as the best classification
hyperplane [33].
2.2 PRINCIPLES OF LINEAR SVMs
where · denotes the dot product and w the normal vector to the
hyperplane. We want to choose w and b so that the margin between the
two parallel hyperplanes (as shown in Fig. 6) is as large as possible
while still separating the data. So we define the two parallel hyperplanes
by the equations
w · x − b = ±1
(5)
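From these two hyperplanes the margin works out to 2/||w||, which a quick numeric sketch can confirm (illustrative, not part of the report's MATLAB code):

```python
# For the canonical SVM hyperplanes w.x - b = +1 and w.x - b = -1,
# the distance between them (the margin) is 2 / ||w||.
import math

def margin(w):
    return 2.0 / math.sqrt(sum(wi * wi for wi in w))

def side(w, b, x):
    # +1 / -1 depending on which side of the decision plane x lies.
    s = sum(wi * xi for wi, xi in zip(w, x)) - b
    return 1 if s >= 0 else -1

w, b = [3.0, 4.0], 0.0          # ||w|| = 5, so the margin is 2/5 = 0.4
assert abs(margin(w) - 0.4) < 1e-12
assert side(w, b, [1.0, 1.0]) == 1 and side(w, b, [-1.0, -1.0]) == -1
```

Maximizing the margin is therefore equivalent to minimizing ||w|| subject to every training point landing on the correct side of its bounding hyperplane.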
2.3 KERNEL SVMs
Traditional SVMs construct a hyperplane to classify data, so they
cannot deal with classification problems in which the different classes of
data are located on different sides of a hypersurface; for such problems
the kernel strategy is applied to SVMs [37]. The resulting algorithm is
formally similar, except that every dot product is replaced by a nonlinear
kernel function. The kernel is related to the transform φ(x_i) by the
equation k(x_i, x_j) = φ(x_i) · φ(x_j). The value w is also in the
transformed space, with w = Σ_i α_i y_i φ(x_i). Dot products with w
for classification can be computed by w · φ(x) = Σ_i α_i y_i k(x_i, x).
From another point of view, KSVMs allow us to fit the maximum-margin
hyperplane in a transformed feature space. The transformation may be
nonlinear and the transformed space higher dimensional; thus, though the
classifier is a hyperplane in the higher-dimensional feature space, it
may be nonlinear in the original input space. Three common kernels
[38] are listed in Table 1. For each kernel, there is at least one
adjustable parameter to make the kernel flexible and tailor it to
practical data.
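The common kernels referred to here are typically the linear, polynomial and radial-basis (RBF) forms; a minimal sketch, with parameter values chosen only for illustration (the exact entries of Table 1 are not reproduced):

```python
# Common SVM kernels: each one replaces the dot product <xi, xj>.
import math

def linear(x, y):
    return sum(a * b for a, b in zip(x, y))

def polynomial(x, y, degree=2, c=1.0):
    # (x.y + c)^d ; the degree d is the adjustable parameter.
    return (linear(x, y) + c) ** degree

def rbf(x, y, sigma=1.0):
    # exp(-||x - y||^2 / (2 sigma^2)); sigma is the scaling factor.
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq / (2.0 * sigma ** 2))

x, y = [1.0, 0.0], [0.0, 1.0]
assert linear(x, y) == 0.0
assert polynomial(x, y) == 1.0         # (0 + 1)^2
assert abs(rbf(x, x) - 1.0) < 1e-12    # identical points map to similarity 1
```

Each function plays the role of k(x_i, x_j) above: training and classification only ever evaluate the kernel, never the (possibly infinite-dimensional) transform φ itself.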
CHAPTER 3: K-FOLD STRATIFIED CROSS
VALIDATION
Since the classifier is trained on a given data set, it may achieve high
classification accuracy only for this training set, and not for other
independent data sets. To avoid this overfitting, we need to integrate
cross validation into our method. Cross validation will not increase
the final classification accuracy, but it makes the classifier reliable
and generalizable to other independent data sets.
The K folds can be partitioned purely at random; however, some
folds may then have quite different class distributions from other folds.
Therefore, stratified K-fold cross validation was employed, in which
every fold has nearly the same class distribution [39]. Another challenge
is to determine the number of folds. If K is set too large, the bias of the
true error-rate estimator will be small, but its variance will be large and
the computation will be time-consuming. Alternatively, if K is set too
small, the computation time will decrease and the variance of the
estimator will be small, but the bias of the estimator will be large [40].
In this study, we empirically determined K as 5 through trial and error:
we let K vary from 3 to 10, trained the SVM for each value, and finally
selected the K value corresponding to the highest classification accuracy.
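A minimal sketch of the stratified partitioning idea (pure Python, illustrative; real toolboxes also shuffle within each class before splitting):

```python
# Stratified K-fold: distribute the indices of each class round-robin,
# so every fold keeps roughly the class proportions of the full dataset.
def stratified_kfold(labels, k):
    folds = [[] for _ in range(k)]
    by_class = {}
    for idx, lab in enumerate(labels):
        by_class.setdefault(lab, []).append(idx)
    for members in by_class.values():
        for pos, idx in enumerate(members):
            folds[pos % k].append(idx)
    return folds

labels = ['MALIGNANT'] * 10 + ['BENIGN'] * 5
folds = stratified_kfold(labels, 5)
# Every fold gets 2 malignant and 1 benign sample: 2/1 mirrors the 10/5 ratio.
```

Training then uses K − 1 folds and validates on the held-out fold, rotating K times, so every sample is validated exactly once.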
CHAPTER 4: EXPERIMENT
The experiments were carried out on a DELL INSPIRON 15 with a
1.6 GHz processor and 2 GB RAM, running the Windows 10 operating
system. The algorithm was developed in-house using the wavelet toolbox
and the biostatistics toolbox of MATLAB 2014b (The MathWorks). We
downloaded an open SVM toolbox, extended it to the kernel SVM, and
applied it to MR brain image classification.
function varargout = BrainMRI_GUI(varargin)
% BRAINMRI_GUI MATLAB code for BrainMRI_GUI.fig
%      BRAINMRI_GUI, by itself, creates a new BRAINMRI_GUI or raises the
%      existing singleton*.
%
%      H = BRAINMRI_GUI returns the handle to a new BRAINMRI_GUI or the
%      handle to the existing singleton*.
%
%      BRAINMRI_GUI('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in BRAINMRI_GUI.M with the given input
%      arguments.
%
%      BRAINMRI_GUI('Property','Value',...) creates a new BRAINMRI_GUI or
%      raises the existing singleton*. Starting from the left, property value
%      pairs are applied to the GUI before BrainMRI_GUI_OpeningFcn gets
%      called. An unrecognized property name or invalid value makes property
%      application stop. All inputs are passed to BrainMRI_GUI_OpeningFcn
%      via varargin.
%
%      *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES
if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT
% --- Outputs from this function are returned to the command line.
function varargout = BrainMRI_GUI_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
axes(handles.axes1)
imshow(P);title('Brain MRI Image');
% helpdlg(' Multispectral Image is Selected ');
% set(handles.edit1,'string',Filename);
% set(handles.edit2,'string',Pathname);
handles.ImgData = P;
% handles.FileName = FileName;
guidata(hObject,handles);
end
I = handles.ImgData;
gray = rgb2gray(I);
% Otsu Binarization for segmentation
level = graythresh(I);
%gray = gray>80;
img = im2bw(I,.6);
img = bwareaopen(img,80);
img2 = im2bw(I);
% Try morphological operations
%gray = rgb2gray(I);
%tumor = imopen(gray,strel('line',15,0));
axes(handles.axes2)
imshow(img);title('Segmented Image');
%imshow(tumor);title('Segmented Image');
handles.ImgData2 = img2;
guidata(hObject,handles);
signal1 = img2(:,:);
%Feat = getmswpfeat(signal,winsize,wininc,J,'matlab');
%Features = getmswpfeat(signal,winsize,wininc,J,'matlab');
[cA1,cH1,cV1,cD1] = dwt2(signal1,'db4');
[cA2,cH2,cV2,cD2] = dwt2(cA1,'db4');
[cA3,cH3,cV3,cD3] = dwt2(cA2,'db4');
DWT_feat = [cA3,cH3,cV3,cD3];
G = pca(DWT_feat);
whos DWT_feat
whos G
g = graycomatrix(G);
stats = graycoprops(g,'Contrast Correlation Energy Homogeneity');
Contrast = stats.Contrast;
Correlation = stats.Correlation;
Energy = stats.Energy;
Homogeneity = stats.Homogeneity;
Mean = mean2(G);
Standard_Deviation = std2(G);
Entropy = entropy(G);
RMS = mean2(rms(G));
%Skewness = skewness(img)
Variance = mean2(var(double(G)));
a = sum(double(G(:)));
Smoothness = 1-(1/(1+a));
Kurtosis = kurtosis(double(G(:)));
Skewness = skewness(double(G(:)));
% Inverse Difference Movement
m = size(G,1);
n = size(G,2);
in_diff = 0;
for i = 1:m
for j = 1:n
temp = G(i,j)./(1+(i-j).^2);
in_diff = in_diff+temp;
end
end
IDM = double(in_diff);
load Trainset.mat
xdata = meas;
group = label;
svmStruct1 = svmtrain(xdata,group,'kernel_function', 'linear');
species = svmclassify(svmStruct1,feat,'showplot',false);
if strcmpi(species,'MALIGNANT')
helpdlg(' Malignant Tumor ');
disp(' Malignant Tumor ');
else
helpdlg(' Benign Tumor ');
disp(' Benign Tumor ');
end
set(handles.edit4,'string',species);
% Put the features in GUI
set(handles.edit5,'string',Mean);
set(handles.edit6,'string',Standard_Deviation);
set(handles.edit7,'string',Entropy);
set(handles.edit8,'string',RMS);
set(handles.edit9,'string',Variance);
set(handles.edit10,'string',Smoothness);
set(handles.edit11,'string',Kurtosis);
set(handles.edit12,'string',Skewness);
set(handles.edit13,'string',IDM);
set(handles.edit14,'string',Contrast);
set(handles.edit15,'string',Correlation);
set(handles.edit16,'string',Energy);
set(handles.edit17,'string',Homogeneity);
end
% hObject handle to pushbutton3 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
load Trainset.mat
%data = [meas(:,1), meas(:,2)];
Accuracy_Percent= zeros(200,1);
itr = 80;
hWaitBar = waitbar(0,'Evaluating Maximum Accuracy with 80 iterations');
for i = 1:itr
data = meas;
%groups = ismember(label,'BENIGN ');
groups = ismember(label,'MALIGNANT');
[train,test] = crossvalind('HoldOut',groups);
cp = classperf(groups);
%svmStruct = svmtrain(data(train,:),groups(train),'boxconstraint',Inf,'showplot',false,'kernel_function','rbf');
svmStruct_RBF = svmtrain(data(train,:),groups(train),'boxconstraint',Inf,'showplot',false,'kernel_function','rbf');
classes2 = svmclassify(svmStruct_RBF,data(test,:),'showplot',false);
classperf(cp,classes2,test);
%Accuracy_Classification_RBF = cp.CorrectRate.*100;
Accuracy_Percent(i) = cp.CorrectRate.*100;
sprintf('Accuracy of RBF Kernel is: %g%%',Accuracy_Percent(i))
waitbar(i/itr);
end
delete(hWaitBar);
Max_Accuracy = max(Accuracy_Percent);
sprintf('Accuracy of RBF kernel is: %g%%',Max_Accuracy)
set(handles.edit1,'string',Max_Accuracy);
guidata(hObject,handles);
[train,test] = crossvalind('HoldOut',groups);
cp = classperf(groups);
svmStruct = svmtrain(data(train,:),groups(train),'showplot',false,'kernel_function','linear');
classes = svmclassify(svmStruct,data(test,:),'showplot',false);
classperf(cp,classes,test);
%Accuracy_Classification = cp.CorrectRate.*100;
Accuracy_Percent(i) = cp.CorrectRate.*100;
sprintf('Accuracy of Linear Kernel is: %g%%',Accuracy_Percent(i))
waitbar(i/itr);
end
delete(hWaitBar);
Max_Accuracy = max(Accuracy_Percent);
sprintf('Accuracy of Linear kernel is: %g%%',Max_Accuracy)
set(handles.edit2,'string',Max_Accuracy);
function edit1_Callback(hObject, eventdata, handles)
% hObject handle to edit1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
function edit3_Callback(hObject, eventdata, handles)
% hObject handle to edit3 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end
% Hint: edit controls usually have a white background on Windows.
%       See ISPC and COMPUTER.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end
%groups = ismember(label,'BENIGN ');
groups = ismember(label,'MALIGNANT');
[train,test] = crossvalind('HoldOut',groups);
cp = classperf(groups);
svmStruct4 = svmtrain(data(train,:),groups(train),'showplot',false,'kernel_function','quadratic');
classes4 = svmclassify(svmStruct4,data(test,:),'showplot',false);
classperf(cp,classes4,test);
%Accuracy_Classification_Quad = cp.CorrectRate.*100;
Accuracy_Percent(i) = cp.CorrectRate.*100;
sprintf('Accuracy of Quadratic Kernel is: %g%%',Accuracy_Percent(i))
waitbar(i/itr);
end
delete(hWaitBar);
Max_Accuracy = max(Accuracy_Percent);
sprintf('Accuracy of Quadratic kernel is: %g%%',Max_Accuracy)
set(handles.edit19,'string',Max_Accuracy);
% Convert to grayscale
gray = rgb2gray(I);
cform = makecform('srgb2lab');
% Apply the colorform
lab_he = applycform(I,cform);
for k = 1:nColors
colors = I;
colors(rgb_label ~= k) = 0;
segmented_images{k} = colors;
end
%
figure, imshow(segmented_images{1});title('Objects in Cluster 1');
%figure, imshow(segmented_images{2});title('Objects in Cluster 2');
seg_img = im2bw(segmented_images{1});
figure, imshow(seg_img);title('Segmented Tumor');
%seg_img = img;
% Extract features using DWT
x = double(seg_img);
m = size(seg_img,1);
n = size(seg_img,2);
%signal1 = (rand(m,1));
%winsize = floor(size(x,1));
%winsize = int32(floor(size(x)));
%wininc = int32(10);
%J = int32(floor(log(size(x,1))/log(2)));
%Features = getmswpfeat(signal,winsize,wininc,J,'matlab');
%m = size(img,1);
%signal = rand(m,1);
signal1 = seg_img(:,:);
%Feat = getmswpfeat(signal,winsize,wininc,J,'matlab');
%Features = getmswpfeat(signal,winsize,wininc,J,'matlab');
[cA1,cH1,cV1,cD1] = dwt2(signal1,'db4');
[cA2,cH2,cV2,cD2] = dwt2(cA1,'db4');
[cA3,cH3,cV3,cD3] = dwt2(cA2,'db4');
DWT_feat = [cA3,cH3,cV3,cD3];
G = pca(DWT_feat);
whos DWT_feat
whos G
g = graycomatrix(G);
stats = graycoprops(g,'Contrast Correlation Energy Homogeneity');
Contrast = stats.Contrast;
Correlation = stats.Correlation;
Energy = stats.Energy;
Homogeneity = stats.Homogeneity;
Mean = mean2(G);
Standard_Deviation = std2(G);
Entropy = entropy(G);
RMS = mean2(rms(G));
%Skewness = skewness(img)
Variance = mean2(var(double(G)));
a = sum(double(G(:)));
Smoothness = 1-(1/(1+a));
Kurtosis = kurtosis(double(G(:)));
Skewness = skewness(double(G(:)));
% Inverse Difference Movement
m = size(G,1);
n = size(G,2);
in_diff = 0;
for i = 1:m
for j = 1:n
temp = G(i,j)./(1+(i-j).^2);
in_diff = in_diff+temp;
end
end
IDM = double(in_diff);
%feat1 = getmswpfeat(signal1,20,2,2,'matlab');
%signal2 = rand(n,1);
%feat2 = getmswpfeat(signal2,200,6,2,'matlab');
%feat2 = getmswpfeat(signal2,20,2,2,'matlab');
% Combine features
%features = [feat1;feat2];
load Trainset.mat
xdata = meas;
group = label;
%svmStruct = svmtrain(xdata,group,'showplot',false);
% species = svmclassify(svmStruct,feat)
svmStruct1 = svmtrain(xdata,group,'kernel_function', 'linear');
%cp = classperf(group);
%feat1 = [0.1889 0.9646 0.4969 0.9588 31.3445 53.4054 3.0882 6.0023 1.2971e+03 1.0000 4.3694 1.5752 255];
%feat2 = [0.2790 0.9792 0.4229 0.9764 64.4934 88.6850 3.6704 8.4548 2.3192e+03 1.0000 1.8148 0.7854 255];
species = svmclassify(svmStruct1,feat,'showplot',false)
%classperf(cp,species,feat2);
%classperf(cp,feat2);
% Accuracy = cp.CorrectRate;
% Accuracy = Accuracy*100
% Polynomial Kernel
% svmStruct2 = svmtrain(xdata,group,'Polyorder',2,'Kernel_Function','polynomial');
%species_Poly = svmclassify(svmStruct2,feat,'showplot',false)
% Quadratic Kernel
%svmStruct3 = svmtrain(xdata,group,'Kernel_Function','quadratic');
%species_Quad = svmclassify(svmStruct3,feat,'showplot',false)
% RBF Kernel
%svmStruct4 = svmtrain(xdata,group,'RBF_Sigma',3,'Kernel_Function','rbf','boxconstraint',Inf);
%species_RBF = svmclassify(svmStruct4,feat,'showplot',false)
% To plot classification graphs, SVM can take only two dimensional data
data1 = [meas(:,1), meas(:,2)];
newfeat = [feat(:,1),feat(:,2)];
pause
%close all
%%
% Multiple runs for accuracy highest is 90%
load Trainset.mat
%data = [meas(:,1), meas(:,2)];
data = meas;
%groups = ismember(label,'BENIGN ');
groups = ismember(label,'MALIGNANT');
[train,test] = crossvalind('HoldOut',groups);
cp = classperf(groups);
%svmStruct = svmtrain(data(train,:),groups(train),'boxconstraint',Inf,'showplot',false,'kernel_function','rbf');
svmStruct = svmtrain(data(train,:),groups(train),'showplot',false,'kernel_function','linear');
classes = svmclassify(svmStruct,data(test,:),'showplot',false);
classperf(cp,classes,test);
Accuracy_Classification = cp.CorrectRate.*100;
sprintf('Accuracy of Linear kernel is: %g%%',Accuracy_Classification)
%%
% data = [xdata(:,1), xdata(:,2)];
% Cross Validation
function yfit = crossfun(xtrain,ytrain,xtest,rbf_sigma,boxconstraint)
svmStruct = svmtrain(xtrain,ytrain,'Kernel_Function','rbf', ...
    'rbf_sigma',rbf_sigma,'boxconstraint',boxconstraint);
yfit = svmclassify(svmStruct,xtest);
c = cvpartition(200,'kfold',10);
minfn = @(z)crossval();
MATLAB CODE FOR CROSS FUNCTION
PARAMETER
[searchmin, fval] = fminsearch(minfn,randn(2,1),opts);
z = exp(searchmin)
svmStruct_Latest = svmtrain(cdata,grp,'Kernel_Function','rbf',...
'rbf_sigma',z(1),'boxconstraint',z(2),'showplot',true);
OUTPUTS
Fig. O2: GUI showing the segmented image of a MALIGNANT brain tumor.
OBSERVATION
4.1 DATABASE
4.2 FEATURE EXTRACTION
The three levels of wavelet decomposition greatly reduce the input
image size, as shown in Fig. 9. The top-left corner of the wavelet-
coefficient image contains the level-3 approximation coefficients, whose
size is only 32 × 32 = 1024.
4.3 FEATURE REDUCTION
As stated above, the number of extracted features was reduced from
65536 to 1024. However, this is still too large for calculation. Thus,
PCA is used to further reduce the dimensionality of the features. The
curve of the cumulative sum of variance versus the number of principal
components is shown in Fig. 10.
The variances versus the number of principal components from
1 to 20 are listed in Table 3. It shows that only 19 principal components
(bold font in the table), which are only 1.86% of the original features,
can preserve 95.4% of the total variance.
4.4 CLASSIFICATION ACCURACY
We tested four SVMs with different kernels (LIN, HPOL, IPOL, and
GRB). In the case of the linear kernel, the KSVM degrades to the
original linear SVM.
We ran hundreds of simulations in order to estimate the optimal
parameters of the kernel functions, such as the order d in the HPOL and
IPOL kernels and the scaling factor in the GRB kernel. The confusion
matrices of the four methods are listed in Table 4. The element in the
i-th row and j-th column represents the proportion of samples belonging
to class i that are assigned to class j after the supervised classification.
The results show that the proposed DWT+PCA+KSVM method
obtains excellent results on both training and validation images. For the
linear kernel, the whole classification accuracy was (17 + 135)/160 = 95%;
for the HPOL kernel, (19 + 136)/160 = 96.88%; for the IPOL kernel,
(18 + 139)/160 = 98.12%; and for the GRB kernel, (20 + 139)/160 =
99.38%. Obviously, the GRB kernel SVM outperformed the other three
kernel SVMs.
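The quoted accuracies are just the confusion-matrix diagonal sums over the 160 images; a quick arithmetic check:

```python
# Whole-dataset accuracy = (correctly classified benign + malignant) / total.
def accuracy(diag_sum, total=160):
    return 100.0 * diag_sum / total

assert accuracy(17 + 135) == 95.0              # linear kernel
assert abs(accuracy(19 + 136) - 96.875) < 1e-9 # HPOL, reported as 96.88%
assert abs(accuracy(18 + 139) - 98.125) < 1e-9 # IPOL, reported as 98.12%
assert abs(accuracy(20 + 139) - 99.375) < 1e-9 # GRB, reported as 99.38%
```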
Moreover, we compared our method with six popular methods
described in the recent literature (DWT+SOM [12], DWT+SVM with
linear kernel [12], DWT+SVM with RBF kernel [12], DWT+PCA+ANN
[41], DWT+PCA+kNN [41], and DWT+PCA+ACPSO+FNN [25])
using the same MRI dataset and the same number of images. The
comparison results are shown in Table 5. They indicate that our proposed
DWT+PCA+KSVM with GRB kernel performed best among the 10
methods, achieving the highest classification accuracy of 99.38%. Next
comes the DWT+PCA+ACPSO+FNN method [25] with 98.75%
classification accuracy. The third best is our proposed DWT+PCA+KSVM
with IPOL kernel, with 98.12% classification accuracy.
4.5 TIME ANALYSIS
Computation time is another important factor in evaluating a
classifier. The time for SVM training was not considered, since the
parameters of the SVM remain unchanged after training. We sent all
160 images into the classifier, recorded the corresponding computation
times, computed the average values, and depicted the time consumed by
the different stages in Fig. 11.
For each 256 × 256 image, the average computation time for feature
extraction, feature reduction, and SVM classification is 0.023 s,
0.0187 s, and 0.0031 s, respectively. The feature extraction stage is
the most time-consuming at 0.023 s. The feature reduction costs
0.0187 s. The SVM classification costs the least time, only 0.0031 s.
The total computation time for each 256 × 256 image is about
0.0448 s, which is fast enough for real-time diagnosis.
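The per-image total can be sanity-checked from the stage timings:

```python
# Per-image computation time: feature extraction + reduction + classification.
stages = {'feature extraction': 0.023,
          'feature reduction': 0.0187,
          'SVM classification': 0.0031}
total = sum(stages.values())
assert abs(total - 0.0448) < 1e-9
# Roughly 1 / 0.0448, i.e. about 22 images per second.
```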
CHAPTER 5: CONCLUSIONS AND DISCUSSIONS
The proposed DWT+PCA+KSVM method with GRB kernel shows
superiority over the SVMs with LIN, HPOL, and IPOL kernels. The
reason is that the GRB kernel takes the form of an exponential function,
which can enlarge the distance between samples to an extent that the
HPOL kernel cannot reach. In future work, we will apply the GRB
kernel to other industrial fields.
REFERENCES
1. Zhang, Y., L. Wu, and S. Wang, “Magnetic resonance brain
image classification by an improved artificial bee colony
algorithm,” Progress in Electromagnetics Research, Vol. 116,
65-79, 2011.
2. Mohsin, S. A., N. M. Sheikh, and U. Saeed, “MRI induced
heating of deep brain stimulation leads: Effect of the air-tissue
interface,” Progress in Electromagnetics Research, Vol. 83, 81-91,
2008.
3. Golestanirad, L., A. P. Izquierdo, S. J. Graham, J. R. Mosig, and
C. Pollo, “Effect of realistic modeling of deep brain stimulation
on the prediction of volume of activated tissue,” Progress in
Electromagnetics Research, Vol. 126, 1-16, 2012.
4. Mohsin, S. A., “Concentration of the specific absorption rate
around deep brain stimulation electrodes during MRI,” Progress
in Electromagnetics Research, Vol. 121, 469-484, 2011.
5. Asimakis, N. P., I. S. Karanasiou, P. K. Gkonis, and N. K.
Uzunoglu, “Theoretical analysis of a passive acoustic brain
monitoring system,” Progress in Electromagnetics Research B,
Vol. 23, 165-180, 2010.
Future scope
1. Medical image segmentation plays a very important role in the
field of image-guided surgeries.
Thank you