
CHAPTER-1
INTRODUCTION

1.1 INTRODUCTION
A brain tumor is one of the leading causes of death among cancers, and a timely, accurate diagnosis can save a patient's life. In this project we work with magnetic resonance (MR) imaging, which physicians widely use to determine the presence and characteristics of tumors. The quality of brain cancer treatment, however, depends heavily on the physician's experience and knowledge, so we propose an automated, reliable system for brain tumor diagnosis. In the proposed system, noise removal is first performed as a pre-processing step on the brain MR images, and texture features are extracted from the noise-free images. The next phase is Self-Organizing Map based feature training, followed by CNN-based classification built on these extracted features. The classification phase achieves more than 91% accuracy, and the results show that tumor images are recognized quite accurately. The technique has been tested against datasets of different patients received from a medical organization. The pipeline consists of several stages: pre-processing, segmentation, feature extraction, and classification using a convolutional neural network; a brief description of each step is given in the following chapters. In the GUI, the user selects an image or file with a DICOM or JPG extension and uploads it into the database. When the user then clicks the Detect button, a pop-up shows the classification result along with the uploaded image. The GUI is built with the Django framework and HTML/CSS.

1.2 PROJECT OBJECTIVE


A brain tumor is an uncontrolled growth of tissue that may develop in regions of the brain and disrupt the normal functioning of the body. Imaging plays a central role in the diagnosis and treatment of brain tumors. Tumor imaging can be done in several ways, such as Computed Tomography (CT), ultrasound, and Magnetic Resonance Imaging (MRI). Because it is non-invasive and provides high-resolution images of soft tissue, the MR image has become an important tool for diagnosing brain tumors. An MR image of the brain carries a large amount of spatial information about brain structure that can be used for medical diagnosis. Brain tumors are considered among the most deadly forms of cancer and among the most difficult to identify and treat. After almost two decades of development, pioneering computer-aided techniques for segmenting brain tumors have become increasingly mature and are moving closer to routine clinical application.

Fig-1.1: Classification for Tumor and Non-Tumor Brain

In this work, efficient automatic brain tumor detection is performed using a convolutional neural network. The simulation is implemented in Python. The accuracy is calculated and compared with other state-of-the-art methods, and the training accuracy, validation accuracy, and validation loss are computed to evaluate the efficiency of the proposed brain tumor classification scheme. In the proposed approach, CNN-based classification does not require a separate feature extraction step; the feature values are learned by the CNN itself. Fig. 1.1 shows the classified result for tumor and non-tumor brain images. As a result, the complexity and computation time are low while the accuracy is high. The classification result is either a tumor brain or a non-tumor brain based on the probability score: a normal brain image receives a low probability score, while a tumor brain receives a high probability score. The classification result is presented through a web interface built with HTML, CSS, JavaScript, and the Django framework.


CHAPTER-2
SOFTWARE AND HARDWARE REQUIREMENTS

The deliverable is a fully functional website combining image processing and a deep learning model. On this website, the user uploads an image of the human brain in a format such as DICOM, JPG, or PNG; image pre-processing and deep learning classification then take place, and the classification output is shown in a pop-up together with the image.

2.1 SOFTWARE REQUIREMENTS


The front end is highly dynamic and developed with minimalistic UI/UX design principles in mind, for easy and interactive use of the application. In the back end, image pre-processing and deep learning classification are performed and the result is returned together with the given image.

2.1.1 TENSORFLOW DEEP LEARNING FRAMEWORK


2.1.1.1 What is Tensorflow?

TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library and is also used for machine learning applications such as neural networks. It is used for both research and production at Google. TensorFlow was developed by the Google Brain team for internal Google use.

2.1.1.2 Why Tensorflow?


TensorFlow can train and run deep neural networks for handwritten digit classification, image recognition, word embeddings, recurrent neural networks, sequence-to-sequence models for machine translation, natural language processing, and PDE (partial differential equation) based simulations. Best of all, TensorFlow supports production prediction at scale, with the same models used for training.

2.1.1.3 How does TensorFlow work?

TensorFlow allows developers to create dataflow graphs, structures that describe how data moves through a graph, or a series of processing nodes. Each node in the graph represents a mathematical operation, and each connection or edge between nodes is a multidimensional data array, or tensor.

TensorFlow provides all of this for the programmer by way of the Python language. Python is easy to learn and work with, and it provides convenient ways to express how high-level abstractions can be coupled together. Nodes and tensors in TensorFlow are Python objects, and TensorFlow applications are themselves Python applications.
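To make these ideas concrete, here is a minimal sketch (assuming TensorFlow 2.x, which traces Python functions into dataflow graphs through tf.function); the constants and values are only illustrative:

# Each operation below becomes a node in the dataflow graph, and the values
# flowing along the edges are tensors.
import tensorflow as tf

a = tf.constant([[1.0, 2.0]])           # a 1x2 tensor (an edge in the graph)
w = tf.constant([[3.0], [4.0]])         # a 2x1 tensor of weights

@tf.function                            # traces this Python function into a graph
def model(x):
    return tf.nn.relu(tf.matmul(x, w))  # matmul and relu are nodes in the graph

print(model(a))                         # tf.Tensor([[11.]], shape=(1, 1), dtype=float32)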

2.1.2 OPENCV COMPUTER VISION LIBRARY


2.1.2.1 What is OpenCV?

OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code.

2.1.2.2 Why OpenCV?

OpenCV was designed for computational efficiency and with a strong focus on real-time applications. Written in optimized C/C++, the library can take advantage of multi-core processing by using Intel Threading Building Blocks (TBB).
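Since the pre-processing stage of this project removes noise from the MR images, a small sketch of how OpenCV could be used for that step is shown below; the file names are placeholders, not part of the actual project code:

# Hypothetical pre-processing sketch: load an MR slice, convert it to
# grayscale, and suppress noise with a median filter.
import cv2

img = cv2.imread("brain_mri.jpg")                 # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # MR slices are effectively single-channel
denoised = cv2.medianBlur(gray, 3)                # 3x3 median filter suppresses impulse noise
cv2.imwrite("brain_mri_denoised.jpg", denoised)   # hand the cleaned image to the next stage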


2.1.3 PYDICOM LIBRARY


2.1.3.1 What is PYDICOM LIBRARY?
Pydicom is a pure Python package for working with DICOM files such as medical images, reports, and radiotherapy objects. Pydicom makes it easy to read these complex files into natural Pythonic structures for easy manipulation. Modified datasets can be written back to DICOM format files.
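As a rough illustration (assuming pydicom and NumPy are installed; the file name and header fields are only examples), reading a DICOM file and extracting its pixel data might look like this:

# Sketch: read a DICOM file, inspect a couple of header fields, and pull out
# the pixel array so it can be passed to the pre-processing step.
import numpy as np
import pydicom

ds = pydicom.dcmread("scan.dcm")                  # placeholder file name
print(ds.PatientID, ds.Modality)                  # standard DICOM header attributes
pixels = ds.pixel_array.astype(np.float32)

# scale to 0-255 so the slice can be saved as JPG or fed to the CNN
rng = pixels.max() - pixels.min()
pixels = 255.0 * (pixels - pixels.min()) / (rng if rng else 1.0)
print(pixels.shape, pixels.dtype)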

2.1.4 DJANGO FRAMEWORK (PYTHON)


2.1.4.1 What is Django Framework?
Django is a high-level Python Web framework that encourages rapid
development and clean, pragmatic design. Built by experienced developers, it
takes care of much of the hassle of Web development, so you can focus on
writing your app without needing to reinvent the wheel. It’s free and open
source.
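The GUI described in Chapter 1 (upload an image, click Detect, show the result in a pop-up) maps naturally onto a Django view. The following is only a hypothetical sketch, not the project's actual code; the names upload_view and predict_tumor are illustrative, and predict_tumor is a stub standing in for the real pre-processing and CNN inference:

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

def predict_tumor(uploaded_file):
    """Stub for the real pipeline: de-noise the image and run the CNN."""
    return "Tumor"

@csrf_exempt
def upload_view(request):
    # the Detect button posts the selected MR image to this view
    if request.method == "POST" and request.FILES.get("mr_image"):
        label = predict_tumor(request.FILES["mr_image"])
        return JsonResponse({"result": label})    # shown in the pop-up
    return JsonResponse({"error": "no image uploaded"}, status=400)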

2.1.5 GOOGLE COLAB


2.1.5.1 What is Google Colab?
Google Colab is a free cloud service that now supports a free GPU. With it you can improve your Python programming skills and develop deep learning applications using popular libraries such as Keras, TensorFlow, PyTorch, and OpenCV. The most important feature that distinguishes Colab from other free cloud services is that it provides a GPU and is totally free.

2.1.6 JUPYTER NOTEBOOK


2.1.6.1 What is Jupyter Notebook?
In this case, "notebook" or "notebook documents" denote documents that
contain both code and rich text elements, such as figures, links, equations, ...
Because of the mix of code and text elements, these documents are the ideal
place to bring together an analysis description, and its results, as well as, they

6
SISTec/BE/CS/2019/7/Project 1/01

can be executed perform the data analysis in real-time. The Jupyter Notebook
App produces these documents.

2.1.7 PYCHARM IDE (PYTHON)


2.1.7.1 What is Pycharm?
PyCharm is one of the most widely used IDEs for the Python programming language. At present, this Python IDE is used by large enterprises like
Twitter, Pinterest, HP, Symantec, and Groupon. JetBrains has developed
PyCharm as a cross-platform IDE for Python. In addition to supporting versions
2.x and 3.x of Python, PyCharm is also compatible with Windows, Linux, and
macOS. At the same time, the tools and features provided by PyCharm help
programmers to write a variety of software applications in Python quickly and
efficiently. The developers can even customize the PyCharm UI according to
their specific needs and preferences. Also, they can extend the IDE by choosing
from over 50 plug-ins to meet complex project requirements.

2.2 HARDWARE REQUIREMENTS


2.2.1 FOR LOCAL MACHINE
● Processor: 7th Gen Intel Core i3-7100U processor, 2.4GHz base processor
speed, 2 cores, 3MB cache
● Operating System: Pre-loaded Windows 10 Home with lifetime validity
● Display: 15.6-inch Full HD (1920x1080) WLED display, Display Features:
Diagonal FHD SVA Anti-Glare WLED-backlit Display
● Memory & Storage: 4GB DDR4 RAM Intel HD Graphics 620 | Storage: 1TB
HDD, HDD Speed(RPM): 5400 RPM
● Design & battery: Multi-touch gesture support | Thin and light design |
Laptop weight: 2.2 kg | Average battery life = 7 hours, HP Fast Charge
battery, Battery: 3 Cell, Li-Ion, Power Supply: 41 W AC Adapter W


2.2.2 FOR GOOGLE COLAB


● GPU: 1xTesla K80, compute 3.7, having 2496 CUDA cores, 12GB
GDDR5 VRAM
● CPU: 1x single-core hyperthreaded Xeon processor @ 2.3 GHz (1 core, 2 threads)
● RAM: ~12.6 GB Available
● Disk: ~33 GB Available


CHAPTER-3
PROBLEM STATEMENT

Brain cancer is one of the most dangerous diseases in the world, so early recognition is key to its cure. Because the human brain is a very complex structure, analysis of tumors in this region is difficult, and since medical images have different textures depending on the part of the body considered, classifying them is a challenging problem. Existing systems use algorithms such as k-means and fuzzy c-means for segmentation, which have several disadvantages: they are slow, they expect the user to specify the number of clusters, and they depend heavily on the initial cluster centers. A further difficult problem in classification is the choice of features that distinguish between classes, so the current procedure is extremely time-consuming and prone to mistakes. To overcome these drawbacks, this project aims to provide an efficient system that uses adaptive k-means clustering for segmentation and SVM and CNN for accurate classification of MRI scans into normal and abnormal based on the extracted features. The aim of this study is to propose a model that evaluates the impact of a deep neural network on grey-scaled segmented images. Correct classification of the brain image leads to the right choice of treatment. Among the many techniques for brain tumor detection, we have used a deep learning (convolutional neural network) approach.


CHAPTER-4
LITERATURE SURVEY

4.1 MAGNETIC RESONANCE IMAGING (MRI)


Magnetic resonance imaging (MRI) is an imaging technique based on the physical phenomenon
of Nuclear Magnetic Resonance (NMR). It is used in medical settings to produce images of the
inside of the human body. MRI can produce an image of the NMR signal in a thin slice through
the human body. By scanning a set of such slices a volume of a part of the human body can be
represented with MRI.

Fig 4.1 MR Image with Tumor

4.2 BRAIN MR IMAGES


MRI is an advanced medical imaging technique that provides rich information about human soft-tissue anatomy. It has several advantages over other imaging techniques, enabling it to provide three-dimensional data with high contrast between soft tissues. However, the amount of data is far too large for manual analysis and interpretation, and this has been one of the biggest obstacles to the effective use of MRI. For this reason, automatic or semi-automatic techniques of computer-aided image analysis are necessary. Segmentation of MR images into different tissue classes, especially gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF), is an important task. Brain MR images have a number of notable properties; in particular, they are statistically simple, being theoretically piecewise constant with a small number of classes.

The need for an automated and well-organized system for brain tumor MR image classification and diagnosis, with accurate results to guide treatment (therapy and surgical planning), has increased. Many studies by different researchers have addressed this problem and produced good accuracy; this chapter concludes with a brief discussion of that previous work. One study focused on achieving a higher level of accuracy and was based on two main parts. The first part is feature extraction using different methodologies such as the local ternary pattern (LTP), the contourlet transform, and the curvelet transform. The second and main part is classification, performed by a DNN, a supervised learning technique. This hybrid method was implemented on a dataset of one thousand MRI images. Unlike the other feature extraction methods discussed, the DNN combined with the contourlet transform gave the highest accuracy, 97.5%, in the minimum time span of 0.088 s. The curvelet transform gave comparable accuracy, but its computation time of 0.15 s was longer. LTP used a shorter time span of 0.094 s, but its accuracy was very low at 18.33%. All results showed the DNN with the contourlet transform to be the best combination. Another considerable piece of work expresses experimental results in terms of two major parameters, time and accuracy, which help establish the proficiency of an algorithm; it ultimately gives a performance comparison of different algorithms, namely DNN, ANN, and KNN. Experimental results and statistical analysis report accuracies of 93.18%, 90.90%, and 81.81% respectively. According to these results, it is very clear that the DNN gave a higher accuracy than the remaining methodologies, KNN and ANN. MRI images were again used as the dataset, and the significant point is that the fusion technique consisting of Gray Level Co-occurrence Matrix features with a DNN classifier gave the better result and a higher level of accuracy. The work proposed in [25] presents a segmentation method that helps users recognize tumors in brain MRI quickly and efficiently. The method introduces asymmetry analysis with consistent behavior in pathological cases. It was applied to numerous datasets with different tumor sizes, locations, and intensities, and achieved automatic detection and segmentation of different categories of brain tumors with high quality. The methodology enables doctors to find a tumor in a patient's brain and to compute the area it occupies, so that effective therapy and treatment can be planned. This goal was achieved by following a few steps of MATLAB image-processing code. The authors were also able to segment different parts of the brain from brain CT images. After area calculation, it was observed that the computed area varies across different slices of the brain images.


CHAPTER-5
SOFTWARE REQUIREMENTS SPECIFICATION

This study of brain MR images supports the brain tumor diagnosis process. Tumors and cancer are harmful, life-threatening diseases, and this study is another effort to show the importance of image classification in the field of biocomputing. Image classification efficiently improves the process of disease diagnosis: it is a process in which images are labeled into a number of predefined classes. Several techniques have been introduced for image classification, such as CNN, SVM, Boltzmann machines, fuzzy c-means, random forests, and many others. This study proposes a model in which a deep neural network is used together with grey-scaled segmentation; the combination of these two techniques gives better results in minimal computation time. The pipeline has several stages: pre-processing, segmentation, feature extraction, and classification using a convolutional neural network, and a brief description of each step is given in the following discussion. In the GUI, the user selects an image or file with a DICOM or JPG extension and uploads it into the database. When the user then clicks the Detect button, a pop-up shows the classification result along with the uploaded image. The GUI is built with the Django framework and HTML/CSS.

5.1 USER
Radiologists, neurologists, doctors, and even patients play the role of client or user. This study helps radiologists, doctors, and surgeons diagnose the disease in a very short time and with high accuracy, and it will contribute effectively to the field of image processing.

5.2 FUNCTIONAL REQUIREMENT


● On a single-page website, the user uploads the brain MR image for tumor classification in formats such as DICOM, JPG, or PNG.


● After the MR image is uploaded, image filtration and de-noising are performed as the first pre-processing step. De-noising applies restoration techniques to remove noise that may creep into the image during acquisition, transmission, or compression.

● After the preprocessing of the MR image, tumor classification will start on the backend.

● The model's classification, i.e. Tumor or Non-Tumor, is displayed on the website in a pop-up together with the given image.

5.3 NON-FUNCTIONAL REQUIREMENT


● User-friendly UI/UX.

● Easy for Radiologists, Neurologists, etc.

● Fast Response

● Less Complex

● Automated

5.4 GOALS AND SCOPES


In this study, a review of the previous work of the last ten years is discussed for comparison purposes. The CNN technique is applied to grey-scaled MR images to obtain accurate classification results for treatment planning and improvement. This study helps radiologists, doctors, and surgeons diagnose the disease in a very short time and with high accuracy, and it will contribute effectively to the field of image processing.


CHAPTER-6
SOFTWARE DESIGN

6.1 USE CASE DIAGRAM

Fig 6.1 Use-Case Diagram


A model has been proposed for efficient tumor detection in brain MR images. The following steps are adopted for detection.

Step 1: Take an input image.

Step 2: Filter the image.

Step 3: Apply the deep neural network classification technique to detect the tumor from the brain MR images. The accuracy of the classification is 90%, calculated using the accuracy formula (see the sketch after these steps).

Step 4: Finally, compute the area of the detected tumor region using an area-calculation algorithm.
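The report does not reproduce the formula itself; the standard definition of classification accuracy, together with a small sketch of how it could be computed from the model's predictions, is given below (the labels are made up):

# Accuracy = correct predictions / total predictions.
# For a binary Tumor / Non-Tumor problem this is (TP + TN) / (TP + TN + FP + FN).
def accuracy(y_true, y_pred):
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

# toy example: 1 = Tumor, 0 = Non-Tumor
print(accuracy([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))   # 0.8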

6.2 WORKING
As indicated in the title, this chapter covers the working of the study. The following subsections outline the research strategy, the research method, the research approach, the source of the datasets, the research process, the type of data analysis, the ethical considerations, and the limitations of the study.

6.2.1 RESEARCH STRATEGY

The flow diagram below demonstrates the plan and actions taken to achieve the objective of the study and to reach its conclusion.

FIGURE 6.2 Representation of research plan and actions


6.2.2 Collection of relevant data and analysis

This study follows both descriptive and experimental analysis. Related data will be collected from books, magazines, research papers, articles, theses, and the internet. All collected study material will be arranged for analysis, and the topic and material will be discussed with the teacher for guidance in order to reach a final conclusion.

6.2.3 Literature review

Collected and arranged material will help to get knowledge about the previous work. This
critical analysis will help in describing the positive and negative points of the previous study
which will help to identify the problem statement.

6.2.4 Identification of the problem

The analysis and discussion will lead to the identification of the problem statement.

6.2.5 Proposed model to solve the problem

The next step will be to find a solution to the identified problem statement. The proposed work is expected to give better and more efficient results.

6.2.6 Implementation of the proposed model

Implement the proposed model and algorithm using the Python toolbox and obtain the results.

6.2.7 Discussion and conclusion

This step leads to a final brief discussion of the whole study and its recommendations, covering the efficiency and improvements of the proposed work. Future work will also be mentioned for the next researchers.

6.2.8 Research methods

This study will be literature-based, meaning that the methodology of analysis includes the selection and discussion of theoretical and descriptive material in the context of a detailed comparison of theories, finding issues and trying to resolve them through the proposed solution model. The study will also be empirically focused.

6.2.9 Datasets

The internet will be a source of relevant data (research papers, theses, and books). Data on the brain tumor classification techniques of previous researchers will be collected, and a comparison will be applied on the basis of different attributes such as:

• The technique (which is used for image processing)

• Accuracy rate

The dataset of MR images will be obtained via the internet (web brain website). Different types of MR images will be available according to the requirements.

6.2.10 Data analysis technique

Collected data will be analyzed and compared on the basis of different attributes. A specific technique or algorithm is used to examine the brain MR images. The technique used for classification of MR images in the proposed model is the deep neural network (DNN). "Normal" neural networks usually have one or two hidden layers and are typically used for supervised prediction or classification; deep neural network architectures differ from "normal" neural networks in having additional hidden layers, as shown in Fig 6.3. A deep neural network is a computational model inspired by the human brain: like the brain, the DNN consists of interconnected processing elements (neurons) that define the task of the network, and processing is divided into groups called layers. A DNN contains three kinds of layers: an input layer, hidden layers, and an output layer. When an image is given as input, the DNN produces an output in the form of a vector of scores, one for each object class, and the class with the highest score indicates the most likely class of objects in the image. The goal of training the DNN is to adjust the weights so as to maximize the score of the correct class and minimize the scores of the incorrect classes. During training, the gap between the computed scores and the correct scores is called the loss, and the goal is to minimize the average loss over the large training set.
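As a small illustration of the score vector described above (a sketch with made-up numbers, not output from the actual model), the class with the highest softmax probability is taken as the prediction:

# Turn raw class scores into probabilities and pick the most likely class.
import numpy as np

scores = np.array([2.1, -0.4])                     # made-up scores for [Tumor, Non-Tumor]
probs = np.exp(scores) / np.exp(scores).sum()      # softmax
labels = ["Tumor", "Non-Tumor"]
print(labels[int(np.argmax(probs))], probs)        # "Tumor" with a probability of about 0.92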

FIG 6.3 Structure of DNN in layers

Deep learning differs not only from "normal" neural networks but also from the support vector machine (the most popular and common algorithm for classification), because deep networks can be trained in an unsupervised or supervised manner for both unsupervised and supervised learning tasks. Before classification, the images will be filtered to enhance the quality of the MR image; this filtration will help de-noise and improve the MR images. Grey-scaled segmentation will then be applied to the filtered images to make them ready for classification and to obtain better results. The classification results will help compute the area of the detected tumor in the brain MR images. For this purpose an area-calculation algorithm will be used: the classified image is divided into pixels, and the rows and columns are counted according to the algorithm; a rough sketch of this computation follows.


6.2.11 Convolutional Neural Network (CNN)

The Convolutional Neural Network (CNN) has been used to achieve breakthrough results and win well-known contests. Applying convolutional layers consists of convolving a signal or an image with kernels to obtain feature maps, so a unit in a feature map is connected to the previous layer through the weights of the kernels. The kernel weights are adapted during the training phase by backpropagation in order to enhance certain characteristics of the input. CNNs are easier to train and less prone to overfitting; the convolutional network architecture and implementation here are carried out using TensorFlow. A CNN is a continuation of the multi-layer perceptron: the network is organized into layers of units, each connected to units in the previous layer. The essence of CNNs is the convolutions. The main trick that lets convolutional networks avoid the problem of too many parameters is sparse connections: unlike in traditional neural networks, every unit is not connected to every other unit in the previous layer. The following concepts are important in the context of CNNs, and a minimal model sketch is given after the figures below.

Fig 6.4: CNN layers arranged in 3-dimensions

Fig 6.5: Max-pooling with a 2x2 filter
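The sketch below is not the project's actual network; it is a tiny Keras CNN (assuming TensorFlow 2.x) that puts together the ingredients discussed in this section, namely convolution, 2x2 max-pooling, a leaky-ReLU activation, dropout in the fully connected part, and a softmax output for the two classes:

# Illustrative CNN for binary Tumor / Non-Tumor classification.
# The 128x128 grayscale input and the layer sizes are assumptions, not the report's exact model.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(32, (3, 3), padding="same"),
    layers.LeakyReLU(0.01),                 # leaky rectified linear unit (LReLU)
    layers.MaxPooling2D((2, 2)),            # 2x2 max-pooling, as in Fig 6.5
    layers.Conv2D(64, (3, 3), padding="same"),
    layers.LeakyReLU(0.01),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),                    # regularization in the FC layers
    layers.Dense(2, activation="softmax"),  # probability scores for Tumor / Non-Tumor
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",   # the loss described in section 6.2.11.6
              metrics=["accuracy"])
model.summary()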


6.2.11.1 Initialization

It is important for achieving convergence. We use Xavier initialization; with this, the activations and the gradients are maintained at controlled levels, otherwise back-propagated gradients could vanish or explode.

6.2.11.2 Activation Function

It is responsible for non-linearly transforming the data. Rectified linear units (ReLU), defined as f(x) = max(0, x), were found to achieve better results than the more classical sigmoid or hyperbolic tangent functions, and to speed up training. However, imposing a constant 0 can impair the gradient flow and the consequent adjustment of the weights. We cope with this limitation using a variant called the leaky rectified linear unit (LReLU), which introduces a small slope on the negative part of the function and is defined as f(x) = max(0, x) + α min(0, x), where α is the leakiness parameter. In the last fully connected (FC) layer, we use softmax.
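A tiny NumPy sketch of both activations (with an assumed leakiness of α = 0.01) makes the difference concrete:

# ReLU zeroes negative inputs; LReLU keeps a small slope there instead.
import numpy as np

def relu(x):
    return np.maximum(0, x)

def leaky_relu(x, alpha=0.01):
    return np.maximum(0, x) + alpha * np.minimum(0, x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))          # [0.  0.  0.  1.5]
print(leaky_relu(x))    # [-0.02  -0.005  0.  1.5]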

6.2.11.3 Pooling

It combines spatially nearby features in the feature maps. This combination of possibly redundant features makes the representation more compact and invariant to small image changes, such as insignificant details; it also decreases the computational load of the next stages. To combine features, it is most common to use max-pooling or average pooling.
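As an illustration (a sketch with a made-up 4x4 feature map), 2x2 max-pooling keeps the largest value in each non-overlapping 2x2 block, as in Fig 6.5:

# 2x2 max-pooling on a small feature map.
import numpy as np

fmap = np.array([[1, 3, 2, 0],
                 [4, 2, 1, 5],
                 [0, 1, 7, 2],
                 [3, 2, 4, 6]])

# group the 4x4 map into 2x2 blocks and take the maximum of each block
pooled = fmap.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)   # [[4 5]
                #  [3 7]]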


6.2.11.4 Regularization

It is used to reduce over-fitting. We use dropout in the FC layers: in each training step, it removes nodes from the network with probability p. In this way, it forces all nodes of the FC layers to learn better representations of the data, preventing nodes from co-adapting to each other. At test time, all nodes are used. Dropout can be seen as an ensemble of different networks and a form of bagging, since each network is trained with a portion of the training data.

6.2.11.5 Data Augmentation

It can be used to increase the size of training sets and reduce overfitting. Since the class of a patch is determined by its central voxel, we restricted the data augmentation to rotation operations. Some authors also consider image translations, but for segmentation this could result in attributing a wrong class to the patch. So, we increased our data set during training by generating new patches through rotation of the original patch. In our proposal, we used angles that are multiples of 90°, although other alternatives will be evaluated.
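A minimal NumPy sketch of this 90° rotation augmentation (the patch is a stand-in, not real data):

# Generate rotated copies of a patch at multiples of 90 degrees.
import numpy as np

patch = np.arange(9).reshape(3, 3)                    # stand-in for an image patch
augmented = [np.rot90(patch, k) for k in range(4)]    # 0, 90, 180 and 270 degrees
print(len(augmented))                                 # 4 patches from one original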

6.2.11.6 Loss Function

It is the function to be minimized during training. We used the categorical cross-entropy, H = − Σ_{j ∈ voxels} Σ_{k ∈ classes} C_{j,k} log(Ĉ_{j,k}), where Ĉ represents the probabilistic predictions (after the softmax) and C is the target. The high-level reasoning in the neural network is done via fully connected layers.
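A small numeric sketch of this loss for a single sample (made-up prediction, two classes):

# Categorical cross-entropy for one one-hot target and one softmax prediction.
import numpy as np

target = np.array([1.0, 0.0])        # true class is "Tumor"
pred = np.array([0.9, 0.1])          # softmax output of the network
loss = -np.sum(target * np.log(pred))
print(loss)                          # about 0.105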


CHAPTER-7
OUTPUT SCREENS

Fig-7.1 Home Page

Fig-7.2 Select an MR image which contains a tumor.


Fig 7.3 Output Screen says, Tumor detected in uploaded MR image.

Fig 7.4 Now select a tumor-free image


Fig. 7.5 Output Screen says the uploaded image is tumor-free.

Fig 7.6 Now upload a DICOM Image.


Fig 7.7 Output Screen says Brain Tumor is detected in the uploaded DICOM Image.


CHAPTER-8
DEPLOYMENT

Until now, your website was only available on your computer. Now you will learn how to
deploy it! Deploying is the process of publishing your application on the Internet so people can
finally go and see your app. :)

As you learned, a website has to be located on a server. There are a lot of server providers available on the internet; we're going to use PythonAnywhere. PythonAnywhere is free for small applications that don't have too many visitors, so it'll definitely be enough for you for now.

The other external service we'll be using is ​GitHub​, which is a code hosting service. There are
others out there, but almost all programmers have a GitHub account these days, and now so
will you!

These three places will be important to you. Your local computer will be the place where you
do development and testing. When you're happy with the changes, you will place a copy of
your program on GitHub. Your website will be on PythonAnywhere and you will update it by
getting a new copy of your code from GitHub.

8.1 GIT

Git is a "version control system" used by a lot of programmers. This software can track changes
to files over time so that you can recall specific versions later. A bit like the "track changes"
feature in word processor programs (e.g., Microsoft Word or LibreOffice Writer), but much
more powerful.


8.1.1 Installing Git

Installing Git: Windows

Installing Git: OS X

Installing Git: Debian or Ubuntu

Installing Git: Fedora

Installing Git: openSUSE

8.1.2 Starting our Git repository

Git tracks changes to a particular set of files in what's called a code repository (or "repo" for
short). Let's start one for our project. Open up your console and run these commands, in the
djangogirls​ directory:

Note Check your current working directory with a ​pwd (Mac OS X/Linux) or ​cd (Windows)
command before initializing the repository. You should be in the ​djangogirls​ folder.

command-line
$ git init

Initialized empty Git repository in ~/djangogirls/.git/

$ git config --global user.name "Your Name"

$ git config --global user.email you@example.com


Initializing the git repository is something we need to do only once per project (and you won't
have to re-enter the username and email ever again).

Git will track changes to all the files and folders in this directory, but there are some files we
want it to ignore. We do this by creating a file called ​.gitignore in the base directory. Open up
your editor and create a new file with the following contents:

.gitignore
*.pyc

*~

__pycache__

myvenv

db.sqlite3

/static

.DS_Store

And save it as ​.gitignore​ in the "djangogirls" folder.

Note The dot at the beginning of the file name is important! If you're having any difficulty
creating it (Macs don't like you to create files that begin with a dot via the Finder, for example),
then use the "Save As" feature in your editor; it's bulletproof. And be sure not to add ​.txt​, ​.py​,
or any other extension to the file name -- it will only be recognized by Git if the name is just
.gitignore​.

Note One of the files you specified in your .gitignore file is db.sqlite3. That file is your local database, where all of your users and posts are stored. We'll follow standard web programming practice, meaning that we'll use separate databases for your local testing site and your live
website on PythonAnywhere. The PythonAnywhere database could be SQLite, like your
development machine, but usually you will use one called MySQL which can deal with a lot
more site visitors than SQLite. Either way, by ignoring your SQLite database for the GitHub
copy, it means that all of the posts and superuser you created so far are going to only be
available locally, and you'll have to create new ones on production. You should think of your
local database as a good playground where you can test different things and not be afraid that
you're going to delete your real posts from your blog.

It's a good idea to use a ​git status command before ​git add or whenever you find yourself
unsure of what has changed. This will help prevent any surprises from happening, such as
wrong files being added or committed. The ​git status command returns information about any
untracked/modified/staged files, the branch status, and much more. The output should be
similar to the following:

command-line
$ git status

On branch master

Initial commit

Untracked files:

(use "git add <file>..." to include in what will be committed)

.gitignore

blog/

manage.py


mysite/

requirements.txt

nothing added to commit but untracked files present (use "git add" to track)

And finally we save our changes. Go to your console and run these commands:

command-line

$ git add --all .

$ git commit -m "My Django Girls app, first commit"

[...]

13 files changed, 200 insertions(+)

create mode 100644 .gitignore

[...]

create mode 100644 mysite/wsgi.py

8.1.3 Pushing your code to GitHub

Go to ​GitHub.com and sign up for a new, free user account. (If you already did that in the
workshop prep, that is great!) Be sure to remember your password (add it to your password
manager, if you use one).

Then, create a new repository, giving it the name "my-first-blog". Leave the "initialize with a
README" checkbox unchecked, leave the .gitignore option blank (we've done that manually)
and leave the License as None.


On the next screen, you'll be shown your repo's clone URL,

Now we need to hook up the Git repository on your computer to the one up on GitHub.

Type the following into your console (replace ​<your-github-username> with the username you
entered when you created your GitHub account, but without the angle-brackets -- the URL
should match the clone URL you just saw):

command-line
$ git remote add origin https://github.com/<your-github-username>/my-first-blog.git

$ git push -u origin master

When you push to GitHub, you'll be asked for your GitHub username and password (either
right there in the command-line window or in a pop-up window), and after entering credentials
you should see something like this:

command-line
Counting objects: 6, done.

Writing objects: 100% (6/6), 200 bytes | 0 bytes/s, done.

Total 3 (delta 0), reused 0 (delta 0)

To https://github.com/ola/my-first-blog.git

* [new branch] master -> master

Branch master set up to track remote branch master from origin.


Your code is now on GitHub. Go and check it out! You'll find it's in fine company – ​Django​,
the ​Django Girls Tutorial​, and many other great open source software projects also host their
code on GitHub. :)

8.2 SETTING UP OUR BLOG ON PYTHONANYWHERE

8.2.1 Sign up for a PythonAnywhere account

Note You might have already created a PythonAnywhere account earlier during the install steps
– if so, no need to do it again.

PythonAnywhere is a service for running Python code on servers "in the cloud". We'll use it for
hosting our site, live and on the Internet.

We will be hosting the blog we're building on PythonAnywhere. Sign up for a "Beginner"
account on PythonAnywhere (the free tier is fine, you don't need a credit card).

● www.pythonanywhere.com


Note When choosing your username here, bear in mind that your blog's URL will take the form
yourusername.pythonanywhere.com​, so choose either your own nickname or a name for what
your blog is all about. Also, be sure to remember your password (add it to your password
manager, if you use one).

8.2.2 Creating a PythonAnywhere API token

This is something you only need to do once. When you've signed up for PythonAnywhere,
you'll be taken to your dashboard. Find the link near the top right to your "Account" page, then
select the tab named "API token", and hit the button that says "Create new API token".

8.2.3 Configuring our site on PythonAnywhere

Go back to the main ​PythonAnywhere Dashboard by clicking on the logo, and choose the
option to start a "Bash" console – that's the PythonAnywhere version of a command line, just
like the one on your computer.

Note PythonAnywhere is based on Linux, so if you're on Windows, the console will look a
little different from the one on your computer.

Deploying a web app on PythonAnywhere involves pulling down your code from GitHub, and
then configuring PythonAnywhere to recognize it and start serving it as a web application.
There are manual ways of doing it, but PythonAnywhere provides a helper tool that will do it
all for you. Let's install it first:

PythonAnywhere command-line
$ pip3.6 install --user pythonanywhere


That should print out some things like ​Collecting pythonanywhere​, and eventually end with a
line saying ​Successfully installed (...) pythonanywhere- (...)​.

Now we run the helper to automatically configure our app from GitHub. Type the following
into the console on PythonAnywhere (don't forget to use your GitHub username in place of
<your-github-username>​, so that the URL matches the clone URL from GitHub):

PythonAnywhere command-line
$ pa_autoconfigure_django.py --python=3.6
https://github.com/<your-github-username>/my-first-blog.git

As you watch that running, you'll be able to see what it's doing:

● Downloading your code from GitHub


● Creating a virtualenv on PythonAnywhere, just like the one on your own computer
● Updating your settings file with some deployment settings
● Setting up a database on PythonAnywhere using the ​manage.py migrate​ command
● Setting up your static files (we'll learn about these later)
● And configuring PythonAnywhere to serve your web app via its API

On PythonAnywhere all those steps are automated, but they're the same steps you would have
to go through with any other server provider.

The main thing to notice right now is that your database on PythonAnywhere is actually totally
separate from your database on your own computer, so it can have different posts and admin
accounts. As a result, just as we did on your own computer, we need to initialize the admin
account with ​createsuperuser​. PythonAnywhere has automatically activated your virtualenv for
you, so all you need to do is run:

PythonAnywhere command-line


(ola.pythonanywhere.com) $ python manage.py createsuperuser

Type in the details for your admin user. Best to use the same ones as you're using on your own
computer to avoid any confusion, unless you want to make the password on PythonAnywhere
more secure.

Now, if you like, you can also take a look at your code on PythonAnywhere using ​ls​:

PythonAnywhere command-line
(ola.pythonanywhere.com) $ ls

blog db.sqlite3 manage.py mysite requirements.txt static

(ola.pythonanywhere.com) $ ls blog/

__init__.py __pycache__ admin.py apps.py migrations models.py

tests.py views.py

You can also go to the "Files" page and navigate around using PythonAnywhere's built-in file
browser. (From the Console page, you can get to other PythonAnywhere pages from the menu
button in the upper right corner. Once you're on one of the pages, there are links to the other
ones near the top.)

8.3 You are now live!

Your site should now be live on the public Internet! Click through to the PythonAnywhere
"Web" page to get a link to it. You can share this with anyone you want :)

Note This is a beginners' tutorial, and in deploying this site we've taken a few shortcuts which
aren't ideal from a security point of view. If and when you decide to build on this project, or start a new project, you should review the Django deployment checklist for some tips on
securing your site.


REFERENCES
JOURNALS / RESEARCH PAPERS

1. Pereira S, Pinto A, Alves V, Silva CA. Brain tumor segmentation using convolutional
neural networks in MRI images. IEEE T Med Imaging 2016; 35: 1240-1251
2. Dandıl E, Çakıroğlu M, Ekşi Z. Computer-aided diagnosis of malign and benign brain
tumors on MR images. In: ICT Innovations 2014; 18-23 September 2017; Skopje,
Macedonia. pp. 157-166.
3. Gao XW, Hui R, Tian Z. Classification of CT brain images based on deep learning
networks. Comput Meth Prog Bio 2017; 138: 49-56

WEBSITES (with exact URL up to a page)

4. Medical Image Analysis with Deep Learning — I - Taposh Dutta-Roy - Medium


5. Medical Image Analysis with Deep Learning - Towards Data Science
6. DICOM to JPG and extract all patient's information using python.
7. ​Deep Learning with Magnetic Resonance and Computed Tomography Images


APPENDIX-1 GLOSSARY OF TERMS


(In alphabetical order)

A
ANN An artificial neuron network (ANN) is a computational model based on the
structure and functions of biological neural networks. Information that flows
through the network affects the structure of the ANN.
AI Artificial intelligence (AI) is the simulation of human intelligence processes
by machines, especially computer systems. These processes include learning
(the acquisition of information and rules for using the information), reasoning
(using rules to reach approximate or definite conclusions) and self-correction.

C
CNN Convolutional Neural Network
CUDA Compute Unified Device Architecture is a parallel computing platform and application programming interface (API) model created by Nvidia.

D
DNN Deep Neural Network
DICOM Digital Imaging and Communications in Medicine

G
GPU Graphics Processing Unit

M
MRI Magnetic resonance imaging
