
Chapter 1

Introduction
There are many applications related to brain imaging that either require, or benefit from, the ability to
accurately segment brain from non-brain tissue. Brain magnetic resonance image (MRI) analysis is an
important aim of medical image analysis, as it is used to extract precise information for the diagnosis
and treatment of disease. In medical diagnosis, MRI is important for the detection and segmentation of
brain tumors, which are among the most common brain diseases; their diagnosis and treatment are
therefore of vital importance for more than 400,000 people per year worldwide, based on World Health
Organization (WHO) estimates. On the other hand, recent developments in medical imaging techniques
allow their use in several domains of medicine, for example computer-aided diagnosis of pathologies,
follow-up of these pathologies, surgical planning, surgical guidance, and statistical and time-series
(longitudinal) analysis. Among all medical image modalities, Magnetic Resonance Imaging (MRI) is the
one most frequently used in neuroscience and neurosurgery for these applications. MRI creates images
that clearly visualize the anatomical structures of the brain, such as its deep structures and tissues, as
well as pathologies.

Segmentation of objects, mainly anatomical structures and pathologies, from MR images is a
fundamental task, since the results often become the basis for other applications. Methods for
performing segmentation vary widely depending on the specific application and image modality.
Moreover, the segmentation of medical images is a challenging task: they usually involve a large
amount of data, they sometimes contain artifacts due to patient motion or limited acquisition time, and
soft-tissue boundaries are usually not well defined.

When dealing with brain tumors, further problems arise that make their segmentation more difficult.
There is a large class of tumor types with a variety of shapes and sizes. They may appear at any
location and with different image intensities. Some of them may also deform the surrounding structures
or may be associated with edema or necrosis that changes the image intensities around the tumor. In
addition, the existence of several MR acquisition protocols provides different information on the brain,
and each image usually highlights a particular region of the tumor. Automated segmentation with prior
models or using prior knowledge is therefore difficult to implement.

In this chapter we review some of the brain and tumor characteristics that are useful for the detection,
segmentation and interpretation of brain tumors and their surrounding structures in magnetic resonance
(MR) images. The chapter starts with an overview of medical image processing and CAD systems.
Section 1.2 gives a view of medical image processing on the brain. Sections 1.3 and 1.4 give an
overview of the challenges for medical image processing and of its motivation and scope. Finally, the
outline is given in Section 1.5.

1.1 Medical Image Processing and CAD System
1.1.1 Medical Image
Medical imaging is the technique and process used to create images of the human body (or parts and
function thereof) for clinical purposes (medical procedures seeking to reveal, diagnose, or examine
disease) or medical science (including the study of normal anatomy and physiology). Although imaging
of removed organs and tissues can be performed for medical reasons, such procedures are not usually
referred to as medical imaging, but rather are a part of pathology.

As a discipline and in its widest sense, it is part of biological imaging and incorporates radiology (in the
wider sense), nuclear medicine, investigative radiological sciences, endoscopy, (medical)
thermography, medical photography and microscopy (e.g. for human pathological investigations).

Measurement and recording techniques which are not primarily designed to produce images, such as
electroencephalography (EEG), magnetoencephalography (MEG), electrocardiography (EKG) and
others, but which produce data that can be represented as maps (i.e. containing positional
information), can be seen as forms of medical imaging.

In the clinical context, "invisible light" medical imaging is generally equated to radiology or "clinical
imaging" and the medical practitioner responsible for interpreting (and sometimes acquiring) the images
is a radiologist. "Visible light" medical imaging involves digital video or still pictures that can be seen
without special equipment. Dermatology and wound care are two modalities that utilize visible light
imagery. Diagnostic radiography designates the technical aspects of medical imaging and in particular
the acquisition of medical images. The radiographer or radiologic technologist is usually responsible
for acquiring medical images of diagnostic quality, although some radiological interventions are
performed by radiologists. While radiology is an evaluation of anatomy, nuclear medicine provides
functional assessment.

Medical imaging is often perceived to designate the set of techniques that noninvasively produce
images of the internal aspect of the body. In this restricted sense, medical imaging can be seen as the
solution of mathematical inverse problems. This means that cause (the properties of living tissue) is
inferred from effect (the observed signal). In the case of ultrasonography the probe consists of
ultrasonic pressure waves and echoes inside the tissue show the internal structure. In the case of
projection radiography, the probe is X-ray radiation which is absorbed at different rates in different
tissue types such as bone, muscle and fat.

A. Imaging Technology

 Radiography: Two forms of radiographic images are in use in medical imaging:
projection radiography and fluoroscopy, the latter being useful for catheter guidance.
These 2D techniques are still in wide use despite the advance of 3D tomography, due to
their low cost, high resolution and, depending on the application, lower radiation dosages.
This imaging modality uses a wide beam of X-rays for image acquisition and was the first
imaging technique available in modern medicine.

 Fluoroscopy produces real-time images of internal structures of the body in a similar
fashion to radiography, but employs a constant input of X-rays at a lower dose rate.
Contrast media, such as barium, iodine and air, are used to visualize internal organs as
they work. Fluoroscopy is also used in image-guided procedures when constant
feedback is required during a procedure. An image receptor is needed to convert the
radiation into an image after it has passed through the area of interest. Early on this was
a fluorescing screen, which gave way to an image amplifier (IA), a large vacuum tube
with its receiving end coated with caesium iodide and a mirror at the opposite end.
Eventually the mirror was replaced with a TV camera.

 Projectional radiographs, more commonly known as X-rays, are often used to
determine the type and extent of a fracture, as well as to detect pathological changes
in the lungs. With the use of radio-opaque contrast media, such as barium, they can also
be used to visualize the structure of the stomach and intestines; this can help diagnose
ulcers or certain types of colon cancer.

 Magnetic Resonance Imaging (MRI): A magnetic resonance imaging instrument (MRI
scanner), or "nuclear magnetic resonance (NMR) imaging" scanner as it was originally
known, uses powerful magnets to polarise and excite hydrogen nuclei (single protons) in
the water molecules of human tissue, producing a detectable signal which is spatially
encoded, resulting in images of the body. The MRI machine emits an RF (radio frequency)
pulse that resonates only with hydrogen. The system sends the pulse to the area of the body
to be examined, and the pulse makes the protons in that area absorb the energy needed to
make them spin in a different direction. This is the "resonance" part of MRI. The RF pulse
makes them (only the one or two extra unmatched protons per million) spin at a specific
frequency, in a specific direction. This particular frequency of resonance is called the
Larmor frequency and is calculated from the particular tissue being imaged and the
strength of the main magnetic field. MRI uses three electromagnetic fields: a very strong
(on the order of units of teslas) static magnetic field to polarise the hydrogen nuclei, called
the static field; weaker time-varying (on the order of 1 kHz) fields for spatial encoding,
called the gradient fields; and a weak radio-frequency (RF) field for manipulation of the
hydrogen nuclei to produce measurable signals, collected through an RF antenna.
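The Larmor relation mentioned above is simply f = (γ/2π)·B0; for hydrogen, γ/2π is approximately 42.58 MHz per tesla. A quick numerical sketch:

```python
# Larmor frequency: f = (gamma / 2*pi) * B0
# For hydrogen (1H), gamma / 2*pi is approximately 42.577 MHz per tesla.

GAMMA_BAR_MHZ_PER_T = 42.577  # 1H gyromagnetic ratio divided by 2*pi

def larmor_frequency_mhz(b0_tesla):
    """Resonance frequency (MHz) of hydrogen nuclei in a static field B0."""
    return GAMMA_BAR_MHZ_PER_T * b0_tesla

for b0 in (0.5, 1.5, 3.0):
    print(f"B0 = {b0:.1f} T -> f = {larmor_frequency_mhz(b0):.2f} MHz")
```

At a typical clinical field strength of 1.5 T this gives roughly 64 MHz, which is why the RF pulse excites only the hydrogen protons: no other abundant nucleus resonates at that frequency in that field.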

Figure 1.1: MRI image

Like CT, MRI traditionally creates a two dimensional image of a thin "slice" of the body
and is therefore considered a tomographic imaging technique. Modern MRI instruments
are capable of producing images in the form of 3D blocks, which may be considered a
generalisation of the single-slice, tomographic, concept. Unlike CT, MRI does not involve
the use of ionizing radiation and is therefore not associated with the same health hazards.
For example, because MRI has only been in use since the early 1980s, there are no known
long-term effects of exposure to strong static fields (this is the subject of some debate; see
'Safety' in MRI) and therefore there is no limit to the number of scans to which an
individual can be subjected, in contrast with X-ray and CT. However, there are well-identified
health risks associated with tissue heating from exposure to the RF field and with the
presence of implanted devices in the body, such as pacemakers. These risks are strictly
controlled as part of the design of the instrument and the scanning protocols used.

Because CT and MRI are sensitive to different tissue properties, the appearance of the
images obtained with the two techniques differs markedly. In CT, X-rays must be blocked
by some form of dense tissue to create an image, so the image quality when looking at soft
tissues will be poor. In MRI, while any nucleus with a net nuclear spin can be used, the
proton of the hydrogen atom remains the most widely used, especially in the clinical
setting, because it is so ubiquitous and returns a large signal. This nucleus, present in water
molecules, allows the excellent soft-tissue contrast achievable with MRI.

 Fiduciary Markers: Electromagnetic fiducial transponder beacons are placed in the
prostate bed after prostatectomy to improve radiation oncology accuracy. Transponders are
placed in a triangular separation pattern, at least one centimetre apart from one another.
There is one beacon on either lateral aspect of the vesicourethral anastomosis and one in
the retrovesical tissue approximately at the level where the seminal vesicles had been.

Fiduciary markers are used in a wide range of medical imaging applications. Images of the
same subject produced with two different imaging systems may be correlated (a process
called image registration) by placing a fiduciary marker in the area imaged by both
systems. In this case, a marker which is visible in the images produced by both imaging
modalities must be used. By this method, functional information from SPECT or positron
emission tomography can be related to anatomical information provided by magnetic
resonance imaging (MRI). Similarly, fiducial points established during MRI can be
correlated with brain images generated by magnetoencephalography to localize the source
of brain activity.

 Nuclear medicine: Nuclear medicine encompasses both diagnostic imaging and the
treatment of disease, and may also be referred to as molecular medicine or molecular
imaging and therapeutics. Nuclear medicine uses certain properties of isotopes and the
energetic particles emitted from radioactive material to diagnose or treat various
pathologies. In contrast to the typical concept of anatomic radiology, nuclear medicine
enables assessment of physiology. This function-based approach to medical evaluation has
useful applications in most subspecialties, notably oncology, neurology and cardiology.
Gamma cameras are used in, e.g., scintigraphy, SPECT and PET to detect regions of
biological activity that may be associated with disease. A relatively short-lived isotope,
such as 123I, is administered to the patient. Isotopes are often preferentially absorbed by
biologically active tissue in the body, and can be used to identify tumors or fracture points
in bone. Images are acquired after collimated photons are detected by a crystal that gives
off a light signal, which is in turn amplified and converted into count data.
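The short half-life of such isotopes constrains dosing and scheduling. As a sketch, the undecayed fraction follows N(t) = N0 · 2^(−t/T½); the roughly 13.2-hour half-life of 123I used below is a published approximate value, not taken from this chapter:

```python
# Radioactive decay: remaining fraction = 2 ** (-t / half_life)
# The half-life of iodine-123 is roughly 13.2 hours.

HALF_LIFE_I123_H = 13.2

def remaining_fraction(hours, half_life=HALF_LIFE_I123_H):
    """Fraction of the administered isotope still undecayed after `hours`."""
    return 2 ** (-hours / half_life)

# After one half-life, half the activity remains; after a day, well under a third.
print(f"{remaining_fraction(13.2):.3f}")   # 0.500
print(f"{remaining_fraction(24.0):.3f}")
```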

 Scintigraphy ("scint") is a form of diagnostic test wherein radioisotopes are taken
internally, for example intravenously or orally. Gamma cameras then capture and
form two-dimensional images from the radiation emitted by the
radiopharmaceuticals.

 SPECT is a 3D tomographic technique that uses gamma camera data from many
projections, which can be reconstructed in different planes. A dual-detector-head
gamma camera combined with a CT scanner, which provides localization of
functional SPECT data, is termed a SPECT/CT camera, and has shown utility in
advancing the field of molecular imaging. In most other medical imaging modalities,
energy is passed through the body and the reaction or result is read by detectors. In
SPECT imaging, the patient is injected with a radioisotope, most commonly
thallium-201 (201Tl), technetium-99m (99mTc), iodine-123 (123I) or gallium-67 (67Ga).

The radioactive gamma rays are emitted through the body as the natural decay of
these isotopes takes place. The emissions of the gamma rays are captured by detectors
that surround the body. This essentially means that the patient, rather than the imaging
device (as in X-ray, CT or ultrasound), is the source of the radiation.

 Positron Emission Tomography (PET) uses coincidence detection to image
functional processes. A short-lived positron-emitting isotope, such as 18F, is
incorporated into an organic substance such as glucose, creating 18F-
fluorodeoxyglucose, which can be used as a marker of metabolic utilization. Images
of activity distribution throughout the body can show rapidly growing tissue, such as
tumors, metastases or infections. PET images can be viewed in comparison with
computed tomography scans to determine an anatomic correlate. Modern scanners
combine PET with CT, or even MRI, to optimize the image reconstruction
involved with positron imaging; this is performed on the same equipment without
physically moving the patient off the gantry. The resultant hybrid of functional
and anatomic imaging information is a useful tool in non-invasive diagnosis and
patient management.

 Photoacoustic Imaging: Photoacoustic imaging is a recently developed hybrid biomedical
imaging modality based on the photoacoustic effect. It combines the advantages of optical
absorption contrast with ultrasonic spatial resolution for deep imaging in the (optical)
diffusive or quasi-diffusive regime. Recent studies have shown that photoacoustic imaging
can be used in vivo for tumor angiogenesis monitoring, blood oxygenation mapping,
functional brain imaging, skin melanoma detection, and more.

 Breast Thermography: Digital infrared thermographic imaging is based on the principle
that metabolic activity and vascular circulation, in both pre-cancerous tissue and the area
surrounding a developing breast cancer, are almost always higher than in normal breast
tissue. Cancerous tumors require an ever-increasing supply of nutrients and therefore
increase circulation to their cells by holding open existing blood vessels, opening dormant
vessels, and creating new ones (neo-angiogenesis). This process frequently results in an
increase in the regional surface temperature of the breast.

 Tomography: Tomography is the method of imaging a single plane, or slice, of an object,
resulting in a tomogram. There are several forms of tomography:

 Linear tomography: This is the most basic form of tomography. The X-ray tube
moves from point "A" to point "B" above the patient, while the cassette holder (or
"bucky") moves simultaneously under the patient from point "B" to point "A". The
fulcrum, or pivot point, is set to the area of interest. In this manner, the points above
and below the focal plane are blurred out, just as the background is blurred when
panning a camera during exposure. This technique is no longer used, having been
replaced by computed tomography.

 Poly tomography: This was a complex form of tomography. With this technique, a
number of geometrical movements were programmed, such as hypocycloidal, circular,
figure-8 and elliptical. Philips Medical Systems produced one such device, called the
'Polytome'. This unit was still in use into the 1990s, as small or complex structures, such as
the inner ear, were still difficult to image with CT scanners at that time. As CT resolution
improved, this procedure was taken over by CT.

 Zonography: This is a variant of linear tomography, where a limited arc of movement
is used. It is still used in some centres for visualising the kidney during an intravenous
urogram (IVU).

 Orthopantomography (OPT or OPG): The only common tomographic examination
still in use. This makes use of a complex movement to allow the radiographic examination
of the mandible, as if it were a flat bone. It is often referred to as a "Panorex", but this is
incorrect, as that is a trademark of a specific company.

 Computed Tomography (CT), or Computed Axial Tomography (CAT): A CT scan
(also known as a CAT scan) uses helical tomography (in the latest generation) and
traditionally produces a 2D image of the structures in a thin section of the body. It uses
X-rays. It carries a greater ionizing radiation dose than projection radiography, so
repeated scans must be limited to avoid health effects. CT is based on the same
principles as X-ray projection, but in this case the patient is enclosed in a surrounding
ring of 500-1000 scintillation detectors (fourth-generation X-ray CT scanner
geometry). In older-generation scanners, the X-ray beam was swept by a translating
source and detector pair.
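The idea of building a slice from projections can be illustrated with a toy example: each detector reading is the sum of attenuation values along one ray through the slice. The 3×3 attenuation grid below is hypothetical, chosen only to show two orthogonal views:

```python
# Toy illustration of CT projections: each detector reading is the sum of
# attenuation values along one ray. Two orthogonal views of a 3x3 "slice".

slice_ = [
    [0, 1, 0],
    [1, 5, 1],   # a dense structure (e.g. bone) in the centre
    [0, 1, 0],
]

# 0-degree view: rays travel down the columns.
proj_0 = [sum(row[j] for row in slice_) for j in range(3)]
# 90-degree view: rays travel along the rows.
proj_90 = [sum(row) for row in slice_]

print(proj_0)   # [1, 7, 1]
print(proj_90)  # [1, 7, 1]
```

Real scanners collect hundreds of such views at many angles and invert them mathematically (e.g. by filtered back-projection) to recover the slice.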

 Ultrasound: Medical ultrasonography uses high-frequency broadband sound waves in the
megahertz range that are reflected by tissue to varying degrees to produce (up to 3D)
images. This is commonly associated with imaging the fetus in pregnant women, but the
uses of ultrasound are much broader. Other important uses include imaging the
abdominal organs, heart, breast, muscles, tendons, arteries and veins. While it may provide
less anatomical detail than techniques such as CT or MRI, it has several advantages which
make it ideal in numerous situations, in particular that it studies the function of moving
structures in real time, emits no ionizing radiation, and contains speckle that can be used in
elastography. Ultrasound is also a popular research tool for capturing raw data, which
can be made available through an ultrasound research interface, for the purpose of tissue
characterization and the implementation of new image processing techniques. Ultrasound
differs from other medical imaging modalities in that it operates by the transmission and
receipt of sound waves: high-frequency sound waves are sent into the tissue and, depending
on the composition of the different tissues, the signal is attenuated and returned at separate
intervals. The path of reflected sound waves in a multilayered structure can be described by
the acoustic impedances and the reflection and transmission coefficients of the relative
structures. Ultrasound is very safe to use and does not appear to cause any adverse effects,
although information on this is not well documented. It is also relatively inexpensive and
quick to perform. Ultrasound scanners can be taken to critically ill patients in intensive care
units, avoiding the danger caused by moving the patient to the radiology department. The
real-time moving image obtained can be used to guide drainage and biopsy procedures, and
Doppler capabilities on modern scanners allow the blood flow in arteries and veins to be
assessed.
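The reflection behaviour mentioned above follows from the acoustic impedances of adjacent tissues: the intensity reflection coefficient at a boundary is R = ((Z2 − Z1)/(Z2 + Z1))². A minimal sketch, using approximate textbook impedance values that are an assumption, not taken from this chapter:

```python
# Intensity reflection coefficient at a boundary between two media:
#   R = ((Z2 - Z1) / (Z2 + Z1)) ** 2, with T = 1 - R transmitted.
# Impedances below (in MRayl) are approximate textbook values.

Z_SOFT_TISSUE = 1.63
Z_BONE = 7.8
Z_AIR = 0.0004

def reflection_coefficient(z1, z2):
    """Fraction of incident ultrasound intensity reflected at a z1/z2 boundary."""
    return ((z2 - z1) / (z2 + z1)) ** 2

# Tissue/bone reflects a large fraction of the beam; tissue/air reflects nearly
# all of it, which is why coupling gel is used between the probe and the skin.
print(f"tissue/bone: {reflection_coefficient(Z_SOFT_TISSUE, Z_BONE):.2f}")
print(f"tissue/air:  {reflection_coefficient(Z_SOFT_TISSUE, Z_AIR):.4f}")
```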

B. Medical imaging topics

 Maximizing imaging procedure use: The amount of data obtained in a single MR or CT
scan is very extensive. Some of the data that radiologists discard could save patients time
and money, while reducing their exposure to radiation and risk of complications from
invasive procedures.

 Creation of three-dimensional images: Recently, techniques have been developed to enable
CT, MRI and ultrasound scanning software to produce 3D images for the physician.
Traditionally, CT and MRI scans produced 2D static output on film. To produce 3D images,
many scans are made and then combined by computers into a 3D model, which the physician
can manipulate. 3D ultrasounds are produced using a somewhat similar technique. In
diagnosing disease of the abdominal viscera, ultrasound is particularly sensitive for imaging
the biliary tract, the urinary tract and the female reproductive organs (ovaries, fallopian
tubes); an example is the diagnosis of gallstones by dilatation of the common bile duct and
stones within it. With the ability to visualize important structures in great detail, 3D
visualization methods are a valuable resource for the diagnosis and surgical treatment of
many pathologies. It was a key resource for the famous, but ultimately unsuccessful, attempt
by Singaporean surgeons to separate the Iranian twins Ladan and Laleh Bijani in 2003. The
3D equipment had been used previously for similar operations with great success.

Other proposed or developed techniques include:


 Diffuse optical tomography
 Elastography
 Electrical impedance tomography
 Optoacoustic imaging
 Ophthalmology
o A-scan
o B-scan
o Corneal topography
o Optical coherence tomography
o Scanning laser ophthalmoscopy

Some of these techniques are still at a research stage and not yet used in clinical routines.

 Compression of medical images: Medical imaging techniques produce very large amounts of
data, especially from CT, MRI and PET modalities. As a result, storage and communication of
electronic image data are prohibitive without the use of compression. JPEG 2000 is the state-of-
the-art image compression method in the DICOM standard for the storage and transmission of
medical images. The cost and feasibility of accessing large image data sets over low or varying
bandwidths are further addressed by another DICOM standard, called JPIP, which enables
efficient streaming of the JPEG 2000 compressed image data.
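A back-of-the-envelope calculation shows why compression is unavoidable; the image dimensions and the 10:1 compression ratio below are illustrative assumptions, not figures from this chapter:

```python
# Back-of-the-envelope: raw size of a CT study (illustrative dimensions).
rows, cols = 512, 512          # pixels per slice
bytes_per_pixel = 2            # 16-bit samples
slices = 600                   # a typical multi-slice study

raw_bytes = rows * cols * bytes_per_pixel * slices
raw_mb = raw_bytes / (1024 ** 2)
print(f"raw study size: {raw_mb:.0f} MB")     # 300 MB

# A hypothetical 10:1 JPEG 2000 compression ratio brings it down to ~30 MB.
compressed_mb = raw_mb / 10
print(f"compressed:     {compressed_mb:.0f} MB")
```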

 Non-diagnostic imaging: Neuroimaging has also been used in experimental circumstances to
allow people (especially disabled persons) to control outside devices, acting as a brain-computer
interface.

 Archiving and recording: Used primarily in ultrasound imaging, capturing the image
produced by a medical imaging device is required for archiving and telemedicine applications.
In most scenarios, a frame grabber is used in order to capture the video signal from the medical
device and relay it to a computer for further processing and operations.

 Medical Imaging in the Cloud: There has been a growing trend to migrate from PACS to
cloud-based RIS. A recent article in Applied Radiology said, "As the digital-imaging realm is
embraced across the healthcare enterprise, the swift transition from terabytes to petabytes of
data has put radiology on the brink of information overload. Cloud computing offers the
imaging department of the future the tools to manage data much more intelligently."

 Use in pharmaceutical clinical trials: Medical imaging has become a major tool in clinical
trials since it enables rapid diagnosis with visualization and quantitative assessment.

A typical clinical trial goes through multiple phases and can take up to eight years. Clinical
endpoints or outcomes are used to determine whether the therapy is safe and effective. Once a
patient reaches an endpoint, he or she is generally excluded from further experimental interaction.
Trials that rely solely on clinical endpoints are very costly, as they have long durations and tend to
need large numbers of patients.

In contrast to clinical endpoints, surrogate endpoints have been shown to cut down the time
required to confirm whether a drug has clinical benefits. Imaging biomarkers (characteristics that
are objectively measured by an imaging technique and used as indicators of pharmacological
response to a therapy) and surrogate endpoints have been shown to facilitate the use of small group
sizes, obtaining quick results with good statistical power.
Imaging is able to reveal subtle changes indicative of the progression of therapy that may be
missed by more subjective, traditional approaches. Statistical bias is reduced, as the findings are
evaluated without any direct patient contact.

For example, measurement of tumour shrinkage is a commonly used surrogate endpoint in solid
tumour response evaluation. This allows for faster and more objective assessment of the effects of
anticancer drugs. In evaluating the extent of Alzheimer's disease, it is still prevalent to use
behavioural and cognitive tests; MRI scans of the entire brain can accurately pinpoint the
hippocampal atrophy rate, while PET scans are able to measure the brain's metabolic activity
through regional glucose metabolism.

An imaging-based trial will usually be made up of three components:

1. A realistic imaging protocol. The protocol is an outline that standardizes (as far as
practically possible) the way in which the images are acquired using the various modalities
(PET, SPECT, CT, MRI). It covers the specifics of how images are to be stored, processed
and evaluated.

2. An imaging centre that is responsible for collecting the images, performing quality control,
and providing tools for data storage, distribution and analysis. It is important that images
acquired at different time points be displayed in a standardised format to maintain the
reliability of the evaluation. Certain specialised imaging contract research organizations
provide end-to-end medical imaging services, from protocol design and site management
through to data quality assurance and image analysis.

3. Clinical sites that recruit patients to generate the images to send back to the imaging centre.

1.1.2 CAD System

In radiology, Computer-Aided Detection (CAD), also called Computer-Aided Diagnosis (CAD), refers
to procedures in medicine that assist doctors in the interpretation of medical images. Imaging techniques
in X-ray, MRI and ultrasound diagnostics yield a great deal of information, which the radiologist has
to analyze and evaluate comprehensively in a short time. CAD systems help scan digital images, e.g.
from computed tomography, for typical appearances and highlight conspicuous sections, such as
possible diseases.

A Computer-Aided Diagnosis (CAD) system can be used for the automatic detection of brain tumors in
MRI. Such a system can provide a valuable second opinion and improve the accuracy of brain tumor
detection. It consists of two stages: the first covers preprocessing and enhancement; in the second,
feature extraction, feature selection, classification and performance analysis are compared and studied.
Preprocessing and enhancement techniques are used to improve the detection of suspicious regions in
MRI. The enhancement method consists of three processing steps: first, the MRI image is acquired;
second, film artifacts such as labels and marks on the MRI image are removed; and finally, the high-
frequency components are removed. Segmentation describes the separation of the suspicious region
from the background MRI image.
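The final enhancement step above suppresses high-frequency components; one simple way to do that (the chapter does not name a specific filter, so this is an illustrative choice) is a 3×3 mean filter:

```python
# One simple way to suppress high-frequency noise (the source does not name a
# specific filter): a 3x3 mean filter over the interior of a greyscale image.

def mean_filter_3x3(image):
    """Return a copy of `image` (list of lists) with interior pixels replaced
    by the average of their 3x3 neighbourhood; borders are left unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbourhood = [image[y + dy][x + dx]
                             for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(neighbourhood) / 9
    return out

# A single bright "speck" in a flat region is strongly attenuated.
img = [[10] * 5 for _ in range(5)]
img[2][2] = 100
smoothed = mean_filter_3x3(img)
print(smoothed[2][2])   # 20.0  (= (8*10 + 100) / 9)
```

In practice, more selective filters (median, Gaussian) are typically preferred, since a plain mean filter also blurs genuine edges.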

A. Computer-Aided Diagnosis Topics

 Methodology: CAD is fundamentally based on highly complex pattern recognition. X-ray
images are scanned for suspicious structures. Normally, a few thousand images are required
to optimize the algorithm. Digital image data are copied to a CAD server in DICOM
format and are prepared and analyzed in several steps.
 Preprocessing for
o Reduction of artifacts (defects in the images)
o Image noise reduction
o Leveling (harmonization) of image quality to correct for the images' different basic
conditions, e.g. different exposure parameters.

 Segmentation for
o Differentiation of different structures in the image, e.g. heart, lung, ribcage,
possible round lesions
o Matching with anatomic databank

 Structure/ROI (Region of Interest) analysis: Every detected region is analyzed
individually for special characteristics:
o Compactness
o Form, size and location
o Reference to close-by structures/ROIs
o Average grey-level value within the ROI
o Proportion of grey levels relative to the border of the structure inside the ROI

 Evaluation/classification: After the structure is analyzed, every ROI is evaluated
individually (scoring) for the probability of a true positive (TP). Typical procedures are:
o Nearest-neighbour rule
o Minimum distance classifier
o Cascade classifier
o Bayesian classifier
o Multilayer perceptron
o Radial basis function (RBF) network
o Support vector machine (SVM)

If the detected structures reach a certain threshold level, they are highlighted in the
image for the radiologist. Depending on the CAD system, these markings can be saved
permanently or temporarily. The latter has the advantage that only the markings which are
approved by the radiologist are saved; false hits should not be saved, because an
examination at a later date then becomes more difficult.
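As a concrete illustration of one of the simpler procedures listed above, a minimum-distance classifier assigns each ROI to the class whose prototype feature vector is nearest. The two features (compactness, mean grey level), the class prototypes and the test ROIs below are all hypothetical, chosen only to show the mechanics:

```python
# Minimal sketch of a minimum-distance classifier over hypothetical ROI
# feature vectors (compactness, mean grey level). The prototypes and feature
# values are illustrative, not clinical.
import math

prototypes = {
    "lesion":  (0.8, 190.0),   # assumed class mean in feature space
    "healthy": (0.3, 110.0),
}

def classify(roi_features):
    """Assign the ROI to the class whose prototype is nearest (Euclidean)."""
    return min(prototypes,
               key=lambda c: math.dist(prototypes[c], roi_features))

print(classify((0.75, 180.0)))  # lesion
print(classify((0.35, 120.0)))  # healthy
```

The more elaborate classifiers in the list (SVM, multilayer perceptron, RBF network) replace the fixed prototypes with decision boundaries learned from the training images.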

 Sensitivity and specificity: CAD systems seek to highlight suspicious structures. Today's CAD
systems cannot detect 100% of pathological changes. The hit rate (sensitivity) can be up to 90%,
depending on the system and application. A correct hit is termed a true positive (TP), while the
incorrect marking of healthy sections constitutes a false positive (FP). The fewer FPs indicated,
the higher the specificity. A low specificity reduces the acceptance of the CAD system
because the user has to check all of these wrong hits. The FP rate in lung overview
examinations (CAD Chest) has been reduced to 2 per examination; in other applications (e.g. CT
lung examinations) the FP rate may be 25 or more. In CAST (computer-aided simple triage)
systems the FP rate must be extremely low (less than 1 per examination) to allow meaningful
study triage.

 Absolute detection rate: The absolute detection rate of the radiologist is an alternative metric
to sensitivity and specificity. Overall, the results of clinical trials regarding sensitivity, specificity
and the absolute detection rate can vary markedly. Each study result depends on its basic conditions
and has to be evaluated on those terms. The following factors have a strong influence:

 Retrospective or prospective design
 Quality of the images used
 Condition of the X-ray examination
 The radiologist's experience and education
 Type of lesion
 Size of the considered lesion

B. Applications

CAD is used in the diagnosis of brain tumor, breast cancer, lung cancer, colon cancer, prostate
cancer, bone metastases, coronary artery disease and congenital heart defect.

1.2 Medical Image Processing on Brain


The brain is one of the most complex and sophisticated organs in the human body. It is the centre of the
nervous system and is responsible for motor actions, memory and intelligence in humans. During the
past few decades, with the increasing availability of relatively inexpensive computational resources,
computed tomography (CT), magnetic resonance imaging (MRI), Doppler ultrasound, and various
imaging techniques based on nuclear emission (PET (positron emission tomography), SPECT (single
photon emission computed tomography), etc.) have all been valuable additions to the radiologist's
arsenal of imaging tools towards ever more reliable detection and diagnosis of disease. The study of
structure, function and disease in the human brain is carried out by analyzing images obtained in
different modalities, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT).
Here, we mainly focus on a conceptual overview of basic MR imaging.

1.2.1 Basic MR Imaging


Magnetic resonance imaging (MRI), nuclear magnetic resonance imaging (NMRI), or magnetic
resonance tomography (MRT) is a medical imaging technique used in radiology to visualize detailed
internal structures. MRI makes use of the property of nuclear magnetic resonance (NMR) to image
nuclei of atoms inside the body.

MRI provides good contrast between the different soft tissues of the body, which makes it especially
useful in imaging the brain, muscles, the heart, and cancers compared with other medical imaging
techniques such as computed tomography (CT) or X-rays. Unlike CT scans or traditional X-rays, MRI
does not use ionizing radiation.

A. How MRI works:


The body is largely composed of water molecules. Each water molecule has two hydrogen nuclei or
protons. When a person is inside the powerful magnetic field of the scanner, the average magnetic
moment of many protons becomes aligned with the direction of the field. A radio frequency
transmitter is briefly turned on, producing a varying electromagnetic field. This electromagnetic field
has just the right frequency, known as the resonance frequency, to be absorbed and flip the spin of
the protons in the magnetic field. After the electromagnetic field is turned off, the spins of the
protons return to thermodynamic equilibrium and the bulk magnetization becomes re-aligned with
the static magnetic field. During this relaxation, a radio frequency signal is generated, which can be
measured with receive coils.

Information about the origin of the signal in 3D space can be learned by applying additional
magnetic fields during the scan. These fields, generated by passing electric currents through gradient
coils, make the magnetic field strength vary depending on the position within the magnet. Because
this makes the frequency of the released radio signal also depend on its origin in a predictable
manner, the distribution of protons in the body can be mathematically recovered from the signal,
typically by use of the inverse Fourier transform.
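This Fourier relationship can be illustrated with a toy reconstruction: simulate a proton-density image, take its 2D Fourier transform as the acquired k-space signal, and recover the image with the inverse transform. The following is a minimal numpy sketch, not a full MRI reconstruction pipeline:

```python
import numpy as np

# Toy "proton density" image: a bright square phantom on a dark background.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

# The scanner effectively samples the spatial-frequency content of the
# magnetization (k-space); simulate that with a forward 2D FFT.
kspace = np.fft.fft2(image)

# Reconstruction: the inverse 2D Fourier transform of the measured signal
# recovers the spatial distribution of protons.
reconstructed = np.fft.ifft2(kspace).real

print(np.allclose(reconstructed, image))  # True
```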

Protons in different tissues return to their equilibrium state at different relaxation rates. Different
tissue variables, including spin density, T1 and T2 relaxation times and flow and spectral shifts can be
used to construct images. By changing the settings on the scanner, this effect is used to create
contrast between different types of body tissue or between other properties, as in fMRI and diffusion
MRI.

Contrast agents may be injected intravenously to enhance the appearance of blood vessels, tumors or
inflammation. Contrast agents may also be directly injected into a joint in the case of arthrograms,
MRI images of joints. Unlike CT, MRI uses no ionizing radiation and is generally a very safe
procedure. Nonetheless the strong magnetic fields and radio pulses can affect metal implants,
including cochlear implants and cardiac pacemakers. In the case of cochlear implants, the US FDA
has approved some implants for MRI compatibility. In the case of cardiac pacemakers, the results
can sometimes be lethal,[3] so patients with such implants are generally not eligible for MRI.

Since the gradient coils are within the bore of the scanner, there are large forces between them and
the main field coils, producing most of the noise that is heard during operation. Without efforts to
damp this noise, it can approach 130 decibels (dB) with strong fields. MRI is used to image every
part of the body, and is particularly useful for tissues with many hydrogen nuclei and little density
contrast, such as the brain, muscle, connective tissue and most tumors.

B. Basic MRI scans

Signal in MR images is high or low (bright or dark), depending on the pulse sequence used,
and the type of tissue in the image region of interest. The following is a general guide to
how tissue appears on T1- or T2- weighted images.
 T1-weighted MRI: T1-weighted scans are a standard basic scan, in particular differentiating fat
from water, with water darker and fat brighter.[24] They use a gradient echo (GRE) sequence with
short TE and short TR. This is one of the basic types of MR contrast and is a commonly run
clinical scan. The T1 weighting can be increased (improving contrast) with the use of an
inversion pulse as in an MP-RAGE sequence. Due to the short repetition time (TR) this scan can
be run very fast allowing the collection of high resolution 3D datasets. A T1 reducing
gadolinium contrast agent is also commonly used, with a T1 scan being collected before and
after administration of contrast agent to compare the difference. In the brain T1-weighted scans
provide good gray matter/white matter contrast; in other words, T1-weighted images highlight
fat deposition.
 Dark on T1-weighted image:
o Increased water, as in edema, tumor, infarction, inflammation, infection, hemorrhage
(hyperacute or chronic)
o Low proton density, calcification
o Flow void

 Bright on T1-weighted image:
o Fat
o Subacute hemorrhage
o Melanin
o Protein-rich fluid
o Slowly flowing blood
o Paramagnetic substances: gadolinium, manganese, copper
o Calcification (rarely)
o Laminar necrosis of cerebral infarction

 T2-weighted MRI: T2-weighted scans are another basic type. Like the T1-weighted scan, fat is
differentiated from water - but in this case fat shows darker, and water lighter. For example, in
the case of cerebral and spinal studies, the CSF (cerebrospinal fluid) will be lighter in T2-weighted
images. These scans are therefore particularly well suited to imaging edema, with
long TE and long TR. Because the spin echo sequence is less susceptible to inhomogeneities in
the magnetic field, these images have long been a clinical workhorse.

 Bright on T2-weighted image:


o Increased water, as in edema, tumor, infarction, inflammation, infection, subdural
collection
o Methemoglobin (extracellular) in subacute hemorrhage

 Dark on T2-weighted image:


o Low proton density, calcification, fibrous tissue
o Paramagnetic substances: deoxyhemoglobin, methemoglobin (intracellular), iron,
ferritin, hemosiderin, melanin.
o Protein-rich fluid
o Flow void

 T2*-weighted MRI: T2* weighted scans use a gradient echo (GRE) sequence, with long TE and
long TR. The gradient echo sequence used does not have the extra refocusing pulse used in spin
echo, so it is subject to additional losses above the normal T2 decay (referred to as T2′); taken
together these are called T2*. This also makes it more prone to susceptibility losses at air/tissue
boundaries, but can increase contrast for certain types of tissue, such as venous blood.

 Spin density weighted MRI: Spin density, also called proton density, weighted scans try to
have no contrast from either T2 or T1 decay, the only signal change coming from differences in
the amount of available spins (hydrogen nuclei in water). It uses a spin echo or sometimes a
gradient echo sequence, with short TE and long TR.
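The contrast behaviour of the basic scans above follows from the idealized spin-echo signal equation S = PD·(1 − e^(−TR/T1))·e^(−TE/T2). The following short Python sketch uses rough, illustrative tissue parameters (approximate 1.5 T values, not measured data) to show how the TE/TR choices flip the contrast between white matter and CSF:

```python
import math

def spin_echo_signal(pd, t1, t2, tr, te):
    """Idealized spin-echo signal S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).
    Times in milliseconds; a textbook model, not scanner-exact."""
    return pd * (1 - math.exp(-tr / t1)) * math.exp(-te / t2)

# Rough tissue parameters at 1.5 T (illustrative values only).
white_matter = dict(pd=0.7, t1=600.0, t2=80.0)
csf = dict(pd=1.0, t1=4000.0, t2=2000.0)

# T1-weighted (short TR, short TE): white matter brighter than CSF.
print(spin_echo_signal(**white_matter, tr=500, te=15) >
      spin_echo_signal(**csf, tr=500, te=15))             # True

# T2-weighted (long TR, long TE): CSF brighter than white matter.
print(spin_echo_signal(**csf, tr=4000, te=100) >
      spin_echo_signal(**white_matter, tr=4000, te=100))  # True
```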

Specialized MRI scans

 Diffusion MRI : Diffusion MRI measures the diffusion of water molecules in biological
tissues. In an isotropic medium (inside a glass of water for example), water molecules naturally
move randomly according to turbulence and Brownian motion. In biological tissues however,
where the Reynolds number is low enough for flows to be laminar, the diffusion may be
anisotropic. For example, a molecule inside the axon of a neuron has a low probability of
crossing the myelin membrane. Therefore the molecule moves principally along the axis of the
neural fiber. If it is known that molecules in a particular voxel diffuse principally in one
direction, the assumption can be made that the majority of the fibers in this area are going
parallel to that direction.

The recent development of diffusion tensor imaging (DTI) enables diffusion to be measured in
multiple directions and the fractional anisotropy in each direction to be calculated for each
voxel. This enables researchers to make brain maps of fiber directions to examine the
connectivity of different regions in the brain (using tractography) or to examine areas of neural
degeneration and demyelination in diseases like Multiple Sclerosis.
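The fractional anisotropy mentioned above is computed from the eigenvalues of the diffusion tensor in each voxel. A minimal sketch using the standard FA formula follows; the tensor values are illustrative, not patient data:

```python
import numpy as np

def fractional_anisotropy(tensor):
    """FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||, computed
    from the eigenvalues of a 3x3 diffusion tensor (standard DTI formula)."""
    lam = np.linalg.eigvalsh(tensor)
    return np.sqrt(1.5) * np.linalg.norm(lam - lam.mean()) / np.linalg.norm(lam)

# Isotropic diffusion (e.g. CSF): all eigenvalues equal, FA near 0.
print(round(fractional_anisotropy(np.eye(3) * 3.0e-3), 2))                  # 0.0

# Strongly directional diffusion (e.g. along a fiber bundle): FA near 1.
print(round(fractional_anisotropy(np.diag([1.7e-3, 0.2e-3, 0.2e-3])), 2))  # 0.87
```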

Another application of diffusion MRI is diffusion-weighted imaging (DWI). Following an
ischemic stroke, DWI is highly sensitive to the changes occurring in the lesion.[26] It is
speculated that increases in restriction (barriers) to water diffusion, as a result of cytotoxic
edema (cellular swelling), are responsible for the increase in signal on a DWI scan. The DWI
enhancement appears within 5–10 minutes of the onset of stroke symptoms (as compared with
computed tomography, which often does not detect changes of acute infarct for up to 4–6 hours)
and remains for up to two weeks. Coupled with imaging of cerebral perfusion, researchers can
highlight regions of "perfusion/diffusion mismatch" that may indicate regions capable of
salvage by reperfusion therapy.

Like many other specialized applications, this technique is usually coupled with a fast image
acquisition sequence, such as echo planar imaging sequence.

 Magnetization transfer MRI: Magnetization transfer (MT) refers to the transfer of longitudinal
magnetization from free water protons to hydration water protons in NMR and MRI.
In magnetic resonance imaging of molecular solutions, such as protein solutions, two types of
water molecules, free (bulk) and hydration (bound), are found. Free water protons have a faster
average rotational frequency and hence cause less local field
inhomogeneity. Because of this uniformity, most free water protons have a resonance frequency
lying narrowly around the normal proton resonance frequency of 63 MHz (at 1.5 teslas). This
also results in slower transverse magnetization dephasing and hence longer T2. Conversely,
hydration water molecules are slowed down by interaction with solute molecules and hence
create field inhomogeneities that lead to wider resonance frequency spectrum.

In free liquids, protons, which may be viewed classically as small magnetic dipoles, exhibit
translational and rotational motions. These moving dipoles disturb the surrounding magnetic
field however on long enough time-scales (which may be nanoseconds) the average field caused
by the motion of protons is zero. This is known as “motional averaging” or narrowing and is
characteristic of protons moving freely in liquid. On the other hand, protons bound to
macromolecules, such as proteins, tend to have a fixed orientation and so the average magnetic
field in close proximity to such structures does not average to zero. The result is a spatial
pattern in the magnetic field that gives rise to a residual dipolar coupling (range of precession
frequencies) for the protons experiencing the magnetic field. The wide frequency distribution
appears as a broad spectrum that may be several kHz wide. The net signal from these protons
disappears very quickly, in inverse proportion to the width, due to the loss of coherence of the
spins, i.e. T2 relaxation. Due to exchange mechanisms, such as spin transfer or proton chemical
exchange, the (incoherent) spins bound to the macromolecules continually switch places with
(coherent) spins in the bulk media and establish a dynamic equilibrium.

Magnetization transfer: Although there is no measurable signal from the bound spins, or the
bound spins that exchange into the bulk media, their longitudinal magnetization is preserved
and may recover only via the relatively slow process of T1 relaxation. If the longitudinal
magnetization of just the bound spins can be altered, then the effect can be measured in the
spins of the bulk media due to the exchange processes. The magnetization transfer sequence
applies RF saturation at a frequency that is far off resonance for the narrow line of bulk water
but still on resonance for the bound protons with a spectral linewidth of kHz. This causes
saturation of the bound spins which exchange into the bulk water, resulting in a loss of
longitudinal magnetization and hence signal decrease in the bulk water. This provides an
indirect measure of macromolecular content in tissue. Implementation of magnetization transfer
involves choosing suitable frequency offsets and pulse shapes to saturate the bound spins
sufficiently strongly, within the safety limits of specific absorption rate for RF irradiation.

 T1rho MRI: T1ρ (T1rho): Molecules have a kinetic energy that is a function of the temperature
and is expressed as translational and rotational motions, and by collisions between molecules.
The moving dipoles disturb the magnetic field but are often extremely rapid so that the average
effect over a long time-scale may be zero. However, depending on the time-scale, the
interactions between the dipoles do not always average away. At the slowest extreme the
interaction time is effectively infinite and occurs where there are large, stationary field
disturbances (e.g. a metallic implant). In this case the loss of coherence is described as a "static
dephasing". T2* is a measure of the loss of coherence in an ensemble of spins that include all
interactions (including static dephasing). T2 is a measure of the loss of coherence that excludes
static dephasing, using an RF pulse to reverse the slowest types of dipolar interaction. There is
in fact a continuum of interaction time-scales in a given biological sample and the properties of
the refocusing RF pulse can be tuned to refocus more than just static dephasing. In general, the
rate of decay of an ensemble of spins is a function of the interaction times and also the power of
the RF pulse. This type of decay, occurring under the influence of RF, is known as T1ρ. It is
similar to T2 decay but with some slower dipolar interactions refocused as well as the static
interactions, hence T1ρ≥T2.

C. Risks

There are no known harmful effects from the strong magnetic field used for MRI. But the magnet is
very powerful. The magnet may affect pacemakers, artificial limbs, and other medical devices that
contain iron. The magnet will stop a watch that is close to the magnet. Any loose metal object has
the risk of causing damage or injury if it gets pulled toward the strong magnet. Metal parts in the
eyes can damage the retina. If you may have metal fragments in the eye, an X-ray of the eyes may
be done before the MRI. If metal is found, the MRI will not be done. Iron pigments in tattoos or
tattooed eyeliner can cause skin or eye irritation. An MRI can cause a burn with some medication
patches. Be sure to tell your health professional if you are wearing a patch. There is a slight risk of
an allergic reaction if contrast material is used during the MRI. But most reactions are mild and can
be treated using medicine. There also is a slight risk of an infection at the IV site.

1.2.2 Brain Tumor Classification

A. Anatomy of Brain

The brain is incredibly complex. The central nervous system (CNS) consists of the brain and the
spinal cord, immersed in the cerebrospinal fluid (CSF). Here we will show you the major parts,
where they are located, and some of what they are responsible for. Weighing about 3 pounds (1.4
kilograms), the brain consists of three main structures: the cerebrum, the cerebellum and the
brainstem.

Figure 1.2 Anatomy of Brain

 Cerebrum - The largest portion of the brain, responsible for higher functions such as
thought, action and sensory processing. It is divided into two hemispheres (left and
right), each consisting of four lobes-

 Frontal - Front part of the brain; involved in planning, organizing, problem


solving, selective attention, personality and a variety of "higher cognitive
functions" including behaviour and emotions.

 Parietal - One of the two parietal lobes of the brain located behind the frontal
lobe at the top of the brain. The parietal lobes contain the primary sensory
cortex which controls sensation (touch, pressure). Behind the primary sensory
cortex is a large association area that controls fine sensation (judgment of
texture, weight, size, shape).

 Occipital - Region in the back of the brain which processes visual information.
Not only is the occipital lobe mainly responsible for visual reception, it also
contains association areas that help in the visual recognition of shapes and
colors. Damage to this lobe can cause visual deficits.

 Temporal - There are two temporal lobes, one on each side of the brain located
at about the level of the ears. These lobes allow a person to tell one smell from
another and one sound from another. They also help in sorting new information
and are believed to be responsible for short-term memory.

The outer layer of the brain is known as the cerebral cortex or the ‘grey matter’. It
covers the nuclei deep within the cerebral hemispheres (e.g. the basal ganglia), the
structure called the thalamus, and the ‘white matter’, which consists mostly of
myelinated axons.

 Grey matter – closely packed neuron cell bodies form the grey matter of the brain.
The grey matter contains specialised regions of the brain involved in muscle control,
sensory perceptions, such as seeing and hearing, memory, emotions and speech.

 White matter – neuronal tissue containing mainly long, myelinated axons is known
as white matter. Deep within it, between the brainstem and the cerebrum, lies the
diencephalon, which contains structures at the core of the brain such as the thalamus and
hypothalamus. These deep nuclei are involved in the relay of sensory
information from the rest of the body to the cerebral cortex, as well as in the
regulation of autonomic (unconscious) functions such as body temperature, heart rate
and blood pressure. Certain of these nuclei are involved in the
expression of emotions, the release of hormones from the pituitary gland, and in the
regulation of food and water intake. These nuclei are generally considered part of
the limbic system.

Figure 1.3 Axial view of a Brain with Gray matter and White matter

Cerebellum – responsible for psychomotor function, the cerebellum co-ordinates sensory input
from the inner ear and the muscles to provide accurate control of position and movement.

Brainstem – found at the base of the brain, it forms the link between the cerebral cortex, white
matter and the spinal cord. The brainstem contributes to the control of breathing, sleep and
circulation.

B. Brain Tumor

A brain tumor is an intracranial mass produced by an uncontrolled growth of cells either normally
found in the brain such as neurons, lymphatic tissue, glial cells, blood vessels, pituitary and
pineal gland, skull, or spread from cancers primarily located in other organs. Brain tumors are
classified based on the type of tissue involved, the location of the tumor, whether it is benign or
malignant, and other considerations. Primary (true) brain tumors are the tumors that originated in
the brain and are named for the cell types from which they originated. They can be benign
(non-cancerous), meaning that they do not spread elsewhere or invade surrounding tissues. They can
also be malignant and invasive (spreading to neighboring areas). Secondary or metastatic brain
tumors take their origin from tumor cells which spread to the brain from another location in the
body. Most often, cancers that spread to the brain to cause secondary brain tumors originate in the
lung, breast, kidney or from melanomas in the skin.

Figure 1.4 Brain Tumor

The brain is made up of many different types of cells. Brain cancers occur when one type of cell
transforms from its normal characteristics and grows and multiplies in an abnormal way.

 Primary tumor: Brain tumors that result from this transformation and abnormal growth of
brain cells are called primary brain tumors because they originate in the brain. Usually they are
named after the part of the brain or the type of brain cell from which they arise. Many of them
are benign and can be successfully removed. Malignant primary brain tumors cause problems
by spreading into the normal brain tissue thereby increasing the pressure and causing damage to
the surrounding areas of the brain. These tumors rarely spread outside the brain to other parts of
the body. The most common primary brain tumors are gliomas. They begin in glial cells. There
are many types of gliomas:
 Astrocytoma: The tumor arises from star-shaped glial cells called astrocytes. In adults,
astrocytomas most often arise in the cerebrum. In children, they occur in the brain stem,
the cerebrum, and the cerebellum.

 Brain stem glioma: The tumor occurs in the lowest part of the brain. Brain stem
gliomas most often are diagnosed in young children and middle-aged adults.

 Ependymoma: The tumor arises from cells that line the ventricles or the central canal
of the spinal cord. They are most commonly found in children and young adults.

 Oligodendroglioma: This rare tumor arises from cells that make the fatty substance
that covers and protects the nerves. These tumors usually occur in the cerebrum. They
grow slowly and usually do not spread into surrounding brain tissue. They are most
common in middle-aged adults.

Other primary tumors that do not begin in the glial cells are

 Medulloblastoma or primitive neuroectodermal tumor: This tumor usually arises in


the cerebellum. Rarely do these tumors spread outside the brain. It is the most common
brain tumor in children.

 Meningioma: This tumor arises in the meninges and grows slowly. Meningioma are
benign and do not spread from their original site. Malignant meningiomas are very
rare.

 Schwannoma: This tumor arises from the Schwann cells. These cells line the nerve
that controls balance and hearing. This nerve is in the inner ear. The tumor is also
called an acoustic neuroma. It occurs most often in adults. They are more common in
people who have a genetic disease called neurofibromatosis type 2.

 Craniopharyngioma: The tumor grows at the base of the brain, near the pituitary
gland. This type of tumor most often occurs in children.

 Haemangioblastoma: This is a rare type of tumor that develops from cells that line
the blood vessels. They are benign and grow slowly.

 Pituitary tumors: These types of tumors develop in the pituitary gland. They are
benign and are called pituitary adenomas.

 Germ cell tumor of the brain: The tumor arises from a germ cell. Most germ cell
tumors that arise in the brain occur in people younger than 30 years. The most
common type of germ cell tumor of the brain is a germinoma.

 Pineal region tumor: This rare brain tumor arises in or near the pineal gland. The
pineal gland is located between the cerebrum and the cerebellum. The most common
tumors are germinomas, teratomas, pineocytomas and pineoblastomas.

 Secondary tumors: Secondary brain tumors, or metastatic tumors, occur when cancer cells from
other parts of the body, such as the lung, breast, skin, kidney, or colon, spread to the brain. These
tumor cells reach the brain via the bloodstream. Secondary tumors in the brain are far more
common than primary brain tumors. About 25% of tumors elsewhere in the body metastasize to
the brain.

 Childhood brain tumor: Brain tumors are the most common solid tumors that occur in
children. Children of any age may be affected. Boys are affected more often than girls. Two
types of brain cancers that are more common in children than in adults are medulloblastoma and
ependymoma. Treatment and chance of recovery depend on the type of tumor, its location
within the brain, the extent to which it has spread, and the child's age and general health.

1.2.3 Image Acquisition


Accessing real medical images such as MRI, PET or CT scans for research is very complex because of
privacy issues and heavy technical hurdles. The purpose of this study is to compare automatic brain
tumor detection methods on MR brain images. To estimate the volume of a brain tumor, we require
images in three views in our database: 1. sagittal, 2. axial and 3. coronal. Together, these three views
give a 3D view of the brain tumor.

Sagittal View Axial View Coronal View

Figure 1.5 Types of views of a Brain image

1.3 Challenges for Medical Image processing


In today's health care, imaging plays an important role throughout the entire clinical process, from
diagnostics and treatment planning to surgical procedures and follow-up studies. It is still a challenging
and difficult job to obtain reliable results in medical image analysis, such as the characterization
of abnormal masses and their exact segmentation, given the variability in position and image intensity
in the brain. Manually segmenting brain tissues is generally time-consuming, irreproducible, and difficult.

Conventionally, simple thresholding or morphological techniques have been used on each image to
segment the tissue or region of interest for diagnosis, treatment planning, and follow-up of the patients.
These methods are unable to exploit all information provided by MRI. Advanced image analysis
techniques have been and still are being developed to optimally use MRI data and solve the problems
associated with previous techniques. Most of the methods presented for tumor detection and
segmentation combine several techniques, and no clear division can be made between them; but in
general, as classically done in image segmentation, we can divide the methods into three groups:
region-based, contour-based, and fusion of region- and boundary-based methods.

Region-based methods seek out clusters of pixels that share some measure of similarity. These methods
reduce operator interaction by automating some aspects of applying the low level operations, such as
threshold selection, histogram analysis, classification, etc. They can be supervised or non-supervised.
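As a toy illustration of the region-based idea, a tiny k-means on pixel intensities (a stand-in for the classification step, not one of the methods surveyed here) groups pixels that share similar intensity:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=10):
    """Tiny k-means on pixel intensities: an illustrative stand-in for the
    classification step of a region-based method."""
    # Deterministic init: spread centers across the intensity range.
    centers = np.quantile(values, np.linspace(0, 1, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Synthetic slice: dark background plus a bright square "lesion", with noise.
rng = np.random.default_rng(1)
img = rng.normal(0.2, 0.05, size=(32, 32))
img[10:20, 10:20] = rng.normal(0.9, 0.05, size=(10, 10))

labels, centers = kmeans_1d(img.ravel(), k=2)
mask = labels.reshape(img.shape) == np.argmax(centers)  # brightest cluster
print(bool(mask[12, 12]), bool(mask[2, 2]))  # True False
```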

Boundary-based methods rely on the evolution of a curve, based on internal forces (e.g. curvature) and
external forces, such as image gradient, to delineate the boundary of brain structure or pathology. These
methods can also be supervised or non-supervised. They can be further classified into two classes: (1)
parametric deformable model (classical snake) and (2) geometric deformable model (level sets).
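The geometric (level-set) variant can be sketched in a few lines: the contour is the zero level set of a function φ, and evolving φ moves the contour implicitly. The toy below evolves φ under a constant speed (φ_t = F·|∇φ|), ignoring the image-derived forces a real method would use:

```python
import numpy as np

# The contour is represented implicitly as the zero level set of phi:
# negative inside, positive outside. Start from a circle of radius 20.
n = 64
y, x = np.mgrid[0:n, 0:n]
phi = np.sqrt((x - n / 2) ** 2 + (y - n / 2) ** 2) - 20.0

def area_inside(phi):
    """Number of pixels enclosed by the zero level set."""
    return int(np.sum(phi < 0))

a0 = area_inside(phi)

# Evolve phi_t = F * |grad phi| with constant speed F = 1, which shrinks
# the contour; a real geometric deformable model would derive F from the
# image gradient and curvature.
for _ in range(50):
    gy, gx = np.gradient(phi)
    phi += 0.1 * np.sqrt(gx ** 2 + gy ** 2)

print(area_inside(phi) < a0)  # True: the implicit contour has shrunk
```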

The third core class of tumor segmentation methods is the fusion of region- with boundary-based
methods. This class has been the most successful, as this technique uses information from two different
sources: region and boundary. Due to its large success, it has recently received much attention.

1.4 Motivation and Scope
The accurate segmentation of internal structures of the brain is of great interest for the study and the
treatment of tumors. It aims at reducing the mortality and improving the surgical or radiotherapeutic
management of tumors. In brain oncology it is also desirable to have a descriptive human brain model
that can integrate tumor information extracted from MRI data such as its localization, its type, its shape,
its anatomo-functional positioning, as well as its influence on other brain structures.

Despite numerous efforts and promising results in the medical imaging community, accurate and
reproducible segmentation and characterization of abnormalities are still a challenging and difficult
task. Existing methods leave significant room for increased automation, applicability and accuracy.

Objectives and contributions: In this context, the first aim of this work is to develop a framework for a
robust and accurate segmentation of a large class of brain tumors in MR images. Most existing methods
are region-based. They have several advantages, but line and edge information in computer vision
systems are also important. The proposed method tries to combine region and edge information, thus
taking advantage of both approaches while cancelling their drawbacks.

1.5 Thesis Outline


Chapter 2 gives an overview of the survey on brain image processing. Chapter 3 presents preprocessing
and enhancement. Brain edge detection is given in Chapter 4, brain contour estimation in Chapter 5,
and the conclusion is drawn in Chapter 6.

Chapter 2
Survey on Brain Image Processing
Detecting and segmenting brain tumors in Magnetic Resonance Images (MRI) is an important but time-
consuming task performed by medical experts. Automating this process is a challenging task due to the
often high degree of intensity and textural similarity between normal areas and tumor areas. In this
chapter we review previous work on segmenting brain tumors from MR images.

[Magnetic Resonance Image Enhancement Using V-Filter, 1990][1] proposed a method to enhance
MRI images with the V-filter, a spatial nonlinear filter. The signal-to-noise ratio increases when the
V-filtering approach is used to sharpen edges in MRIs. A region segmentation technique is used to
extract the brain tumor and edematous boundary.

[Deformable region model for locating the boundary of brain tumors, 1995][2] proposed a simple
system for the segmentation of brain tumors: a new deformable region model represents the brain
tumor, and a shrinking-growing method locates its boundary from a designated initial plan. The
shrinking-growing process maximizes the area of an object under the condition that the object and its
boundary have the same gray-level distributions. Compared to deformable contour models (also
called active contour models, or snakes), this method does not require the initial plan to be in close
proximity to the boundary.

This region model is based on Markov random field theory. In this method of locating the boundary,
there are two basic operations: shrinking and growing.

Process of Locating the Boundary: The process to locate the boundary of the brain tumor is as follows-

1. Denote a coarse region of the tumor manually, set A=0;
2. Shrink the region until FO=FB, and measure the region area A*;
   if A*>A, then A=A*; else stop;
3. Grow the region until FO != FB, and go to 2.

In this method, the initial plan need not be very close to the real boundary.
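The shrink step can be sketched under loud simplifying assumptions: binary erosion stands in for the region deformation, and comparing mean intensities of the region interior and its boundary ring stands in for comparing the full gray-level distributions FO and FB (the paper's Markov-random-field model is not reproduced here):

```python
import numpy as np

def erode(m):
    """4-neighbourhood binary erosion via array shifts (no SciPy needed)."""
    return (m & np.roll(m, 1, 0) & np.roll(m, -1, 0)
              & np.roll(m, 1, 1) & np.roll(m, -1, 1))

def distributions_match(img, region, tol=0.2):
    """Crude stand-in for "FO == FB": compare the mean intensity of the
    region interior with that of its one-pixel boundary ring."""
    interior = erode(region)
    ring = region & ~interior
    return abs(img[interior].mean() - img[ring].mean()) < tol

# Synthetic slice: a bright circular "tumor" of radius 10 on a dark background.
n = 64
y, x = np.mgrid[0:n, 0:n]
dist = np.sqrt((x - 32) ** 2 + (y - 32) ** 2)
img = (dist <= 10).astype(float)

# Step 1: a deliberately oversized initial region around the tumor.
region = dist <= 15

# Step 2: shrink until the object and its boundary share a similar
# intensity distribution.
for _ in range(20):
    if distributions_match(img, region):
        break
    region = erode(region)

print(region.sum() < (dist <= 15).sum())  # True: the region shrank
print((region & (dist <= 10)).sum() / region.sum() > 0.9)  # True: it hugs the tumor
```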

[Object boundary location by region and contour deformation, 1996][3] proposed a snake-type method
to locate an object boundary from an initial plan, which must lie at a very small distance from the
object boundary. The method divides into two parts: a combination of region and contour deformation,
and a designated initial boundary plan from which the object boundary is located. To represent an
object they proposed a new deformable region model, which is used for the region deformation.

[Computerized Tumor boundary detection using a Hopfield neural network, 1997][4] proposed a
computerized approach to detect the brain tumor boundary using a Hopfield neural network. Taking
advantage of the collective computational ability and energy convergence capability of the Hopfield
network, the method produces results comparable to those of standard “snakes”-based algorithms,
but requires less computing time. With the parallel processing potential of the Hopfield network, the
proposed boundary detection can be implemented for real-time processing.

First, the raw MRI data are preprocessed by applying a low-pass linear filter to each slice to enhance
the image. Then an initial slice is chosen for each data set, followed by initial boundary detection and
searching-grid estimation based on morphological operations. The final step consists of detecting the
tumor boundaries using the Hopfield neural network.

The aim of the boundary detection approach is to detect the boundary of the brain tumor in each image
slice and separate the tumor from normal brain tissues. Once isolated, the detected tumor in each slice
can be further processed for volume measurement and three-dimensional (3D) rendering.

[Automatic tumor segmentation using knowledge-based techniques, 1998][5] presented a system that automatically segments and labels glioblastoma-multiforme tumors in magnetic resonance images (MRIs) of the human brain. The MRIs consist of T1-weighted, proton-density, and T2-weighted feature images and are processed by a system that integrates knowledge-based techniques with multispectral analysis. Initial segmentation is performed by an unsupervised clustering algorithm. The segmented image, along with the cluster centers for each class, is provided to a rule-based expert system which extracts the intracranial region. Multispectral histogram analysis separates suspected tumor from the rest of the intracranial region, with region analysis used to perform the final tumor labeling. The system was trained on three volume data sets and tested on thirteen unseen volume data sets acquired from a single MRI system. The knowledge-based tumor segmentation was compared with supervised, radiologist-labeled “ground truth” tumor volumes and with supervised k-nearest-neighbor tumor segmentations. The results generally correspond well to ground truth, both on a per-slice basis and, more importantly, in tracking total tumor volume during treatment over time.

[Adaptive, Template Moderated, Spatially Varying Statistical Classification, 2000][6] developed a novel algorithm for automatic image segmentation of normal and abnormal anatomy from medical images. Spatially varying statistical classification is used to moderate the segmentation. The algorithm consists of an iterated sequence of spatially varying classification and nonlinear registration, which forms an adaptive, template-moderated (ATM), spatially varying statistical classification (SVC). Classification methods and nonlinear registration methods are often complementary, both in the tasks where they succeed and in the tasks where they fail; by integrating these approaches, the new algorithm exploits the strengths of each.

[Tumor-Induced Structural and Radiometric Asymmetry in Brain Images, 2001][7] proposed a framework for analyzing radiometric asymmetry and structural asymmetry in brain images. “Mass-effect brain tumors cause structural asymmetry by displacing healthy tissue, and may cause radiometric asymmetry in adjacent normal structures due to edema”. In this framework, images are registered to their mid-sagittal plane reflections. “The registration process accounts for tissue displacement through large deformation image warping. Radiometric differences are taken into account through an additive intensity field. We present an efficient multi-scale algorithm for the joint estimation of structural and radiometric asymmetry”.

[Tracking Tumor Growth Rates in Patients with Malignant Gliomas: A Test of Two Algorithms, 2001][8] applied two 3D image-analysis algorithms to MR images: nearest-neighbor tissue segmentation and surface modelling. The study compares the two algorithms' ability to track tumor volume, applying them to patients with glioblastoma multiforme (GBM). Enhancement volumes produced by the nearest-neighbor algorithm were highly correlated with manually segmented volumes, as were the growth rates measured by halving and doubling times. “Enhancement volumes generated by the surface modelling algorithm were also highly correlated with the standard of reference, although growth rates were not”.

[Model-based brain and tumor segmentation, 2002][9] proposed another approach to brain tumor segmentation: combining image segmentation based on statistical classification with a geometric prior has been shown to significantly increase robustness and reproducibility. Using a probabilistic geometric model of the sought structures together with image registration serves both the initialization of probability density functions and the definition of spatial constraints. A strong spatial prior, however, prevents segmentation of structures that are not part of the model.

They present an extension to an existing expectation-maximization (EM) segmentation algorithm that modifies a probabilistic brain atlas with an individual subject's information about tumor location. This information is obtained from the subtraction of post- and pre-contrast MRI and the calculation of a posterior probability map for tumor. The new method handles both phenomena: space-occupying mass tumors and infiltrating changes such as edema.

This work combined elastic atlas registration with statistical classification. Elastic registration of a brain atlas helped to mask the brain from surrounding structures. A further step uses distance from the brain boundary as an additional feature to improve the separation of clusters in multi-dimensional feature space. Initialization of the probability density functions still requires a supervised selection of training regions. The core idea is to augment statistical classification with spatial information to account for the overlap of distributions in intensity feature space.

This method has been shown to be very robust and highly reproducible for normal brain images, but fails in the presence of large pathology. A more recent extension detects brain lesions as outliers and was successfully applied to the detection of multiple sclerosis lesions. Brain tumors, however, cannot simply be modeled as intensity outliers, due to overlapping intensities with normal tissue and/or significant size.
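The atlas-guided EM idea can be illustrated in a simplified 1D form: a two-class Gaussian mixture whose per-voxel mixing weights come from a probabilistic atlas rather than global priors. The class count, initialization, and atlas shape below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def em_atlas(intensities, atlas, n_iter=25):
    """EM for a two-class 1D Gaussian mixture whose per-voxel mixing
    weights come from a probabilistic atlas (shape: n_voxels x 2)."""
    x = np.asarray(intensities, dtype=float)
    mu = np.array([x.min(), x.max()])              # crude initialization
    var = np.array([x.var(), x.var()]) + 1e-6
    for _ in range(n_iter):
        # E-step: class responsibilities, with the atlas acting as a
        # spatially varying prior instead of global mixing weights.
        lik = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        post = atlas * lik
        post /= post.sum(axis=1, keepdims=True)
        # M-step: re-estimate class means and variances.
        w = post.sum(axis=0)
        mu = (post * x[:, None]).sum(axis=0) / w
        var = (post * (x[:, None] - mu) ** 2).sum(axis=0) / w + 1e-6
    return post, mu
```

On well-separated synthetic intensities with an atlas only weakly favoring the correct class, the posterior converges to the right labeling; the full method additionally modifies the atlas itself from the contrast-subtraction tumor map.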

[Estimation of tumor volume with fuzzy-connectedness segmentation of MR images, American Journal of Neuroradiology, 2002][10] The purpose of this work was to adapt the fuzzy-connectedness segmentation technique to measure tumor volume. The technique requires only limited operator interaction. Segmentation was performed on axial and coronal gadolinium-enhanced and axial fluid-attenuated inversion recovery (FLAIR) images using a fuzzy-connectedness algorithm, and tumor volumes were generated. Operator interaction was limited to selecting representative seed points within the tumor and, if necessary, editing the segmented image to include or exclude improperly classified regions.

Measurements of tumor volume were highly reproducible when they were obtained with no editing;
intraobserver coefficients of variation were 0.15–0.37% and 0.29–0.38%, respectively, for enhanced
images and FLAIR images. Editing consistently produced smaller volumes, at the cost of greater
variability in volume measurements. Coefficients of variation for volumes with editing ranged from
0.2% to 1.3%.

Fuzzy-connected segmentation permits rapid, reliable, consistent and highly reproducible measurement
of tumor volume from MR images with limited operator interaction.

[Fast robust automated brain extraction, Human Brain Mapping, 2002][11] developed an automated method for segmenting magnetic resonance head images into brain and non-brain. It is very robust and accurate and has been tested on thousands of data sets from a wide variety of scanners, taken with a wide variety of MR sequences. The method, Brain Extraction Tool (BET), uses a deformable model that evolves to fit the brain's surface through the application of a set of locally adaptive model forces. The method is very fast and requires no preregistration or other pre-processing before being applied. The paper describes the method and gives example results, together with extensive quantitative testing against “gold-standard” hand segmentations and two other popular automated methods.

There have been three main existing approaches to brain/non-brain segmentation: (1) manual, (2) thresholding-with-morphology, and (3) surface-model-based.

In the proposed brain extraction method, the intensity histogram is processed to find “robust” lower and upper intensity values for the image, and a rough brain/non-brain threshold. The centre of gravity of the head image is found, along with the rough size of the head in the image. A triangular tessellation of a sphere's surface is initialized inside the brain and allowed to slowly deform, one vertex at a time, following forces that keep the surface well spaced and smooth while attempting to move toward the brain's edge. If a suitably clean solution is not arrived at, this process is re-run with a higher smoothness constraint. Finally, if required, the outer surface of the skull is estimated.
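The first stages of this pipeline can be sketched directly from the description: robust lower/upper intensities from the histogram (BET uses the 2nd and 98th percentiles, with the rough threshold 10% of the way between them) and the intensity-weighted centre of gravity. The synthetic-volume details in the usage are illustrative; the surface deformation itself is not reproduced.

```python
import numpy as np

def robust_threshold_and_cog(volume):
    """Robust intensity bounds, rough brain/non-brain threshold, and
    intensity-weighted centre of gravity (a sketch of BET's first stages,
    not FSL's code)."""
    t2, t98 = np.percentile(volume, [2, 98])   # robust min / max intensities
    t = t2 + 0.1 * (t98 - t2)                  # rough brain/non-brain threshold
    # Centre of gravity over above-threshold voxels, weighted by intensity.
    weights = np.clip(volume - t, 0, None)
    coords = np.indices(volume.shape).reshape(volume.ndim, -1)
    cog = (coords * weights.ravel()).sum(axis=1) / weights.sum()
    return t, cog
```

For a synthetic head volume with a bright block, the centre of gravity lands at the block's centre, which is where the spherical tessellation would then be initialized.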

[Level set evolution with region competition: automatic 3D segmentation of brain tumors, 2002][12] This work discusses the development of a new method for the automatic segmentation of anatomical structures from volumetric medical images. The driving application is the segmentation of 3-D tumor structures from magnetic resonance images (MRI). Level set evolution, combining global smoothness with the flexibility of topology changes, offers significant advantages over conventional statistical classification followed by mathematical morphology. Level set evolution with constant propagation needs to be initialized either completely inside or completely outside the structure, and can leak through weak or missing boundary parts. Replacing the constant propagation term by a signed local statistical force overcomes these limitations and results in a region competition method that converges to a stable solution.

The tumor segmentation procedure starts with an intensity-based fuzzy classification of voxels into tumor and background classes.

[A Fast Deformable Region Model for Brain Tumor Boundary Extraction, 2002][13] proposed a deformable region model to find the brain tumor in 2D brain MRI. This is a modified model that gives similar results more quickly. A shrinking-growing method is used to find the maximum area with a given gray-level distribution. In the modified model, the time taken to find the brain tumor boundary is reduced.

[Automatic Brain and Tumor Segmentation, 2002][14] This method increases robustness and reproducibility significantly by combining a geometric prior with statistical classification for image segmentation. A probabilistic geometric model and image registration are both used to initialize the definition of spatial constraints and the probability density function. The application finds the brain tumor and segments brain tissues from 3D MRI. An expectation-maximization (EM) segmentation algorithm is modified to find the tumor location from the subtraction of post- and pre-contrast MRI. “The new method handles various types of pathology, space occupying mass tumors and infiltrating changes like edema. Preliminary results on five cases presenting tumor types with very different characteristics demonstrate the potential of the new technique for clinical routine use for planning and monitoring in neurosurgery, radiation oncology, and radiology.”

[Automated Segmentation of Brain Structure from MRI, 2003][15] proposed an automated technique to find brain tumors in MRI. The method introduces a new algorithm combining thresholding, fuzzy clustering, edge filtering, and morphological operations. It overcomes the limitations of unclear edge detection and of detecting low-contrast tissue. This automatic method is based on dynamic contour-based models and gives results quite similar to the manual segmentations that radiologists produce.

[Segmentation-Based Multilayer Diagnosis Lossless Medical Image Compression, 2003][16] proposed a segmentation scheme for lossless medical image compression, i.e. segmentation-based multilayer (SML) compression. The region of interest (ROI) is found automatically by unseeded region growing (URG), and Burrows-Wheeler coding (BWC) together with wavelet-based JPEG2000 coding is used for multilayer lossless ROI compression.

[Fuzzy Connectedness and Image Segmentation, 2003][17] The proposed method uses the fuzzy connectedness framework to segment brain tumors from MRI. “The fuzzy connectedness framework
aims at capturing this notion via a fuzzy topological notion called fuzzy connectedness which defines
how the image elements hang together spatially in spite of their gradation of intensities. In defining
objects in a given image, the strength of connectedness between every pair of image elements is
considered, which in turn is determined by considering all possible connecting paths between the pair.
In spite of a high combinatorial complexity, theoretical advances in fuzzy connectedness have made it
possible to delineate objects via dynamic programming at close to interactive speeds on modern PCs”.
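The dynamic-programming delineation mentioned in the quote can be sketched as a max-min variant of Dijkstra's algorithm: a path's strength is the weakest affinity along it, and a pixel's connectedness to the seed is the strength of its best path. The exponential intensity-similarity affinity below is one illustrative choice, not the paper's exact affinity.

```python
import heapq
import numpy as np

def affinity(a, b):
    # Intensity-similarity affinity in [0, 1] (an assumed, simple choice).
    return float(np.exp(-abs(a - b) / 10.0))

def fuzzy_connectedness(image, seed):
    """Connectedness of every pixel to `seed`: the maximum over paths of
    the minimum affinity along the path, via a max-min Dijkstra."""
    conn = np.zeros(image.shape)
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        strength, (r, c) = heapq.heappop(heap)
        strength = -strength
        if strength < conn[r, c]:
            continue                       # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]:
                s = min(strength, affinity(image[r, c], image[nr, nc]))
                if s > conn[nr, nc]:
                    conn[nr, nc] = s
                    heapq.heappush(heap, (-s, (nr, nc)))
    return conn
```

On a two-region image, every pixel in the seed's region hangs together with strength 1.0, while pixels across the intensity boundary receive near-zero connectedness, which is exactly the "hanging together in spite of gradation" notion the quote describes.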

[Brain Tumor Boundary Detection in MR Image with Generalized Fuzzy Operator, 2003][18] proposed a new approach to overcome the difficulty of finding fine edges and an accurate brain boundary in MRI. The new method, the Generalized Fuzzy Operator (GFO), is based on a contour deformable model. It can be used for 3D reconstruction because it is simple and effective at finding boundaries.

[Automatic Brain Tumor Segmentation by Subject Specific Modification of Atlas Priors, 2003][19] proposed an automated system that detects brain tumors with results close to manual segmentation, and also finds gray matter, white matter, edema, and cerebrospinal fluid. “The segmentation is guided by a spatial probabilistic atlas that contains expert prior knowledge about brain structures. This atlas is modified with the subject specific brain tumor prior that is computed based on contrast enhancement.”

[Robust Estimation for Brain Tumor Segmentation, 2003][20] proposed a method divided into two parts, one for segmentation of the brain tumor and another for edema. Robust estimation is used to detect intensity outliers, and abnormalities are then detected by applying geometric and spatial constraints. It is an efficient, automatic method that segments both tumor and edema.

[A hierarchical decision tree classification scheme for brain tumour astrocytoma grading using support vector machines, 2003][21] In this method, the degree of malignancy of brain tumor astrocytomas (ASTs) is characterized by a decision tree (DT) and support vector machines (SVMs). “A two-level hierarchical DT model was constructed for the discrimination of 87 ASTs in accordance to the WHO grading system. The first level concerned the detection of low versus high-grade tumors and the second level the detection of less aggressive as opposed to highly aggressive tumors. The decision rule at each level was based on a SVM classification methodology comprising 3 steps: i) From each biopsy, images were digitized and segmented to isolate nuclei from surrounding tissue. ii) Descriptive quantitative variables related to chromatin distribution and DNA content were generated to encode the degree of tumor malignancy. iii) Exhaustive search was performed to determine best feature combination that led to the smallest classification error. SVM classifier training was based on the leave-one-out method”. The SVMs were evaluated against a Bayesian classifier and a probabilistic neural network. “The SVM classifier discriminated low from high-grade tumors with an accuracy of 90.8% and less from highly aggressive tumors with 85.6%”.
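The leave-one-out training protocol quoted above can be sketched with scikit-learn (assumed here as a stand-in; the paper's nuclear-morphometry features are replaced by synthetic two-grade data, so the resulting accuracy is illustrative only):

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

def loo_accuracy(features, grades):
    """Leave-one-out evaluation of an SVM grade classifier: each sample
    is predicted by a model trained on all remaining samples."""
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(features):
        clf = SVC(kernel="rbf", gamma="scale")
        clf.fit(features[train_idx], grades[train_idx])
        correct += int(clf.predict(features[test_idx])[0] == grades[test_idx][0])
    return correct / len(grades)

# Toy two-grade data standing in for the chromatin/DNA-content variables.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
acc = loo_accuracy(X, y)   # well-separated toy classes give high accuracy
```

Leave-one-out is a natural choice here because the study has only 87 biopsies: it uses nearly all data for training in every fold while still giving an unbiased per-sample test.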

[GPU-based level sets for 3D brain tumor segmentation, 2003][22] This proposed work presents a tool for 3D segmentation that relies on level-set surface models computed at interactive rates on commodity graphics cards (GPUs). The usefulness of level sets has been limited by two problems: first, 3D level sets are relatively slow to compute; second, their formulation usually entails several free parameters which can be very difficult to tune correctly for specific applications. The second problem is compounded by the first.

This method makes the following contributions:

 A 3D segmentation tool that uses a new level-set deformation solver to achieve interactive rates
(approximately 15 times faster than previous solutions).
 A mapping of the sparse, level-set computation to a GPU, a new numerical scheme for retaining
a thin band structure in the solution, and a novel technique for dynamic memory management
between the CPU and GPU.
 Quantitative and qualitative evidence that interactive level-set models are effective for brain
tumor segmentation.

[Evaluation of the symmetry plane in 3D MR brain images, 2003][23] proposed a method for detecting the symmetry plane in 3D MR brain images, which can be used for brain tumor detection. They express the problem as a registration problem and compute a degree of similarity between the image and its reflection with respect to a plane. The best plane is then obtained by maximizing the similarity measure. This optimization is performed using the downhill simplex method and is initialized by a plane obtained from the principal inertia axes, which proves to be close to the global optimum. The proposed algorithm was successfully tested on simulated and real 3D MR brain images. The authors also investigated the influence of the optimization procedure's control parameters on computation speed and result precision.
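A minimal 2D analogue of this optimization can be sketched as follows, assuming a simple sum-of-squared-differences dissimilarity and scipy's Nelder-Mead implementation of the downhill simplex method; the full method works in 3D with a plane parameterized by orientation as well as position.

```python
import numpy as np
from scipy import ndimage, optimize

def reflect(image, c):
    """Reflect `image` about the vertical line x = c (a 2D stand-in for
    the 3D mid-sagittal plane)."""
    flipped = image[:, ::-1]
    # The flip mirrors about x = (W - 1) / 2; shift so the mirror is at c.
    return ndimage.shift(flipped, (0, 2 * c - (image.shape[1] - 1)), order=1)

def dissimilarity(c, image):
    # Negative similarity: squared difference between image and reflection.
    return float(((image - reflect(image, c[0])) ** 2).mean())

def find_symmetry_axis(image):
    """Downhill-simplex (Nelder-Mead) search for the most symmetric axis,
    initialized at the image mid-line as the text suggests."""
    x0 = np.array([(image.shape[1] - 1) / 2.0])
    res = optimize.minimize(dissimilarity, x0, args=(image,), method="Nelder-Mead")
    return res.x[0]
```

For a synthetic image whose single bright structure is centred off the image mid-line, the search converges to the structure's true symmetry axis rather than the geometric mid-line it starts from.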

A two-step algorithm for computing the symmetry plane in 3D brain images was proposed in 2000. It is assumed that some initial plane is already given. This initial plane can be either the middle plane of the image, generally a good starting point for the optimization procedure, or the plane computed from the principal inertia axes if the brain is too tilted. The algorithm automatically re-orients and re-centers this plane. In the first step, a point-to-point correspondence is established between the two hemispheres using the demons algorithm. This correspondence is then used to find the new position of the plane by minimizing a least-squares criterion.

The authors also propose a method to estimate the 3D dissymmetry field and apply it to the following problems: the study of the normal dissymmetry within a given population; the comparison of the dissymmetry between two populations; and the detection of the significant abnormal dissymmetries of a patient with respect to a reference population. An improved version of the symmetry plane computation algorithm was presented in 2002. In the first step, the demons algorithm is replaced with block matching. In the second step, the authors use a robust least-trimmed-squares criterion. Finally, the whole process is iterated. As reported, the algorithm is efficient and achieves good accuracy for anatomical and functional images. The method also works for pathological, highly asymmetrical brains.

[Data Driven Brain Tumor Segmentation in MRI Using Probabilistic Reasoning over Space and Time, 2004][24] The proposed method uses a pipeline approach to segment the brain tumor and track its volume in MRI. The system is not fully automated: alignment and de-skulling of the brain MRI are automatic, but segmentation and volume tracking are semi-automatic, requiring initial manual correction, and use probabilistic reasoning over space and time. The method gives 3D volume results in good agreement with manual measurement.

[Computer Vision Segmentation of Brain Structures from MRI Based on Deformable Contour Models, 2004][25] A discrete dynamic contour model is used in this proposed method to find brain boundary structures in MRI. The method is useful for low-contrast images and for finding discontinuous boundaries. Internal forces, obtained from the vertices and the edges connecting adjacent vertices, and external forces, obtained from the image energy, are both used. Fuzzy C-means (FCM), a Prewitt filter, morphological operations, and thresholding are used in a new algorithm to find clear edges and good results from low-contrast images.
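The FCM clustering step can be sketched on a 1D intensity vector with the standard fuzzy C-means updates (the Prewitt, morphology, and thresholding stages of the combined algorithm are not reproduced here):

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=50):
    """Standard FCM on 1D intensities: alternate between computing
    fuzzy-membership-weighted cluster centers and updating memberships
    inversely to relative distance."""
    rng = np.random.default_rng(0)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships per sample
    for _ in range(n_iter):
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers) + 1e-9
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return u, centers
```

Unlike hard k-means, the soft memberships give boundary pixels partial weight in both clusters, which is what makes FCM a reasonable front end for low-contrast tissue with ill-defined edges.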

[MRSI Brain Tumor characterization using wavelet and wavelet packets feature spaces and artificial neural networks, 2004][26] This proposed method applies artificial neural networks to Magnetic Resonance Spectroscopic Imaging (MRSI). MRSI was developed, and is used here, to differentiate abnormal from normal tissue and to determine the type of abnormality before surgery. Features are extracted from the MRSI data using wavelets and wavelet packets, and this data set is used to evaluate the proposed method. The paper analyzes three tumor types: malignant glioma, astrocytoma, and oligodendroglioma. The wavelet approach is used for both noise reduction and feature extraction, but the feature-extraction step needs to be optimized to improve the accuracy of the approach.

[A brain tumor segmentation framework based on outlier detection, 2004][27] This paper describes a framework for automatic brain tumor segmentation from MR images. The detection of edema is done simultaneously with tumor segmentation, as knowledge of the extent of edema is important for diagnosis, planning, and treatment. Whereas many other tumor segmentation methods rely on the intensity enhancement produced by the gadolinium contrast agent in the T1-weighted image, the method proposed here does not require contrast-enhanced image channels. The only required input for the segmentation procedure is the T2 MR image channel, but the method can make use of any additional non-enhanced image channels for improved tissue segmentation. The segmentation framework is composed of three stages. First, abnormal regions are detected using a registered brain atlas as a model for healthy brains, and robust estimates of the location and dispersion of the normal brain tissue intensity clusters are used to determine the intensity properties of the different tissue types. In the second stage, the T2 image intensities are used to determine whether edema appears together with tumor in the abnormal regions. Finally, geometric and spatial constraints are applied to the detected tumor and edema regions. The segmentation procedure has been applied to three real datasets representing different tumor shapes, locations, sizes, image intensities, and enhancement.
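The robust estimation in stage one can be illustrated in a 1D simplification, using the median and MAD as robust estimates of location and dispersion (an assumption of this sketch; the paper works with multivariate intensity clusters per tissue class):

```python
import numpy as np

def robust_abnormality(intensities, cutoff=3.5):
    """Flag voxels whose intensity is far from the robust centre of the
    normal-tissue cluster. Median and MAD resist contamination by the
    tumor voxels themselves, unlike the mean and standard deviation."""
    x = np.asarray(intensities, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) * 1.4826   # consistent with sigma for Gaussians
    return np.abs(x - med) / mad > cutoff
```

The point of the robust estimators is that the abnormal voxels being searched for are present in the data used to fit the "normal" model; a plain mean/variance fit would be dragged toward the tumor intensities and under-detect them.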

[Evidential segmentation scheme of multi-echo MR images for the detection of brain tumors using neighborhood information, Information Fusion, 2004][28] The proposed method presents an evidential segmentation scheme for multi-echo magnetic resonance (MR) images for the detection of brain tumors. The segmentation is based on modeling the data with evidence theory, which is well suited to representing such uncertain and imprecise data. In this approach, the neighborhood relationship between voxels is taken into account thanks to a weighted Dempster's combination rule. This process leads to a true region-based segmentation of the brain and allows the detection of tumors. The paper particularly focuses on the conflicting information that is generated when combining neighborhood information, and shows that this conflict reflects the spatial organization of the data: it is higher at the boundary between the different structures. The authors propose and define a boundary indicator based on the amount of conflict. This indicator is then used as a new source of evidence that the specialist can aggregate with the segmentation results to soften the decision.

Evidence theory, or the theory of belief structures, was initially introduced by Dempster's work on the concepts of lower and upper bounds for a set of compatible probability distributions. Shafer later formalized the theory and showed the advantage of using belief structures to model imprecise and uncertain data. Different interpretations of the native “Dempster-Shafer” theory successively appeared. Smets and Kennes deviated from the initial probabilistic interpretation of evidence theory with the Transferable Belief Model (TBM), giving a clear and coherent interpretation of the underlying concepts of the theory.

[Atlas-based segmentation of pathological MR brain images using a model of lesion growth, 2004][29] proposed a method for brain atlas deformation in the presence of large space-occupying tumors, based on an a priori model of lesion growth that assumes radial expansion of the lesion from its starting point.

This approach involves three steps. First, an affine registration brings the atlas and the patient into
global correspondence. Then, the seeding of a synthetic tumor into the brain atlas provides a template
for the lesion. The last step is the deformation of the seeded atlas, combining a method derived from
optical flow principles and a model of lesion growth.

Results show that a good registration is performed and that the method can be applied to automatic
segmentation of structures and substructures in brains with gross deformation, with important medical
applications in neurosurgery, radiosurgery, and radiotherapy.

[Synthetic Ground Truth for Validation of Brain Tumor MRI Segmentation, 2005][30] A new method was proposed to generate synthetic multi-modal brain tumor and edema images from 3D MRI. A biomechanical model is used in which tumor and edema infiltration is modeled as a reaction-diffusion process guided by a modified diffusion tensor MRI. The authors propose the use of warping and geodesic interpolation on the diffusion tensors to simulate the displacement and destruction of white-matter fibers, and they also model the process whereby the contrast agent tends to accumulate in cortical CSF regions and active tumor regions, to obtain contrast-enhanced T1w MRI. The result is simulated multi-modal MRI with ground truth available as sets of probability maps. The system is able to generate large sets of simulated images with tumors of varying size, shape, and location, and additionally generates infiltrated and deformed healthy-tissue probabilities.

[Segmenting brain tumors using alignment-based features, 2005][31] This work quantitatively evaluates the performance of four different types of Alignment-Based features encoding spatial anatomic information for use in supervised pixel classification. It is the first work to (1) compare several types of Alignment-Based features, (2) explore ways to combine different types of Alignment-Based features, and (3) explore combining Alignment-Based features with textural features in a learning framework. The authors considered situations where existing methods perform poorly, and found that combining textural and Alignment-Based features allows a substantial performance increase, achieving segmentations that very closely resemble expert annotations.

Method: 1. Preprocessing, 2. Feature Extraction, 3. Segmentation.

[Statistical Validation of Brain Tumor Shape Approximation via Spherical Harmonics for Image Guided Neurosurgery, 2005][32] The objectives of this method are to use both 2D and 3D models, with data integrated from multiple imaging modalities, and to estimate the 3D shape and volume of the brain tumor by approximation with spherical harmonics (SH). They conclude that “3D shapes of tumors may be approximated by using SH for neurosurgical applications”.

[3D brain tumor segmentation using fuzzy classification and deformable models, 2005] [33] They
propose a method that is a combination of region-based and contour-based paradigms. It works in 3D
and on standard routine T1-weighted acquisitions. First of all they segment the brain to remove non-
brain data (skull, fat, skin, muscle) from the image. However, in pathological cases, standard
segmentation methods fail, in particular when the tumor is located very close to the brain surface.
Therefore they propose an improved segmentation method, relying on the approximate symmetry plane.
To provide an initial detection of the tumor they propose two methods. The first one is a fuzzy
classification method that is applicable to hyper-intense tumors while the second one is based on
symmetry analysis and applies to any type of tumor. In this method they first calculate the approximate symmetry plane, and symmetry analysis is then performed to determine the regions that deviate from the symmetry assumption. The aim of the detection approach is to roughly locate the tumor; it does not provide an accurate estimation of the tumor's boundaries, so a refinement step is proposed. This is achieved through a parametric deformable model constrained by spatial relations.

[Extraction of brain tumor from MR images using one-class support vector machine, 2005][34] An image segmentation approach exploiting a one-class support vector machine (SVM) was developed for the extraction of brain tumors from magnetic resonance (MR) images. Based on the one-class SVM, the proposed method has the ability to learn the nonlinear distribution of the image data without prior knowledge, via an automatic procedure for SVM parameter training and an implicit learning kernel. After the learning process, the segmentation task is performed. The proposed technique was applied to 24 clinical MR images of brain tumors for both visual and quantitative evaluation. Experimental results suggest that the proposed query-based approach provides an effective and promising method for brain tumor extraction from MR images with high accuracy.

In the segmentation framework, the user only needs to feed the one-class SVM classifier with a chosen image sample over the tumor area as the query for segmentation. The approach can then automatically learn the nonlinear tumor data distribution without additional prior knowledge and optimally turn out a flexible decision boundary for the tumor region. The final segmentation results are obtained after region analysis. Note that the SVM generalizes well in high-dimensional spaces, and feature extraction is done automatically in the SVM training stage; hence no specific feature extraction approach is required in this method.
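The query-based scheme can be sketched with scikit-learn's OneClassSVM, assumed here as a stand-in for the paper's implementation; the raw 1D intensities, the class means, and the `nu`/`gamma` settings are illustrative choices rather than the paper's.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# A user-selected patch over the tumor supplies the only training data,
# as in the query-based scheme above (synthetic intensities stand in
# for real MR data).
rng = np.random.default_rng(0)
tumor_patch = rng.normal(loc=150.0, scale=5.0, size=(100, 1))   # query sample
image_pixels = np.vstack([rng.normal(150.0, 5.0, (50, 1)),      # tumor-like
                          rng.normal(60.0, 5.0, (50, 1))])      # background

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
clf.fit(tumor_patch)                     # learn the tumor distribution only
labels = clf.predict(image_pixels)       # +1 = tumor-like, -1 = outlier
```

Only positive (tumor) examples are needed: the one-class SVM wraps a boundary around the query sample's distribution, and everything falling outside is rejected, which matches the single-seed interaction model described above.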

[Multilevel Segmentation and Integrated Bayesian Model Classification with an Application to Brain Tumor Segmentation, 2006][35] In this approach, heterogeneous image data are segmented automatically. A weighted-aggregation algorithm is used to incorporate the resulting model-aware affinities into multilevel segmentation. The technique benefits from integrating model-aware affinities for the segmentation of brain tumor and edema.

[Hybrid Deformable Models for Medical Segmentation and Registration, 2006][36] A hybrid deformable model is introduced in this paper. This automatic model gives better results for segmentation and registration. Several hybrid models take part in the segmentation and registration; the hybrid deformable model integrates region information and performs better on both tasks.

[Content Based Image Retrieval for MR Image Studies of Brain Tumors, 2006][37] In this proposed work, a content-based image retrieval method is used to distinguish glioblastoma multiforme (GBM) from non-GBM tumors.

[Integration of fuzzy spatial relations in deformable models—application to brain MRI
segmentation, 2006][38] This proposal presents a general framework to integrate a new type of
constraints, based on spatial relations, in deformable models. In the proposed approach, spatial relations
are represented as fuzzy subsets of the image space and incorporated in the deformable model as a new
external force. Three methods to construct an external force from a fuzzy set representing a spatial
relation are introduced and discussed. This framework is then used to segment brain subcortical
structures in magnetic resonance images (MRI). A training step is proposed to estimate the main
parameters defining the relations. The results demonstrate that the introduction of spatial relations in a deformable model can substantially improve the segmentation of structures with low contrast and ill-defined boundaries.

[Brain Tumor Detection in MRI: Technique and Statistical Validation, 2006][39] Two fractal-based techniques were proposed to segment brain tumors in MRI and classify them: the piecewise-triangular-prism-surface-area (PTPSA) algorithm for fractal feature extraction, and a fractional Brownian motion (fBm) framework that combines fractal and wavelet analyses for fractal-wavelet feature extraction. “Three MRI modalities such as T1 (gadolinium-enhanced), T2 and fluid-attenuated inversion-recovery (FLAIR) are exploited in this work. The self-organizing map (SOM) algorithm is used for tumor segmentation”.

[Multi-modal image set registration and atlas formation, 2006][40] This method proposes a Bayesian framework for registering two multi-modal sets of brain images and for multi-class brain atlas formation. "In this framework, the estimated transformations are generated using maximal information about the underlying neuroanatomy present in each of the different modalities. This modality independent registration framework is achieved by jointly estimating the posterior probabilities associated with the multi-modal image sets and the high-dimensional registration transformations mapping these posteriors". Registration results are presented for multi-modal MR images of an adult human brain, and atlas-formation results for a population of five infant human brains.

[Interactive, GPU-Based Level Sets for 3D Segmentation, 2007][41] This proposed method uses an interactive, GPU-based level-set approach to segment brain tumors from MRI. It produces a 3D view of the tumor, but at the cost of extra computing time, and its free parameters are tuned, though not correctly, for specific applications. Because the 3D level-set technique is tool-based, it has the two limitations just described: slow computation and parameters that are difficult to tune correctly.

[Automatic brain tumor segmentation using symmetry analysis and deformable models, 2007] [42]
They propose a new general automatic method for segmenting brain tumors in 3D MRI. The method is applicable to different types of tumors. A first detection process is based on selecting asymmetric areas
with respect to the approximate brain symmetry plane. Its result constitutes the initialization of a
segmentation method based on a combination of a deformable model and spatial relations, leading to a
precise segmentation of the tumors. The results obtained on different types of tumors have been
evaluated by comparison with manual segmentations.

Existing methods are classically divided into region-based and contour-based methods, and are usually dedicated to fully enhanced tumors or specific types of tumors. In the first class, Clark (IEEE Transactions on Medical Imaging) has proposed a method for tumor segmentation using knowledge-based and fuzzy classification, where a learning process prior to segmenting a set of images is necessary. Other methods are based on statistical pattern recognition techniques; these methods fail in the case of large deformations in the brain. Existing contour-based methods are not fully automatic and need some manual operation for initialization. Lefohn has proposed a semi-automatic method using level sets.
Another segmentation method based on level sets was introduced by Ho, which uses T1-weighted images both with and without contrast agent for tumor detection. A method combining a deformable model and a neural network was introduced by Zhu and Yang; it processes the image slice by slice and is not a true 3D method. In their paper the authors introduce a fully automatic method for the segmentation of different types of tumors in 3D MRI, based on a combination of region-based and contour-based methods. In the first step, they use the approximate mid-sagittal symmetry plane and detect tumors as an asymmetry with respect to this plane. In the second step, a precise segmentation is obtained using an original combination of deformable models and spatial relations.

[A Combined MRI and MRSI based Multiclass System for brain tumor recognition using LS-SVMs with class probabilities and feature selection, 2007][43] In this paper the authors extend previous work, which used only binary classifiers to assess the type and grade of a tumor, to a multiclass classification system that produces class probabilities. The important problem of input feature selection is also addressed. The authors state: "Least squares support vector machines (LS-SVMs) with radial basis function kernel are applied and compared with linear discriminant analysis (LDA). Both a Bayesian framework and cross-validation are used to infer the parameters of the LS-SVM classifiers. Four different techniques to obtain multiclass probabilities as a measure of accuracy are compared. Four variable selection methods are explored. MRI and MRSI data are selected from the INTERPRET project database and the multiclass classifier system can be of great help in the diagnosis of brain tumors".

[Probabilistic Segmentation of brain tumors Based on Multi-Modality Magnetic Resonance Images, 2007][44] This proposed method integrates multi-modal Magnetic Resonance Images. Tumor, edema, and normal tissues are differentiated by their method from MRI, and probabilistic tissue maps are produced from the imaging characteristics of tumors and surrounding tissues. "The main contributions of this work are: 1) conventional structural MR modalities are combined with diffusion tensor imaging data to create an integrated multimodality profile for brain tumors, and 2) in addition to the tumor components of enhancing and non-enhancing tumor types, edema is also characterized as a separate class in our framework. Classification performance is tested on 22 diverse tumor cases using cross-validation".

[Automatic Segmentation of Neonatal Brain MRI, 2009][45] This method concentrates on neonatal MRI. A registered probabilistic brain atlas is used to select training samples and to serve as a spatial prior. Intensity distributions are first estimated with the help of graph clustering; bias correction is then performed using graph clustering and robust estimation together. After that, sample pruning and non-parametric density estimation are used for segmentation. Myelinated regions, non-myelinated regions, and brain structures are segmented by this technique.

[An Improved Implementation of Brain Tumor Detection Using Segmentation Based on Hierarchical Self Organizing Map, 2010][46] This method describes a segmentation approach consisting of two phases. In the first phase, the MRI brain image is acquired from a patient database; film artifacts and noise are removed, after which HSOM is applied for image segmentation. HSOM is an extension of the conventional self-organizing map, used to classify the image row by row. With the lowest level of the weight vector and a higher value for tumor pixels, computation speed is achieved by HSOM with vector quantization. In this paper, a new unsupervised learning optimization algorithm, the SOM, is implemented to extract the suspicious region in the segmentation of MRI brain tumors. Textural features can then be extracted from the suspicious region to classify it as benign or malignant.

[3D variational brain tumor segmentation using Dirichlet priors on a clustered feature set, 2011][47] This proposed method uses a clustering technique to find brain tumors. Automatically finding a brain tumor is a challenging task; this approach performs 3D variational segmentation automatically and differentiates brain tumor from healthy brain tissue well. The approach was evaluated on MRI scans of fifteen brain tumor patients whose tumors were inhomogeneous and small.

[Segmentation and Identification of Brain Tumor MRI Image with Radix4 FFT Techniques, 2011][48] In this work, radix-4 FFT techniques are used for segmentation and identification of brain tumors in MRI images. A radix-4 FFT recursively partitions a DFT into four quarter-length DFTs of groups of every fourth time sample. The total computational cost is reduced because the outputs of these shorter FFTs are reused to compute the final output. The paper discusses the performance of the radix-4 FFT on the intensity signal of an MRI brain image. Variants of the radix-4 FFT, such as the zero-padded FFT, windowed FFT, and windowed zero-padded FFT, are used to study spectral leakage and the spread of tumor cells into nearby regions, so that tumor characteristics and intensity levels can be studied. It is found that the windowed zero-padded radix-4 FFT method gives a higher-amplitude signal and less spectral leakage. This study can be further used to analyze the characteristics of the brain tumor.
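The leakage effect discussed above can be illustrated with a small sketch (not code from the cited paper): a plain zero-padded DFT of an off-bin sinusoid is compared with a Hann-windowed, zero-padded one. A radix-4 FFT would compute the same spectra faster; a direct DFT is used here only to keep the sketch self-contained. The signal frequency, window, and bin ranges are illustrative assumptions.

```python
import cmath
import math

def dft_mag(x, n_out):
    """Magnitude spectrum of x zero-padded to n_out points (direct DFT;
    a radix-4 FFT computes the same result in O(n log n))."""
    x = list(x) + [0.0] * (n_out - len(x))
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n_out)
                    for t in range(n_out))) for k in range(n_out)]

n = 32
# A tone at 5.3 cycles per record: off-bin, so a rectangular window leaks.
sig = [math.sin(2 * math.pi * 5.3 * t / n) for t in range(n)]
hann = [0.5 - 0.5 * math.cos(2 * math.pi * t / (n - 1)) for t in range(n)]

plain = dft_mag(sig, 128)                                    # zero-padded only
windowed = dft_mag([s * w for s, w in zip(sig, hann)], 128)  # windowed + padded

# Leakage: energy far from the spectral peak (peak sits near bin 21).
far = list(range(0, 2)) + list(range(42, 64))
leak_plain = max(plain[k] for k in far) / max(plain)
leak_win = max(windowed[k] for k in far) / max(windowed)
print(leak_win < leak_plain)  # windowing suppresses the leakage
```

The Hann window trades a wider main lobe for much lower sidelobes, which is why the windowed zero-padded variant shows less spectral leakage, consistent with the finding reported above.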

[Semi Automated Tumor Segmentation from MRI Images Using Local Statistics Based Adaptive Region Growing, 2012][49] This work develops an algorithm by modifying the existing Region Growing (RG) algorithm, considering the local statistics of the pixels along with a Pixel Run Length (PRL) parameter. The PRL-based Adaptive Region Growing (ARG) algorithm gave satisfactory results with a good level of accuracy. The segmented tumor is quantified by area, perimeter, and form factor, which in turn helps to classify different shapes and contours of tumors. The algorithm is semi-automated and can help radiologists and neurologists perform diagnosis more effectively and accurately.

The input image used for the algorithm is shown in Fig. 1. Qualitative assessment of the image shows a difference in gray-level intensities inside and outside the tumor region, and this intensity difference is used in developing the RG algorithm. In the RG technique, one pixel is chosen randomly inside the tumor region; this pixel is called the seed point [6]. The region then grows from the seed point in all directions as long as the intensity of the pixels remains approximately the same as the average intensity of the tumor region.

Boundary pixels usually have high intensity, so the region stops growing at the boundary. There may be a few pixels inside the tumor region whose intensity equals that outside the tumor region; the PRL-ARG algorithm should be robust to all such unfavorable conditions.
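The growth rule described above can be sketched as a plain seeded region-growing loop. This is a simplified illustration, not the authors' PRL-ARG algorithm: the local-statistics and run-length refinements are omitted, and the tolerance value and tiny test image are assumptions.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, accepting 4-connected neighbours whose
    intensity stays within `tol` of the running region mean."""
    rows, cols = len(image), len(image[0])
    region = {seed}
    total = image[seed[0]][seed[1]]
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        mean = total / len(region)
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                if abs(image[nr][nc] - mean) <= tol:
                    region.add((nr, nc))
                    total += image[nr][nc]
                    frontier.append((nr, nc))
    return region

# A bright 2x2 "tumor" patch inside a dark background.
img = [[10, 10, 10, 10],
       [10, 90, 95, 10],
       [10, 92, 94, 10],
       [10, 10, 10, 10]]
print(sorted(region_grow(img, (1, 1), 20)))
```

Growth halts at the high-contrast boundary because the intensity test against the running mean fails, which mirrors how the boundary pixels stop the region in the description above.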

[Detection and Quantification of brain tumor from MRI of Brain and its symmetric analysis, 2012][50] This work proposes an automatic brain tumor segmentation and area detection technique using symmetry analysis. A median filter is used to enhance the image; the image is then converted to a binary image by thresholding, with intensity values normalized between 0 and 1. Otsu's method is used, which chooses the threshold that minimizes the intraclass variance of the black and white pixels. The thresholded image is then segmented by watershed segmentation, which is well suited to separating a tumor but suffers from over- and under-segmentation. For better results, morphological operations (strel, imerode, and imdilate; imerode erodes an image, imdilate dilates it) are applied to the image after it is converted to binary form. The morphological outputs are merged with the grayscale image to produce a resultant image with a sharply localized tumor.
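The Otsu step mentioned above chooses the threshold that minimizes intra-class variance, which is equivalent to maximizing between-class variance. A minimal pure-Python sketch follows; the histogram-based formulation and the synthetic two-cluster test data are illustrative assumptions, not the cited implementation.

```python
def otsu_threshold(pixels):
    """Return the threshold that minimises intra-class variance
    (equivalently, maximises between-class variance) for 8-bit data."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0
    for t in range(256):
        w_b += hist[t]              # background weight up to threshold t
        if w_b == 0:
            continue
        w_f = total - w_b           # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (sum_all - sum_b) / w_f
        between = w_b * w_f * (mean_b - mean_f) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Two well-separated intensity clusters: background ~20, tumor ~200.
pixels = [20, 22, 25, 21, 23] * 10 + [198, 200, 205, 202] * 10
t = otsu_threshold(pixels)
print(t)  # the chosen threshold falls between the two clusters
```

On well-separated data like this, the between-class variance peaks anywhere in the gap; a real pipeline would follow the thresholding with the watershed and morphological steps described above.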

This section surveyed several segmentation techniques for detecting brain tumors from MRI. The techniques are mostly automatic in nature. Most of the surveyed methods only detect the tumor, but a few of them also attempt to find the volume of the tumor from MRI in an automatic way.

This survey covered existing methods for brain tumor segmentation in magnetic resonance images. The reviewed methods are mainly 2D segmentation approaches, with a few 3D ones. Many different techniques, such as neural networks and soft computing, have been used in earlier work. To obtain good results these methods must be well initialized, and the goal remains a system that is automatic in nature.

Chapter 3

Preprocessing
Pre-processing methods use a small neighborhood of a pixel in an input image to get a new brightness
value in the output image. Such pre-processing operations are also called filtration.

Pre-processing methods can be divided into two groups according to the goal of the processing:

 Smoothing suppresses noise or other small fluctuations in the image; it is equivalent to the suppression of high frequencies in the frequency domain. Unfortunately, smoothing also blurs all sharp edges that bear important information about the image.

 Gradient operators are based on local derivatives of the image function. Derivatives are
bigger at locations of the image where the image function undergoes rapid changes. The aim of
gradient operators is to indicate such locations in the image. Gradient operators suppress low
frequencies in the frequency domain (i.e. they act as high-pass filters). Noise is often high
frequency in nature; unfortunately, if a gradient operator is applied to an image the noise level
increases simultaneously.

 Clearly, smoothing and gradient operators have conflicting aims. Some pre-processing
algorithms solve this problem and permit smoothing and edge
enhancement simultaneously.

Another classification of pre-processing methods is according to the transformation properties.

 Linear operations calculate the resulting value in the output image pixel g(i,j) as a linear combination of brightnesses in a local neighborhood O of the pixel f(i,j) in the input image, weighted by coefficients h:

g(i,j) = Σ(m,n)∈O h(m,n) · f(i−m, j−n)

This equation is equivalent to discrete convolution with the kernel h, which is called a convolution mask.
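The linear-combination rule above can be sketched as a brute-force discrete 2-D convolution. This is a naive illustration (valid-region only); practical implementations handle image borders and use separable kernels or FFT-based methods.

```python
def convolve2d(f, h):
    """Discrete 2-D convolution of image f with an odd-sized kernel h
    (valid region only, so the output shrinks by the kernel radius)."""
    kr, kc = len(h) // 2, len(h[0]) // 2
    rows, cols = len(f), len(f[0])
    out = []
    for i in range(kr, rows - kr):
        row = []
        for j in range(kc, cols - kc):
            acc = 0.0
            # g(i,j) = sum over the neighbourhood of h(m,n) * f(i-m, j-n)
            for m in range(-kr, kr + 1):
                for n in range(-kc, kc + 1):
                    acc += h[m + kr][n + kc] * f[i - m][j - n]
            row.append(acc)
        out.append(row)
    return out

# A 3x3 averaging mask applied to a constant image leaves it unchanged.
mask = [[1 / 9] * 3 for _ in range(3)]
img = [[9] * 5 for _ in range(5)]
print(convolve2d(img, mask))
```

The averaging mask is the simplest smoothing kernel; the Gaussian kernel described below plugs into the same loop.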

 Rectangular neighborhoods O are often used with an odd number of pixels in rows and
columns, enabling the specification of the central pixel of the neighborhood.

 Pre-processing methods typically use very little a priori knowledge about the image contents. It
is very difficult to infer this knowledge while an image is processed as the known
neighborhood O of the processed pixel is small.

 The choice of the local transformation, size, and shape of the neighborhood O depends strongly
on the size of objects in the processed image. If objects are rather large, an image can be
enhanced by smoothing of small degradations.

 Image Smoothing: Image smoothing is the set of preprocessing methods whose aim is to suppress image noise – it exploits redundancy in the image data. Calculation of the new value is based on averaging of brightness values in some neighbourhood O. Smoothing poses the problem of blurring sharp edges in the image, and so we shall concentrate on smoothing methods which are edge preserving. They are based on the general idea that the average is computed only from those points in the neighbourhood which have similar properties to the processed point. Image smoothing can effectively eliminate impulsive noise or degradations appearing as thin stripes, but does not work if degradations are large blobs or thick stripes.

 Gaussian Smoothing

The Gaussian smoothing operator is a 2-D convolution operator that is used to `blur' images
and remove detail and noise. In this sense it is similar to the mean filter, but it uses a
different kernel that represents the shape of a Gaussian (`bell-shaped') hump. This kernel has
some special properties which are detailed below.

How It Works
The Gaussian distribution in 1-D has the form:

where σ is the standard deviation of the distribution. We have also assumed that the
distribution has a mean of zero (i.e. it is centered on the line x=0). The distribution is
illustrated in Figure 1.

Figure 3.1 Gaussian distribution with mean 0 and σ =1

In 2-D, an isotropic (i.e. circularly symmetric) Gaussian has the form

G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))

This distribution is shown in Figure 3.2.

Figure 3.2 Gaussian distribution with mean (0,0) and σ =1

The idea of Gaussian smoothing is to use this 2-D distribution as a `point-spread' function,
and this is achieved by convolution. Since the image is stored as a collection of discrete
pixels we need to produce a discrete approximation to the Gaussian function before we can
perform the convolution. In theory, the Gaussian distribution is non-zero everywhere, which
would require an infinitely large convolution kernel, but in practice it is effectively zero
more than about three standard deviations from the mean, and so we can truncate the kernel
at this point. Once we obtain a suitable kernel, then the Gaussian smoothing can be
performed using standard convolution methods.

In this method we have taken a 7×7 kernel as the convolution filter. Close observation of all the brain images reveals that each category of brain displays a distinct range of intensity values. This property has guided our choice of the standard deviation (σ) for each category, making it possible to adjust the level of smoothing per category.

Before Smoothing After Smoothing

Figure 3.3 Gaussian Smoothing

Guidelines for Using the Gaussian

The effect of Gaussian smoothing is to blur an image, in a similar fashion to the mean filter. The degree
of smoothing is determined by the standard deviation of the Gaussian. (Larger standard deviation
Gaussians, of course, require larger convolution kernels in order to be accurately represented.)

The Gaussian outputs a `weighted average' of each pixel's neighborhood, with the average weighted
more towards the value of the central pixels. This is in contrast to the mean filter's uniformly weighted
average. Because of this, a Gaussian provides gentler smoothing and preserves edges better than a
similarly sized mean filter.

One of the principal justifications for using the Gaussian as a smoothing filter is due to its frequency
response. Most convolution-based smoothing filters act as lowpass frequency filters. This means that
their effect is to remove high spatial frequency components from an image. The frequency response of a
convolution filter, i.e. its effect on different spatial frequencies, can be seen by taking the Fourier
transform of the filter. Figure 5 shows the frequency responses of a 1-D mean filter with width 5 and
also of a Gaussian filter with σ = 3.

Step 1. Calculate the Gaussian kernel values to create a convolution filter.

Step 2. Convolve the brain image, starting from the top-left pixel and ending at the bottom-right pixel.

Step 3. At each position, multiply the calculated Gaussian filter values element-wise with the pixel intensities under the kernel and sum the products.
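Step 1 above can be sketched as follows. This illustrative snippet (not the exact implementation used in this work) samples a 2-D isotropic Gaussian on a 7×7 grid and normalizes it so the weights sum to one; σ = 1.5 is an assumed value for demonstration.

```python
import math

def gaussian_kernel(size, sigma):
    """Sampled, normalised 2-D isotropic Gaussian kernel (size is odd)."""
    r = size // 2
    # Sample exp(-(x^2 + y^2) / (2*sigma^2)) over the grid.
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-r, r + 1)] for y in range(-r, r + 1)]
    # Normalise so that the kernel weights sum to 1 (brightness preserved).
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

k = gaussian_kernel(7, 1.5)
# The kernel sums to 1, peaks at the centre, and is symmetric.
print(round(sum(sum(row) for row in k), 6), k[3][3] > k[3][2])
```

Truncating the grid at three standard deviations, as discussed above, is what makes a finite 7×7 kernel a good approximation of the infinite Gaussian.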

 Image Enhancement

Image enhancement is the process of improving the quality of a digitally stored image by
manipulating the image and the processing of externally derived information-bearing images by
algorithms such as time compression, filtering, extraction, selection, correlation, convolution or
transformations between domains (e.g., fast Fourier transform or Walsh transform). This does
not include algorithms using only linear or rotational transformation of a single image, such as
translation, feature extraction, registration or false coloration.

Here we have used the concept of the Binary Homogeneity Enhancement Algorithm [51] for digital brain images. According to this concept, we take a threshold value (factor), i.e. the maximum allowed difference between two pixels, a constant determined by observation. We check this value against the image data by scanning horizontally from the left of the array to the right, starting from the topmost row of the brain image. If the result of any subtraction is greater than the factor, the array is divided into two equal subsets about the middle position and the first and last positions of the two subsets are pushed onto a stack. Otherwise, the mode value of the subset is propagated to all its positions after modifying the values using a uniform color quantization technique that breaks the color space into sixteen levels. The process continues recursively, checking the stack, until the rightmost pixel in the last row of the brain image is reached. The same process is then repeated by scanning the image vertically from top to bottom, followed by uniform color quantization. Algorithm for Horizontal Processing:

Step 1. Scan Image from first row from left most pixel to right most pixel.

Step 2. Check whether each pixel intensity lies within the threshold band: MDT − val < pixel intensity < MDT + val.

Step 3. If value differs then divide the array from start to end through the midpoint. Push the
Start and End Position of both halves into the Stack and Goto Step2.

Step 4. If value of consecutive pixels satisfy the condition then continue till the value differs.
Then find Mode value of all the pixels of similar intensity and replace them in the
image after performing uniform color quantization for the set of pixels.

Step 5. Check Stack. If not empty, Pop value from Stack and Goto Step2.

Step 6. Continue till the rightmost pixel of the last row is reached.

Algorithm for Vertical processing: the same process as above, replacing row-wise scanning with column-wise scanning, starting from the first column from the topmost pixel to the bottommost pixel and continuing until the bottommost pixel of the last column is reached.
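A one-row sketch of the horizontal pass might look like the following. For simplicity the homogeneity test uses the segment's intensity range rather than pairwise differences, and the sixteen-level color quantization step is omitted, so this illustrates the split-and-flood idea rather than the exact algorithm.

```python
def homogenize_row(row, max_diff):
    """Recursively split a scan line until each segment's intensity
    variation stays within max_diff, then flood each segment with its
    mode value (a 1-D sketch of the horizontal pass)."""
    row = list(row)
    stack = [(0, len(row) - 1)]          # (start, end) segments to examine
    while stack:
        start, end = stack.pop()
        seg = row[start:end + 1]
        if max(seg) - min(seg) > max_diff:
            # Inhomogeneous: split at the midpoint, push both halves.
            mid = (start + end) // 2
            stack.append((start, mid))
            stack.append((mid + 1, end))
        else:
            # Homogeneous: flood the segment with its mode value.
            mode = max(set(seg), key=seg.count)
            for i in range(start, end + 1):
                row[i] = mode
    return row

print(homogenize_row([10, 11, 10, 10, 200, 201, 200, 200], 5))
```

A dark run and a bright run each collapse to their mode, while the sharp boundary between them is preserved, which is the enhancement effect the algorithm aims for.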

before enhancement after enhancement

Figure 3.4 Image Enhancement

Chapter 4
Brain Edge Detection
4.1 Overview
This method detects edges without using one of the well-known edge detection methods [51]. In the first step we perform horizontal scanning: whenever a change of pixel intensity is observed, the pixel is marked black, indicating a horizontal edge point. We continue this process for all rows of pixel data to obtain a Horizontal Edge Map. In the next step, we scan the image vertically; continuing the process for all columns yields a Vertical Edge Map image. Finally, we merge the Horizontal Edge Map with the Vertical Edge Map by performing a logical OR operation on the two image files, to obtain the edge map of the brain image.

4.2 Algorithm
For Horizontal edge

Step 1. Scan the Image Array Horizontally from left-most pixel to right-most pixel from first row to
last row.
Step 2. Take the first pixel intensity value as a reference value.

Step 3. Compare the intensity of each subsequent pixel with the reference value. If the value is the same, continue to the next pixel.

Step 4. If the value differs, set the reference value to the new pixel intensity and mark the pixel black.

Step 5. If the last row and column pixel has not been reached, go to Step 3.

Algorithm for the Vertical Edge Map: the same process as above, replacing row-wise scanning with column-wise scanning, starting from the topmost pixel to the bottommost pixel from the first column to the last column, and continuing until the last column and row pixel is reached.

4.3 Result

Figure 4.1 Edge Detection

Chapter 5

Brain Contour Estimation


5.1 Overview
In MRI, one of the principal regions of interest is the brain. In many clinical applications, the boundary of a tumor in a head image is still traced by hand, and this manual approach becomes infeasible when dealing with large data sets. There is therefore a great need for a computerized system to perform the tumor boundary detection task. A range of methods including edge-based, region-based, knowledge-based, and combination approaches have been proposed for semiautomatic or automatic detection of various structures in the head. Recently, several attempts have also been made to apply neural network architectures and soft computing to brain image analysis. For further processing it is of utmost importance to extract the boundary of the brain, so here we propose a new method for extraction of the brain boundary.

In our proposed method, we present a simple approach to detect the brain contour automatically. We start from the output image of the edge extraction process, with the objective of identifying the outermost edge line that constitutes the edge of the brain. We scan horizontally from the leftmost pixel to the rightmost pixel, beginning at the topmost row of the brain image; if we find a pixel value equal to 0, we fill that pixel with blue color and stop scanning the rest of that row. This continues until the last row of the image is reached. Similarly, we scan horizontally from the rightmost pixel to the leftmost pixel of each row, again filling the first 0-valued pixel with blue and skipping the rest of the row, until the last row is reached. The same process is then repeated by scanning the image vertically from top to bottom, following the method for horizontal processing. Examples of applying the method to different MRI data sets are presented to show the effectiveness of our approach.

5.2 Proposed Algorithm


Horizontal scanning

Step 1: Scan Image from first row from left most pixel to right most pixel of the output image of the
edge extraction.

Step 2: Check if the value of pixel is equal to 0.

Step 3: If the value is equal to zero then set the pixel with blue color and go to the next row.

Step 4: If the last row and column pixel is not reached then goto Step2.

The same process, as mentioned above, is applied by scanning the output image of the edge extraction from the rightmost pixel to the leftmost pixel of each row, until the last row and column pixel is reached.

Algorithm for Vertical scanning: the same process is repeated by scanning the image vertically, from top to bottom and from bottom to top, following the method for the horizontal scanning process.
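The four directional scans can be sketched as follows. Marking with the value 1 stands in for filling the pixel with blue, and the tiny edge map is an illustrative assumption; note that an interior edge pixel, shielded in all four directions, is left unmarked, which is how the outermost contour is isolated.

```python
def contour_from_edges(edge, mark=1):
    """Mark the first edge pixel (0) met from each side of every row and
    from each end of every column, giving the outer contour."""
    rows, cols = len(edge), len(edge[0])
    out = [row[:] for row in edge]
    for r in range(rows):                       # left-to-right, right-to-left
        for scan in (range(cols), range(cols - 1, -1, -1)):
            for c in scan:
                if edge[r][c] == 0:
                    out[r][c] = mark
                    break                       # stop at the first edge pixel
    for c in range(cols):                       # top-to-bottom, bottom-to-top
        for scan in (range(rows), range(rows - 1, -1, -1)):
            for r in scan:
                if edge[r][c] == 0:
                    out[r][c] = mark
                    break
    return out

edge = [[255, 0, 0, 255],
        [0, 255, 0, 0],
        [255, 0, 0, 255]]
for row in contour_from_edges(edge):
    print(row)
```

The edge pixel at position (1, 2) is never the first one reached from any of the four directions, so it stays unmarked while every outer edge pixel is marked.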

5.3 Result

Figure 5.1 Contour Estimation

Chapter 6

Conclusion
6.1 Review of the contribution
In this project work, we mostly focus on exploring the brain contour from medical image. We proposed
an approach to integrate a new type of constraints. We want to detect the contour of the brain in an
automatic way. This proposed work has been applied to the segmentation of brain contour after
removing all the artifact of the MR image and well enhanced to get actual brain boundary. Results of
the proposed method show a reliable detection of brain contour , furthermore, due to its simple
procedure the method executes faster than other complicated methods. Performance evaluation in
algorithm design is a commonly neglected concept. In conclusion, it can be mentioned, our proposed
method is acceptably accurate, promising and comparable with any other standard methods.

6.2 Future work


In the future, we plan to explore the volume of the brain tumor with proper orientation. Comparison of quantitative tumor segmentation results shows that segmentation quality for enhanced tumors is better than for non-enhanced tumors because of their well-defined boundaries, so improving the method for segmenting non-enhanced tumors could still be useful. Another future direction is to use a probability map, as has been proposed for brain structures, to improve the edge detection method.

Bibliography
 Websites:

Data set. Available at http://www.osirix-viewer.com/datasets/

Adult Brain Tumor. Available at http://cancernet.nci.nih.gov/

The Brain Tumor section of Cancer Care's Web Page. Available at


http://www.cancercareinc.org/campaigns/braintumor.htm

Latino Medicine: Latino Health Profile: Health Beliefs. Available at


http://www.latinomed.com/resources/latino_profile.html#beliefs

Texas Association of Latin American Medical Students. Available at


http://www.sga.utmb.edu/talams/

American Cancer Society's Adult Brain and Spinal Cord Cancer Resource Center. Available at
http://www.cancer.org/

The Migrant Health Program. Available at http://bphc.hrsa.gov/migrant/

Brain Tumor Home Page - National Cancer Institute. (n.d.). Comprehensive Cancer Information –
National Cancer Institute. Retrieved January 10, 2011, from
http://www.cancer.gov/cancertopics/types/brain

Photograph References

(Google images for all images) Brain tumor - Google Search. (n.d.). Google. Retrieved January 10,
2011, from
http://www.google.com/images? hl=en&q=brain+tumor&wrapid=tlif12948510160191&safe=stri
ct&um=1&ie=UTF-8&source=og&sa=N&tab=wi&biw=1259&bih=818

Video References

Jessicakaltenegger. (n.d.).YouTube - National Brain Tumor Society and Brain Cancer


Awareness.YouTube - Broadcast Yourself. Retrieved January 10, 2011, from
http://www.youtube.com/watch?v=Sfeq89zV5Go

Kubikop. (n.d.).YouTube- Are Cell Phones and Wireless Technology Dangerous?YouTube-


Broadcast Yourself. Retrieved January 10, 2011, from
http://www.youtube.com/watch?v=I0WLhFX75Ss

Livestrong. (n.d.). YouTube - What Is A Brain Tumor And What Are Common
Symptoms.YouTube- Broadcast Yourself. . Retrieved January 10, 2011, from
http://www.youtube.com/watch?v=aVabCtNOWgI

Livestrong. (n.d.). YouTube- Brain Tumor Symptoms. YouTube - Broadcast Yourself. Retrieved
January 10, 2011, from
http://www.youtube.com/watch?v=SYPivXo6wjY

CBS. (n.d.).YouTube - New Brain Cancer Treatment Offers Hope.YouTube - Broadcast Yourself.
Retrieved January 10, 2011, from
http://www.youtube.com/watch?v=8ujV3uXMvZU

 Journal:

[1] Hideki Yamamoto, Katsuhiko Sugita, Noriki Kanzaki, Ikuo Johja, Yoshio Hiraki, Michiyoshi Kuwahara, "Magnetic Resonance Image Enhancement Using V-Filter", IEEE AES Magazine, June 1990.

[2] Zhu H, Francis HY, Lam FK, Poon PWF. Deformable region model for locating the boundary
of brain tumors. Proceedings of the IEEE 17th Annual Conference on Engineering in Medicine and
Biology, 1995.

[3] Chan F.H.Y., Lam F.K., Poon P.W.F., Zhu H., Chan K.H., "Object boundary location by region and contour deformation", IEE Proceedings on Vision, Image and Signal Processing, Vol. 143, No. 6, December 1996.

[4] Y. Zhu, H. Yang, Computerized tumor boundary detection using a Hopfield neural network,
IEEE Transactions on Medical Imaging, 1997.

[5] M.C. Clark, L.O. Lawrence, D.B. Golgof, R. Velthuizen, F.R. Murtagh, M.S. Silbiger,
Automatic tumor segmentation using knowledge-based techniques, IEEE Transactions on Medical
Imaging, 1998.

[6] Warfield S.K, Michael Kaus, Ferenc A. Jolesz , Ron Kikinis. Adaptive, template moderated,
spatially varying statistical classification , Science Direct on Medical Image Analysis,March 2000.

[7] Peter Lorenzen, Sarang Joshi, Guido Gerig, and Elizabeth Bullitt. Tumor-Induced Structural
and Radiometric Asymmetry in Brain Images, IEEE, 2001.

[8] Sean M. Haney, Paul M. Thompson, Timothy F. Cloughesy, Jeffry R. Alger, Arthur W. Toga, "Tracking tumor growth rates in patients with malignant gliomas: a test of two algorithms", American Society of Neuroradiology, AJNR Am J Neuroradiol, 22:73-82, January 2001.

[9] N. Moon, E. Bullitt, K.V. Leemput, G. Gerig, Model-based brain and tumor segmentation,
August, 2002.

[10] G. Moonis, J. Liu, J.K. Udupa, D.B. Hackney, Estimation of tumor volume with fuzzy-
connectedness segmentation of MR images, American Journal of Neuroradiology, 2002.

[11] S.M. Smith, Fast robust automated brain extraction, Human Brain Mapping, 2002.

[12] S. Ho, E. Bullitt, G. Gerig, Level set evolution with region competition: automatic 3D
segmentation of brain tumors, in: ICPR, Quebec, August 2002.

[13] K.W. Law, F.K. Lam, Francis H.Y. Chan, "A Fast Deformable Region Model for Brain Tumor Boundary Extraction", IEEE, Oct 23, USA, 2002.

[14] Nathan Moon, Elizabeth Bullitt, Koen Van Leemput, Guido Gerig . Automatic Brain and
Tumor Segmentation, MICCAI Preceeding,LNCS 2488(1):372-379, 2002.

[15] Amini L., Soltanian-Zadeh H., Lucas C., "Automated Segmentation of Brain Structure from MRI", Proc. Intl. Soc. Mag. Reson. Med. 11 (2003).

[16] Xin Bai, Jesse S.Jin, Dagan Feng. Segmentation Based Multilayer Diagnosis Lossless
Medical Image Compression, Australian Computer Society,Pan Sydney Area Workshop on Visual
Information Processing (VIP 2003).

[17] Jayaram K.Udupa, Punam K.Saha. Fuzzy Connectedness and Image Segmentation,
Proceedings of the IEEE,vol.91,No 10,Oct 2003.

[18] Leung C.C, Chen W.F, Kwok P.C.K , Chan F.H.Y. Brain Tumor Boundary Detection in MR
Image with Generalized Fuzzy Operator,,IEEE, 2003.

[19] Marcel Prastawa, Elizabeth Bullitt, Nathan Moon, Koen Van Leemput, Guido Gerig, "Automatic Brain Tumor Segmentation by Subject Specific Modification of Atlas Priors", Medical Image Computing, 2003.

[20] Marcel Prastawa,Elizabeth Bullitt,Scan Ho,Guido Gerig. Robust Estimation for Brain Tumor
Segmentation, USA.,2003.

[21] Lotsos.D, pyridonos.P, Petalas.P, Cavouras.D, Zolota.V, Dadioti.P, Lekka.I, Nikiforidis.G .A


Hierarchical Decision tree classification scheme for brain tumor astrocytoma grading using
Support Vector Machines,Proceedings of the 3rd international Symposium on Image and Signal
Processing and Analysis(2003).

[22] GPU-based level sets for 3D brain tumor segmentation, 2003.

[23] Alexander V. Tuzikov, O. Colliot, I. Bloch, Evaluation of the symmetry plane in 3D MR


brain images, Pattern Recognition, 2003.

[24] Jeffrey Solomon, A. Butman,Arun Sood. Data Driven Brain Tumor Segmentation in MRI
Using Probabilistic Reasoning over Space and Time, SpringerLink on Medical Image Computing,
vol 3216,2004.

[25] Ladan Amini Hamid Soltanian-Zadeh Caro Lucas. Computer Vision Segmentation of Brain
Structures from MRI Based on Deformable Contour Models.

[26] Azadeh yazdan-shahmorad,Hamid soltanian-zadeh,reza A.Zoroofi.”MRSI– Braintumor


characterization using Wavelet and Wavelet packets Feature spaces and Artificial Neural
Networks”,IEEE Transactions on EMBS,sep 1-5, 2004.

[27] M. Prastawa, E. Bullitt, S. Ho, G. Gerig, A brain tumor segmentation framework based on
outlier detection, Medical Image Analysis, 2004.

[28] A.S. Capelle, O. Colot, C. Fernandez-Maloigne, Evidential segmentation scheme of multi-


echo MR images for the detection of brain tumors using neighborhood information, Information
Fusion, 2004.

[29] M.B. Cuadra, C. Pollo, A. Bardera, O. Cuisenaire, J. Villemure, J.-P. Thiran, Atlas-based
segmentation of pathological MR brain images using a model of lesion growth, IEEE
Transactions on Medical Imaging, 2004

45
[30] Marcel Prastawa , Elizabeth Bullitt , Guido Gerig . Synthetic Ground Truth for Validation of
Brain Tumor MRI Segmentation”, SpringerLink on Medical Image Computing,vol 3749,2005.

[31] Schmidt, I. Levner, R. Greiner, A. Murtha, A. Bistritz, Segmenting brain tumors using
alignment-based features, in: IEEE Internat. Conf. on Machine learning and Applications, 2005.

[32] Daniel Goldberg-Zimring, Ion-Florin Talos, Jui G.Bhagwat, Steven J Haker, Peter M.Black,
and Kelly H.Zuo,.Statistical Validation of Brain Tumor Shape Approximation via Spherical
Harmonics for Image Guided Neurosurgery”.

[33] H. Khotanlou, J. Atif, O. Colliot, I. Bloch, 3D brain tumor segmentation using fuzzy
classification and deformable models, in: WILF, Crema, Italy, Lecture Notes in Computer
Science, Vol. 3849, Springer, Berlin, September, 2005.

[34] J. Zhou, K.L. Chan, V.F.H Chong, S.M. Krishnan, Extraction of brain tumor fromMR
images using one-class support vector machine, in: IEEE Conf. on Engineering in Medicine and
Biology, 2005.

[35] J. Corso, E. Sharon, A. Yuille, Multilevel segmentation and integrated Bayesian model
classification with an application to brain tumor segmentation, in: MICCAI2006, Copenhagen,
Denmark, Lecture Notes in Computer Science, Vol. 4191, Springer, Berlin, October 2006.

[36] Dimitris N Metaxas,Zhen Qian, Xiaolei Huang and Rui Huang,Ting Chen,Leon Axal.
Hybrid Deformable Models for Medical Segmentation and Registration, IEEE Transactions on
Medical Image processing, ICARCV ,2006.

[37] Shishir Dube, Suzie El-saden, Timothy F.Cloughesy, Usha Sinha. Content Based Image
Retrieval for MR image Studies of Brain Tumors. ,Proceedings of the 28th IEEE EMBS Annual
International Conference,NewYork City,USA,Auguest 30-September 3,2006.

[38] O. Colliot, O. Camara, I. Bloch, Integration of fuzzy spatial relations in deformable


models—application to brain MRI segmentation, Pattern Recognition, 2006.

[39] Iftekharuddin K. M, Zheng J., Islam M. A, Ogg R. J, .Brain Tumor Detection in MRI:
Technique and Statistical Validation,, Memphies,1983.

[40] Peter Lorenzen,Marcel Prastawa,Brad Davis,Guido Gerig,Elizabeth Bullitt,Sarang


Joshi.Multi- modal image set registration and atlas formation,Elsevier on Medical Image
Analysis,Vol 10,2006.

[41] Aaron Lefohn, Joshua Cates, Ross Whitaker.”Interactive GPU-Based level sets for 3D Brain
Tumor Segmentation”,April 16,2003.

[42] H. Khotanlou, O. Colliot, I. Bloch, Automatic brain tumor segmentation using symmetry
analysis and deformable models, in: Internat. Conf. on Advances in Pattern Recognition ICAPR,
Kolkata, India, January 2007

[43] Jan Luts, Arend Heerschap,Johan A.K.Suykens,Sabine Van Huffel,.A Combined MRI and
MRSI based Multiclass System for brain tumor recognition using LS-SVMs with class
probabilities and feature selection,,Elsevier,Artificial Intelligence in Medicine,40,87-102,2007.

[44] Hongmin Cai,Ragini Verma, Yangming Ou,Seung-koolee,Elias R.Melhem,Christos


Davatzaikos,:”Probabilistic Segmentation of braintumors Based on Multi-Modality Magnetic
Resonance Images”,IEEE,ISBI 2007.

46
[45] Marcel Prastawa,John Gilmore,Weili Lin,Guido Gerig. Automatic Segmentation of Neonatal
Brain MRI,USA, 2009

[46] T.Logeswari and M.Karnan , An Improved Implementation of Brain Tumor Detection Using
Segmentation Based on Hierarchical Self Organizing Map, 2010.

[47] Karteek Popuri ,· Dana Cobzas ,· Albert Murtha, Martin Jägersand. “3D variational brain
tumor segmentation using Dirichlet priors on a clustered feature set” , Int J CARS DOI
10.1007/s11548-011-0649-2, July 2011.

[48] R. Rajeswari P. Anandhakumar , Segmentation and Identification of Brain Tumor MRI


Image with Radix4 FFT Techniques , 2011

[49] Samiksha Chugh and S. Mahesh Anand , Semi Automated Tumor Segmentation from MRI
Images Using Local Statistics Based Adaptive Region Growing, January, 2012

[50] Sudipta roy and Samir Kumar Bandyopadhay, Detection and Quantification of brain tumor
from MRI of Brain and it’s symmetric analysis, 2012.

[51] Indra Kanta Maitra, Sanjay Nag and Prof. Samir K. Bandyopadhyay, Automated Digital
Mammogram Segmentation for Detection of Abnormal masses using Binary Homogeneity
Enhancement Algorithm, Indian Journal of Computer Science and Engineering (IJCSE), Jun-Jul,
2011.

47
Appendix: Implementation
This appendix presents a typical implementation of the project "Automatic Brain Contour Detection
on MRI". The implementation is based on Visual C# 2008.

Form1.cs

using System;
using System.Windows.Forms;

namespace WindowsFormsApplication1
{
    public partial class Form1 : Form
    {
        OpenFileDialog ofd = new OpenFileDialog();

        public Form1()
        {
            InitializeComponent();
        }

        // Browse for the input bitmap.
        private void button1_Click(object sender, EventArgs e)
        {
            ofd.Filter = "bmp files (*.bmp)|*.bmp|All files (*.*)|*.*";
            ofd.ShowDialog();
            textBox1.Text = ofd.FileName;
        }

        // Run the full pipeline: smoothing, quantization, edge detection, contour.
        private void button2_Click(object sender, EventArgs e)
        {
            button1.Enabled = false;
            button2.Enabled = false;
            button3.Enabled = false;

            String svName = textBox1.Text;

            // Gaussian smoothing.
            GauSmooth gs = new GauSmooth(svName);
            gs.Smooth();
            gs.save(svName);

            // Horizontal mid-point quantization.
            MidPointQuant_Mode hq = new MidPointQuant_Mode(svName + "_GS.bmp");
            hq.perform();
            hq.saveBitmap(svName + "_h");

            // Vertical mid-point quantization.
            MidPointVert_Mode vq = new MidPointVert_Mode(svName + "_h.bmp");
            vq.perform();
            vq.saveBitmap(svName + "_hv");

            // Horizontal and vertical edge maps.
            MonoToneHorzEdge he = new MonoToneHorzEdge(svName + "_hv.bmp");
            he.getEdge();
            he.saveBitmap(svName + "_hor_Ed");

            MonoToneVertEdge ve = new MonoToneVertEdge(svName + "_hv.bmp");
            ve.getEdge();
            ve.saveBitmap(svName + "_ver_Ed");

            // Combine the two edge maps.
            ImageOR io = new ImageOR(svName + "_hor_Ed.bmp");
            io.ORTo(svName + "_ver_Ed.bmp");
            io.saveBitmap(svName + "_Edge");

            // Trace the brain contour.
            Contour cn = new Contour(svName + "_Edge.bmp");
            cn.ConTour();
            cn.saveBitmap(svName + "_contour");

            MessageBox.Show("Program Executed Successfully.");

            button1.Enabled = true;
            button2.Enabled = true;
            button3.Enabled = true;
        }

        // Clear the file-name text box.
        private void button3_Click(object sender, EventArgs e)
        {
            textBox1.Text = "";
        }

        private void Form1_Load(object sender, EventArgs e)
        {
        }
    }
}
Form1.cs (Design)

Program.cs

using System;
using System.Windows.Forms;

namespace WindowsFormsApplication1
{
    static class Program
    {
        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        [STAThread]
        static void Main()
        {
            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);
            Application.Run(new Form1());
        }
    }
}

GauSmooth.cs

using System;
using System.Drawing;

// 7x7 integer approximation of a Gaussian smoothing filter. The weights sum
// to 996, and the result is normalized by GConst = 1000.
class GauSmooth
{
    Bitmap res;
    Bitmap org;
    Int32 GConst = 1000;
    Int32[,] filter = new Int32[7, 7] {
        { 0,  0,   0,   1,   0,  0, 0 },
        { 0,  1,   9,  17,   9,  1, 0 },
        { 0,  9,  57, 106,  57,  9, 0 },
        { 1, 17, 106, 196, 106, 17, 1 },
        { 0,  9,  57, 106,  57,  9, 0 },
        { 0,  1,   9,  17,   9,  1, 0 },
        { 0,  0,   0,   1,   0,  0, 0 } };

    public GauSmooth(String path)
    {
        org = new Bitmap(path);
        res = new Bitmap(org.Width, org.Height);
    }

    // Convolve the grayscale image (read from the blue channel) with the
    // filter, skipping a 3-pixel border.
    public void Smooth()
    {
        for (Int32 i = 3; i < res.Height - 3; i++)
        {
            for (Int32 j = 3; j < res.Width - 3; j++)
            {
                Int32 sum = 0;
                for (Int32 k = -3; k <= 3; k++)
                {
                    for (Int32 l = -3; l <= 3; l++)
                    {
                        Int32 gval = org.GetPixel(j + l, i + k).B;
                        Int32 fval = filter[l + 3, k + 3];
                        sum += gval * fval;
                    }
                }
                sum /= GConst;
                if (sum > 255) sum = 255;
                if (sum < 0) sum = 0;
                res.SetPixel(j, i, Color.FromArgb(255, sum, sum, sum));
            }
        }
    }

    public void save(String sname)
    {
        res.Save(sname + "_GS.bmp");
    }
}
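As a quick cross-check of the smoothing stage: the 7x7 integer kernel in GauSmooth.cs sums to 996, slightly below the normalization constant GConst = 1000, so the smoothed image is dimmed by roughly 0.4%. A small Python check (illustrative only, not part of the C# project):

```python
# The 7x7 integer kernel from GauSmooth.cs; dividing by GConst = 1000 is a
# close but not exact normalization, since the weights sum to 996.
FILTER = [
    [0,  0,   0,   1,   0,  0, 0],
    [0,  1,   9,  17,   9,  1, 0],
    [0,  9,  57, 106,  57,  9, 0],
    [1, 17, 106, 196, 106, 17, 1],
    [0,  9,  57, 106,  57,  9, 0],
    [0,  1,   9,  17,   9,  1, 0],
    [0,  0,   0,   1,   0,  0, 0],
]
total = sum(sum(row) for row in FILTER)
print(total)  # 996
```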

MidPointQuant_Mode.cs

using System;
using System.Collections;
using System.Drawing;

// Interval record used by the horizontal mid-point quantizer: a row index (ht)
// and the first (st) and last (lt) columns of the interval.
public struct locVal
{
    public Int32 ht;
    public Int32 st;
    public Int32 lt;

    public locVal(Int32 h, Int32 s, Int32 l)
    {
        ht = h;
        st = s;
        lt = l;
    }
}

// Splits every row at its midpoint until each interval is near-uniform
// (all pixels within +/- factor of the interval's first pixel), then replaces
// the interval with the mode of its quantized gray values.
class MidPointQuant_Mode
{
    Bitmap org;
    Bitmap res;
    Int32 factor;
    Stack s = new Stack();

    public MidPointQuant_Mode(string path)
    {
        org = new Bitmap(path);
        res = new Bitmap(org.Width, org.Height);
        factor = 4;
    }

    public MidPointQuant_Mode()
    {
    }

    public Bitmap Orginal
    {
        get { return org; }
        set
        {
            org = value;
            res = new Bitmap(org.Width, org.Height);
            factor = 4;
        }
    }

    public void perform()
    {
        // Process each row as one initial interval spanning the full width.
        for (Int32 i = 0; i < org.Height; i++)
        {
            s.Push(new locVal(i, 0, org.Width - 1));
            check();
        }
    }

    private void check()
    {
        Int32 h, start, last;
        locVal lv;
        if (s.Count == 0)
        {
            return;
        }
        else
        {
            lv = (locVal)s.Pop();
            h = lv.ht;
            start = lv.st;
            last = lv.lt;
            if (start == last)
            {
                calc(h, start, last);
                check();
            }
            else
            {
                Int32 val = org.GetPixel(start, h).B;
                Int32 k;
                for (k = start + 1; k <= last; k++)
                {
                    Int32 kVal = org.GetPixel(k, h).B;
                    if (kVal < val - factor || kVal > val + factor)
                    {
                        // Non-uniform interval: split at the midpoint.
                        Int32 midVal = (start + last) / 2;
                        s.Push(new locVal(h, start, midVal));
                        s.Push(new locVal(h, midVal + 1, last));
                        break;
                    }
                }
                if (k > last)
                {
                    calc(h, start, last);   // interval is uniform
                    check();
                }
                else
                    check();
            }
        }
    }

    // The interval [w, d] of row h is replaced by its quantized mode value.
    private void calc(Int32 h, Int32 w, Int32 d)
    {
        if (w == d)
        {
            Color c = org.GetPixel(w, h);
            Int32 imgVal = quant(c.B);
            Int32 a = quant(c.A);
            res.SetPixel(w, h,
                Color.FromArgb(a << 24 | imgVal << 16 | imgVal << 8 | imgVal));
        }
        else
        {
            Int32[] hist = new Int32[256 / 16];
            Byte gVal;
            for (Int32 ctr = w; ctr <= d; ctr++)
            {
                gVal = (Byte)quant(org.GetPixel(ctr, h).B);
                hist[gVal / 16] += 1;
            }
            // fVal: first occupied bin; modVal: most frequent bin.
            Byte fVal = 0, maxVal = 0, modVal = 0;
            for (Int32 ctr = 0; ctr < 256 / 16; ctr++)
            {
                if (fVal == 0 && hist[ctr] > 0)
                    fVal = (Byte)ctr;
                if (hist[ctr] > maxVal)
                {
                    maxVal = (Byte)hist[ctr];
                    modVal = (Byte)ctr;
                }
            }
            if (modVal == 0)
                gVal = (Byte)(fVal * 16);
            else
                gVal = (Byte)(modVal * 16);
            for (Int32 ctr = w; ctr <= d; ctr++)
                res.SetPixel(ctr, h, Color.FromArgb(255, gVal, gVal, gVal));
        }
    }

    public void saveBitmap(String path)
    {
        res.Save(path + ".bmp");
    }

    private Int32 quant(Int32 iVal)
    {
        return (iVal / 16) * 16;
    }
}
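The recursive mid-point logic above can be summarized in a short sketch. The following Python function is illustrative only: it checks uniformity against the interval's first pixel and replaces a uniform interval with that pixel's quantized value, rather than the full mode histogram of the C# code, but it shows the core subdivide-then-quantize idea on a single row:

```python
# Simplified 1-D sketch of MidPointQuant_Mode: split an interval at its
# midpoint until all values lie within +/- factor of the first one, then
# replace the interval with its value quantized to multiples of 16.
def midpoint_quantize(row, start, last, factor=4, out=None):
    if out is None:
        out = list(row)
    vals = row[start:last + 1]
    if all(abs(v - vals[0]) <= factor for v in vals):
        q = (vals[0] // 16) * 16          # quantize to 16-level steps
        for i in range(start, last + 1):
            out[i] = q
        return out
    mid = (start + last) // 2             # non-uniform: split at the midpoint
    midpoint_quantize(row, start, mid, factor, out)
    midpoint_quantize(row, mid + 1, last, factor, out)
    return out

print(midpoint_quantize([10, 12, 11, 200, 201, 199, 198, 12], 0, 7))
# [0, 0, 0, 192, 192, 192, 192, 0]
```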

MidPointVert_Mode.cs

using System;
using System.Collections;
using System.Drawing;

// Interval record used by the vertical mid-point quantizer: a column index
// (wd) and the first (st) and last (lt) rows of the interval.
public struct VerVal
{
    public Int32 wd;
    public Int32 st;
    public Int32 lt;

    public VerVal(Int32 w, Int32 s, Int32 l)
    {
        wd = w;
        st = s;
        lt = l;
    }
}

// Column-wise counterpart of MidPointQuant_Mode: splits every column at its
// midpoint until each interval is near-uniform, then replaces the interval
// with the mode of its quantized gray values.
class MidPointVert_Mode
{
    Bitmap org;
    Bitmap res;
    Int32 factor;
    Stack s = new Stack();

    public MidPointVert_Mode(string path)
    {
        org = new Bitmap(path);
        res = new Bitmap(org.Width, org.Height);
        factor = 4;
    }

    public MidPointVert_Mode()
    {
    }

    public Bitmap Orginal
    {
        get { return org; }
        set
        {
            org = value;
            res = new Bitmap(org.Width, org.Height);
            factor = 4;
        }
    }

    public void perform()
    {
        // Process each column as one initial interval spanning the full height.
        for (Int32 i = 0; i < org.Width; i++)
        {
            s.Push(new VerVal(i, 0, org.Height - 1));
            check();
        }
    }

    private void check()
    {
        Int32 w, start, last;
        VerVal vv;
        if (s.Count == 0)
        {
            return;
        }
        else
        {
            vv = (VerVal)s.Pop();
            w = vv.wd;
            start = vv.st;
            last = vv.lt;
            if (start == last)
            {
                calc(w, start, last);
                check();
            }
            else
            {
                Int32 val = org.GetPixel(w, start).B;
                Int32 k;
                for (k = start + 1; k <= last; k++)
                {
                    Int32 kVal = org.GetPixel(w, k).B;
                    if (kVal < val - factor || kVal > val + factor)
                    {
                        // Non-uniform interval: split at the midpoint.
                        Int32 midVal = (start + last) / 2;
                        s.Push(new VerVal(w, start, midVal));
                        s.Push(new VerVal(w, midVal + 1, last));
                        break;
                    }
                }
                if (k > last)
                {
                    calc(w, start, last);   // interval is uniform
                    check();
                }
                else
                    check();
            }
        }
    }

    // The interval [h, d] of column w is replaced by its quantized mode value.
    private void calc(Int32 w, Int32 h, Int32 d)
    {
        if (h == d)
        {
            Color c = org.GetPixel(w, h);
            Int32 imgVal = quant(c.B);
            Int32 a = quant(c.A);
            res.SetPixel(w, h,
                Color.FromArgb(a << 24 | imgVal << 16 | imgVal << 8 | imgVal));
        }
        else
        {
            Int32[] hist = new Int32[256 / 16];
            Byte gVal;
            for (Int32 ctr = h; ctr <= d; ctr++)
            {
                gVal = (Byte)quant(org.GetPixel(w, ctr).B);
                hist[gVal / 16] += 1;
            }
            // fVal: first occupied bin; modVal: most frequent bin.
            Byte fVal = 0, maxVal = 0, modVal = 0;
            for (Int32 ctr = 0; ctr < 256 / 16; ctr++)
            {
                if (fVal == 0 && hist[ctr] > 0)
                    fVal = (Byte)ctr;
                if (hist[ctr] > maxVal)
                {
                    maxVal = (Byte)hist[ctr];
                    modVal = (Byte)ctr;
                }
            }
            if (modVal == 0)
                gVal = (Byte)(fVal * 16);
            else
                gVal = (Byte)(modVal * 16);
            for (Int32 ctr = h; ctr <= d; ctr++)
                res.SetPixel(w, ctr, Color.FromArgb(255, gVal, gVal, gVal));
        }
    }

    public void saveBitmap(String path)
    {
        res.Save(path + ".bmp");
    }

    private Int32 quant(Int32 iVal)
    {
        return (iVal / 16) * 16;
    }
}

MonoToneHorzEdge.cs

using System;
using System.Drawing;

// Marks, in each row, the last pixel of every run of constant gray value as
// a horizontal edge pixel; runs touching the right border are skipped.
class MonoToneHorzEdge
{
    Bitmap org;
    Bitmap res;

    public MonoToneHorzEdge(string path)
    {
        org = new Bitmap(path);
        res = new Bitmap(org.Width, org.Height);
        blankImg();
    }

    public MonoToneHorzEdge()
    {
    }

    public Bitmap Orginal
    {
        get { return org; }
        set
        {
            org = value;
            res = new Bitmap(org.Width, org.Height);
        }
    }

    public void getEdge()
    {
        for (Int32 i = 0; i < org.Height; i++)
        {
            for (Int32 j = 0; j < org.Width; )
            {
                Int32 val = org.GetPixel(j, i).B;
                Int32 k;
                for (k = j + 1; k < org.Width; k++)
                {
                    Int32 kVal = org.GetPixel(k, i).B;
                    if (kVal != val)
                        break;
                }
                drawEdge(i, k - 1);   // k - 1 is the last pixel of the run
                j = k;
            }
        }
    }

    private void drawEdge(Int32 h, Int32 w)
    {
        Byte imgVal = 0;
        if (w != org.Width - 1)
            res.SetPixel(w, h, Color.FromArgb(255, imgVal, imgVal, imgVal));
    }

    public void saveBitmap(String path)
    {
        res.Save(path + ".bmp");
    }

    private void blankImg()
    {
        // Start from an all-white image.
        Int32 c = 255;
        for (Int32 i = 0; i < res.Height; i++)
            for (Int32 j = 0; j < res.Width; j++)
                res.SetPixel(j, i, Color.FromArgb(c, c, c, c));
    }
}
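The run-boundary rule implemented by getEdge above can be restated compactly. This Python sketch (illustrative, not part of the project) marks the last pixel of each constant-value run in a row, skipping runs that reach the right border:

```python
# Sketch of MonoToneHorzEdge: within each row of the quantized image, the
# last pixel of every constant-value run becomes a horizontal edge pixel (0);
# runs that extend to the image border are not marked.
def horizontal_edges(img):
    h, w = len(img), len(img[0])
    edges = [[255] * w for _ in range(h)]
    for y in range(h):
        x = 0
        while x < w:
            k = x + 1
            while k < w and img[y][k] == img[y][x]:
                k += 1                 # extend the run of equal values
            if k - 1 != w - 1:
                edges[y][k - 1] = 0    # mark the run's last pixel as an edge
            x = k
    return edges

print(horizontal_edges([[5, 5, 9, 9, 9]]))  # [[255, 0, 255, 255, 255]]
```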

MonoToneVertEdge.cs

using System;
using System.Drawing;

// Column-wise counterpart of MonoToneHorzEdge: marks, in each column, the
// last pixel of every run of constant gray value as a vertical edge pixel;
// runs touching the bottom border are skipped.
class MonoToneVertEdge
{
    Bitmap org;
    Bitmap res;

    public MonoToneVertEdge(string path)
    {
        org = new Bitmap(path);
        res = new Bitmap(org.Width, org.Height);
        blankImg();
    }

    public MonoToneVertEdge()
    {
    }

    public Bitmap Orginal
    {
        get { return org; }
        set
        {
            org = value;
            res = new Bitmap(org.Width, org.Height);
        }
    }

    public void getEdge()
    {
        for (Int32 i = 0; i < org.Width; i++)
        {
            for (Int32 j = 0; j < org.Height; )
            {
                Int32 val = org.GetPixel(i, j).B;
                Int32 k;
                for (k = j + 1; k < org.Height; k++)
                {
                    Int32 kVal = org.GetPixel(i, k).B;
                    if (kVal != val)
                        break;
                }
                drawEdge(i, k - 1);   // k - 1 is the last pixel of the run
                j = k;
            }
        }
    }

    private void drawEdge(Int32 w, Int32 h)
    {
        Byte imgVal = 0;
        if (h != org.Height - 1)
            res.SetPixel(w, h, Color.FromArgb(255, imgVal, imgVal, imgVal));
    }

    public void saveBitmap(String path)
    {
        res.Save(path + ".bmp");
    }

    private void blankImg()
    {
        // Start from an all-white image.
        Int32 c = 255;
        for (Int32 i = 0; i < res.Height; i++)
            for (Int32 j = 0; j < res.Width; j++)
                res.SetPixel(j, i, Color.FromArgb(c, c, c, c));
    }
}

ImageOR.cs

using System;
using System.Drawing;

// Combines two edge maps (black = edge) and post-processes the result so the
// contour is closed along the left reference column and the bottom row.
class ImageOR
{
    Bitmap baseImg;
    Bitmap resImg;

    public ImageOR(string path)
    {
        baseImg = new Bitmap(path);
    }

    public void ORTo(string compPath)
    {
        Bitmap compImg = new Bitmap(compPath);
        Int32 res_width;
        Int32 res_Height;
        if (baseImg.Width != compImg.Width || baseImg.Height != compImg.Height)
        {
            // Result takes the larger of the two sizes in each dimension.
            res_width = (baseImg.Width > compImg.Width) ? baseImg.Width : compImg.Width;
            res_Height = (baseImg.Height > compImg.Height) ? baseImg.Height : compImg.Height;
        }
        else
        {
            res_width = baseImg.Width;
            res_Height = baseImg.Height;
        }
        resImg = new Bitmap(res_width, res_Height);

        // A pixel stays white only if it is white in both maps; otherwise it
        // becomes black (an OR over the edge pixels).
        for (Int32 i = 0; i < res_Height; i++)
        {
            for (Int32 j = 0; j < res_width; j++)
            {
                Byte bVal;
                Byte cVal;
                try
                {
                    bVal = baseImg.GetPixel(j, i).B;
                }
                catch (Exception)
                {
                    bVal = 255;   // outside the smaller image: treat as white
                }
                try
                {
                    cVal = compImg.GetPixel(j, i).B;
                }
                catch (Exception)
                {
                    cVal = 255;
                }
                if (bVal == 255 && cVal == 255)
                    resImg.SetPixel(j, i, Color.FromArgb(255, 255, 255, 255));
                else
                    resImg.SetPixel(j, i, Color.FromArgb(255, 0, 0, 0));
            }
        }

        // Find the column just before the first black run on the top row, use
        // it as a left reference: draw a vertical line there and clear
        // everything to its left.
        Int32 HorzRef = 0;
        for (Int32 j = 0; j < resImg.Width; j++)
        {
            if (resImg.GetPixel(j, 0).B == 0)
            {
                while (resImg.GetPixel(j, 0).B == 0)
                {
                    j++;
                }
                HorzRef = j - 1;
                break;
            }
        }
        for (Int32 k = 0; k < resImg.Height; k++)
        {
            resImg.SetPixel(HorzRef, k, Color.FromArgb(255, 0, 0, 0));
            for (Int32 p = 0; p < HorzRef; p++)
                resImg.SetPixel(p, k, Color.FromArgb(255, 255, 255, 255));
        }

        // Close the contour along the bottom row between the leftmost and
        // rightmost black pixels to the right of the reference column.
        Int32 rwd = 0;
        Int32 lwd = HorzRef;
        for (Int32 k = resImg.Width - 1; k > HorzRef; k--)
        {
            if (resImg.GetPixel(k, resImg.Height - 1).B == 0)
            {
                rwd = k;
                break;
            }
        }
        if (HorzRef != rwd)
        {
            for (Int32 l = HorzRef + 1; l < rwd; l++)
            {
                if (resImg.GetPixel(l, resImg.Height - 1).B == 0)
                {
                    lwd = l;
                    break;
                }
            }
            for (Int32 l = lwd; l <= rwd; l++)
                resImg.SetPixel(l, resImg.Height - 1, Color.FromArgb(255, 0, 0, 0));
        }
    }

    public void saveBitmap(String path)
    {
        resImg.Save(path + ".bmp");
    }
}
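Despite its name, the pixel-combination step in ORTo is an OR over edge pixels (equivalently, an AND over the white background): a result pixel is white only when both inputs are white. A minimal Python restatement of just that combination rule, with the left/bottom closing post-processing omitted:

```python
# Sketch of the map combination in ImageOR.cs: with black (0) marking edges,
# a pixel is an edge in the combined map if it is an edge in either input.
def combine_edges(a, b):
    return [[0 if (pa == 0 or pb == 0) else 255
             for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

print(combine_edges([[0, 255]], [[255, 255]]))  # [[0, 255]]
```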

Contour.cs

using System;
using System.Drawing;

// Traces the brain contour by marking, from each of the four image borders,
// the first edge pixel (black) encountered along every row and column.
// Note: the original listing used Height bounds for column indices, which
// only works for square images; the loops below use Width for columns and
// Height for rows.
class Contour
{
    Bitmap res;
    Bitmap org;

    public Contour(String path)
    {
        org = new Bitmap(path);
        res = new Bitmap(org.Width, org.Height);
    }

    public void ConTour()
    {
        blankImg();

        // For every column, mark the first edge pixel found from the top.
        for (Int32 j = org.Width - 1; j > 1; j--)
        {
            for (Int32 i = 1; i < org.Height; i++)
            {
                if (org.GetPixel(j, i).B == 0)
                {
                    res.SetPixel(j, i, Color.Blue);
                    break;
                }
            }
        }

        // For every column, mark the first edge pixel found from the bottom.
        for (Int32 j = 1; j < org.Width; j++)
        {
            for (Int32 i = org.Height - 1; i > 1; i--)
            {
                if (org.GetPixel(j, i).B == 0)
                {
                    res.SetPixel(j, i, Color.Blue);
                    break;
                }
            }
        }

        // For every row, mark the first edge pixel found from the left.
        for (Int32 i = 1; i < org.Height; i++)
        {
            for (Int32 j = 1; j < org.Width; j++)
            {
                if (org.GetPixel(j, i).B == 0)
                {
                    res.SetPixel(j, i, Color.Blue);
                    break;
                }
            }
        }

        // For every row, mark the first edge pixel found from the right.
        for (Int32 i = org.Height - 1; i > 1; i--)
        {
            for (Int32 j = org.Width - 1; j > 1; j--)
            {
                if (org.GetPixel(j, i).B == 0)
                {
                    res.SetPixel(j, i, Color.Blue);
                    break;
                }
            }
        }
    }

    public void saveBitmap(String path)
    {
        res.Save(path + ".bmp");
    }

    private void blankImg()
    {
        // Start from an all-white image.
        Int32 c = 255;
        for (Int32 i = 0; i < res.Height; i++)
            for (Int32 j = 0; j < res.Width; j++)
                res.SetPixel(j, i, Color.FromArgb(c, c, c, c));
    }
}
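The four scans in ConTour amount to marking, from each border, the first edge pixel met along every row and column. An illustrative Python sketch of the same idea (not part of the C# project):

```python
# Sketch of Contour.cs: scan each row from both sides and each column from
# both ends; the first edge pixel (0) seen from each direction belongs to
# the contour. Background and unmarked pixels stay 255.
def trace_contour(edge):
    """edge: 2-D list where 0 marks an edge pixel and 255 the background."""
    h, w = len(edge), len(edge[0])
    contour = [[255] * w for _ in range(h)]

    def mark_first(cells):
        # Mark the first edge pixel encountered along one scan direction.
        for y, x in cells:
            if edge[y][x] == 0:
                contour[y][x] = 0
                break

    for y in range(h):
        mark_first((y, x) for x in range(w))            # row, left to right
        mark_first((y, x) for x in reversed(range(w)))  # row, right to left
    for x in range(w):
        mark_first((y, x) for y in range(h))            # column, top down
        mark_first((y, x) for y in reversed(range(h)))  # column, bottom up
    return contour
```

Applied to a filled square of edge pixels, only the square's boundary ring is marked; interior pixels are never the first hit from any direction.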

Output:

Set I:

Original image After Smoothing After Horizontal Enhancement

After Vertical Enhancement Horizontal Edge map Vertical Edge map

Edge detection Contour Estimation

Set II:

Original image After Smoothing After Horizontal Enhancement

After Vertical Enhancement Horizontal Edge map Vertical Edge map

Edge detection Contour Estimation

Set III:

Original image After Smoothing After Horizontal Enhancement

After Vertical Enhancement Horizontal Edge map Vertical Edge map

Edge detection Contour Estimation
