Contents
1. What is signal and system?
1.1 Representation of continuous-time (analog) sinusoidal signal
1.1.1 Properties of Analog Signal
1.2 Representation of discrete-time (digital) sinusoidal signal
1.2.1 Properties of discrete-time (digital) sinusoidal signal
2. Signal Processing
2.1 Basic Elements of a Signal Processing System
2.1.1 Analog Signal Processing
2.1.2 Digital Signal Processing
3. Images and Pictures
3.1 Types of Images
3.2 Image Processing
3.2.1 Analog image processing
3.2.2 Digital image processing
3.2.3 INDEX TERMS
3.3 ANALYSIS of Image Processing
3.4 MODELING
APPLICATIONS
ADVANTAGES
DISADVANTAGES
4. MEDICAL IMAGING
4.1 Medical imaging modalities
5. Overview of implicit active contours
5.1 Choice of the speed function
5.2 The placement of the initial contour
5.3 Calculation of the distance function
5.4 The discretization of the level set equation
5.5 Level Set Method
References
1. What is signal and system?
Signal: A signal is a physical quantity that varies with time, space or any other independent
variable by which information can be conveyed. A signal is an electric current or
electromagnetic field used to convey data from one place to another.
There are two types of signal: 1). Analog Signal 2). Digital Signal
Continuous-time signals or Analog Signals: A signal that varies continuously with time is called a continuous-time signal. Such signals are defined for every value of the independent variable, namely time. For example, speech signals and the temperature of a room are continuous-time signals [1].
Discrete-time Signals or Digital Signals: Discrete-time signals are signals which are defined only at discrete times, and are represented by sequences of numbers. For example, a rail traffic signal is a discrete-time signal. Discrete-time signals can be obtained by periodic sampling and quantization of continuous-time signals [1].
Discrete-Valued Signals: If a signal takes on values from a finite set of possible values, it is said to be a discrete-valued signal. Usually these values are equidistant and hence can be expressed as an integer multiple of the distance between two successive values.
The subscript a used with x(t), written x_a(t), denotes an analog signal. Such a signal is completely characterized by three parameters: A, the amplitude of the sinusoid; Ω, the frequency; and θ, the phase. So the relation becomes

x_a(t) = A cos(Ωt + θ),  −∞ < t < ∞
System: A system may be defined as a physical device that performs an operation on a signal; that is, any physical set of components that takes a signal as input and produces a signal as output.
2. Signal Processing:
Signal processing is the science of analysing, synthesizing, sampling, encoding, transforming,
decoding, enhancing, transporting, archiving, and in general manipulating signals in some
way [7].
2.1 Basic Elements of a Signal Processing System:
2.1.1 Analog Signal Processing:
Generally, most of the signals encountered in science and engineering are analog in nature. This means the signals are functions of a continuous variable, such as time or space, and usually take values in a continuous range. Such signals may be processed directly by selecting an appropriate analog system with the objective of changing their characteristics. In this case we say that the signal has been processed directly in its analog form; the input signal as well as the output signal are in analog form [8].
2.1.2 Digital Signal Processing:
Digital signal processing is an alternative method for processing an analog signal, as shown below. To perform the processing digitally, there must be an interface between the analog signal and the digital processor; this interface is called an analog-to-digital converter. The digital signal processor may be either a large programmable digital computer or a small microprocessor programmed to perform the desired operations on the digital input signal [9].
To perform the processing digitally, an interface known as an analog-to-digital (A/D) converter is placed between the analog signal and the digital signal processor. The output of the A/D converter is a digital signal which serves as the input to the digital processor.
The digital signal processor may be a large programmable digital computer or a small microprocessor programmed to perform the desired operations on the input signal. It can also be a hardwired digital processor configured to perform a specified set of operations on the input signal. Programmable machines allow the signal processing operations to be changed through a change in the software, whereas hardwired machines are difficult to reconfigure.
In some cases, where the digital output from the digital signal processor is to be given to the user in analog form, as in speech communications, we must use another interface known as a digital-to-analog (D/A) converter, which converts the signal from the digital domain back to the analog domain; thus the signal is provided to the user in analog form [9]. In some signal analysis applications, however, the desired information may be used directly in digital form; in such cases, the D/A converter is not necessary.
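As an illustrative sketch (mine, not from the source), the A/D and D/A stages described above can be chained in a few lines of Python; the helper names adc, dac and analog are hypothetical:

```python
import math

def adc(signal, fs, n_samples, step):
    """Sample an analog signal at rate fs, then quantize each sample
    to an integer number of quantization steps (the digital codes)."""
    return [round(signal(n / fs) / step) for n in range(n_samples)]

def dac(codes, step):
    """Map each digital code back to an analog level (D/A conversion)."""
    return [c * step for c in codes]

def analog(t):
    """A 5 Hz sinusoid standing in for an arbitrary analog input."""
    return math.sin(2 * math.pi * 5 * t)

codes = adc(analog, fs=100.0, n_samples=8, step=0.1)   # digital domain
rebuilt = dac(codes, step=0.1)                         # back to analog levels
```

Each rebuilt level differs from the true analog value by at most half a quantization step, which is exactly the quantization error discussed later in this section.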
Block diagram: continuous-time signal x_a(t) → discrete-time signal x(nT) → quantized signal x_q(n) → digital data (e.g. 011001100…).
Sampling: This is the conversion of a continuous-time signal into a discrete-time signal by taking samples of the continuous-time signal at discrete time instants. In other words, sampling is the reduction of a continuous signal to a discrete signal. A common example is the conversion of a sound wave (a continuous signal) to a sequence of samples (a discrete-time signal) [12].
Fig.7 : The continuous signal is represented with a green colored line while the
discrete samples are indicated by the blue vertical lines [13].
A sample is a value or set of values at a point in time and/or space. A sampler is a
subsystem or operation that extracts samples from a continuous signal. A theoretical
ideal sampler produces samples equivalent to the instantaneous value of the
continuous signal at the desired points.
In practice, the continuous signal is sampled using an analog-to-digital converter (ADC), a device with various physical limitations. This results in deviations from the theoretically perfect reconstruction, collectively referred to as distortion.
Sampling Theorem: If the highest frequency contained in an analog signal y(t) is Fmax = B and the signal is sampled at a rate Fs > 2Fmax = 2B, then y(t) can be exactly recovered from its sample values using the interpolation function

g(t) = sin(2πBt) / (2πBt)    (5)

so that

y(t) = Σn y(n/Fs) g(t − n/Fs)    (6)

where the sum runs over all integers n.
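As a hedged illustration of equations (5) and (6) (my example, not from the source), the interpolation can be carried out numerically; the function name reconstruct is hypothetical and the sum is truncated to the available samples, so the result is only approximate away from the sample window:

```python
import math

def reconstruct(samples, fs, t):
    """Approximate y(t) from samples y(n/fs) by the (truncated)
    interpolation sum of Eq. (6), with g(t) as in Eq. (5)."""
    total = 0.0
    for n, yn in enumerate(samples):
        x = t - n / fs
        # g(x) = sin(2*pi*B*x) / (2*pi*B*x) with B = fs/2, and g(0) = 1
        arg = math.pi * fs * x
        g = 1.0 if arg == 0.0 else math.sin(arg) / arg
        total += yn * g
    return total

# Sample a 5 Hz sinusoid at fs = 100 Hz (well above the 10 Hz Nyquist rate)
fs, f0 = 100.0, 5.0
samples = [math.sin(2 * math.pi * f0 * n / fs) for n in range(200)]
between = reconstruct(samples, fs, 0.505)   # a point between sample instants
exact = math.sin(2 * math.pi * f0 * 0.505)
```

At a sample instant the sum reduces to that sample exactly; between instants the truncated sum closely tracks the original sinusoid.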
Quantization: This is the conversion of a discrete-time, continuous-valued signal into a discrete-time, discrete-valued signal. The value of each signal sample is represented by a value selected from a finite set of possible values.
Quantization error: In analog-to-digital conversion, the difference between the actual analog value (the non-quantized sample) and the quantized digital value is called the quantization error or quantization distortion [14].
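As a small illustration (mine, not the source's), a uniform quantizer and its error can be written in a few lines:

```python
def quantize(x, step):
    """Uniform quantizer: round the sample to the nearest integer
    multiple of the step size."""
    return round(x / step) * step

step = 0.25
samples = [0.11, 0.62, -0.48, 0.97]
quantized = [quantize(s, step) for s in samples]
errors = [q - s for q, s in zip(quantized, samples)]
# With rounding, the quantization error never exceeds half a step
```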
Encoding: Encoding assigns a sequence of binary digits to each level of the quantized signal. There are a variety of ways to do this, called coding techniques. All of these techniques aim at reducing the redundancy of the data, where redundancy is the repetition of data, which increases the length of the code and ultimately the memory required. If we remove the redundant data, we can represent the same data with fewer bits. Some common coding methods are Huffman coding, Shannon-Fano coding and arithmetic coding. After encoding, the entire analog signal can be represented as a sequence of binary bits, which is the signal in digital form [15].
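As a hedged sketch of one of the coding methods named above, the following builds a Huffman code in plain Python (an illustrative implementation, not the document's; the function name huffman_codes is mine). Frequent symbols receive shorter bit strings, so the encoded stream carries less redundancy:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Return a {symbol: bit-string} Huffman code for the data."""
    freq = Counter(data)
    # Heap entries: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        # Prefix '0' to one subtree's codes and '1' to the other's
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
encoded = "".join(codes[s] for s in "aaaabbc")   # 10 bits instead of 7 bytes
```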
Improvement of Accuracy in ADC
Two important methods are used for improving accuracy in an ADC: increasing the resolution and increasing the sampling rate. This is shown in the figure below (fig.8).
A digital-to-analog converter (DAC) is a device for converting a digital (usually binary) code to an analog signal (a current, voltage or charge). DACs are the interface between the abstract digital world and analog real life. Simple switches, a network of resistors, current sources or capacitors may implement this conversion. An ADC performs the reverse operation [15].
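A minimal sketch of an ideal DAC (my illustration; the function name dac is hypothetical): the binary code is read as an unsigned integer and scaled by the reference voltage:

```python
def dac(bits, v_ref):
    """Ideal N-bit DAC: interpret the bit string as an unsigned
    integer and scale it so that one LSB equals v_ref / 2**N."""
    n = len(bits)
    code = int(bits, 2)
    return code * v_ref / (2 ** n)

# A 4-bit DAC with a 5 V reference: one LSB = 5/16 = 0.3125 V
out = dac("1010", 5.0)   # code 10 -> 3.125 V
```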
Fig.8 Digital to analog conversion [16]
3. Images and Pictures:
Human beings are predominantly visual creatures. We not only look at things to identify and
classify them, but we can scan for differences, and obtain an overall rough feeling for a scene
with a quick glance.
Humans have evolved very precise visual skills: we can identify a face in an instant; we can
differentiate colours; we can process a large amount of visual information very quickly.
3.1 Types of Images:
(ii) Greyscale: Each pixel is a shade of grey, normally from 0 (black) to 255 (white). This range means that each pixel can be represented by eight bits [17].
(iii) RGB (or True-colour) Images: Here each pixel has a particular colour, described by the amounts of red, green and blue in it. If each of these components has a range 0 to 255, this gives a total of 256³ = 16,777,216 different possible colours in the image, which is enough for any image. Since the total number of bits required for each pixel is 24, such images are also called 24-bit colour images. Such an image may be considered as a stack of three matrices, representing the red, green and blue values for each pixel; that is, for every pixel there correspond three values [17].
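The 24-bit layout can be illustrated with a little bit arithmetic (my sketch, not from the source): eight bits each for red, green and blue packed into one pixel value:

```python
def pack_rgb(r, g, b):
    """Pack three 8-bit components into one 24-bit pixel value."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(pixel):
    """Split a 24-bit pixel back into its red, green and blue parts."""
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

pixel = pack_rgb(200, 100, 50)   # one pixel, three 8-bit values
```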
(iv) Indexed: Most colour images use only a small subset of the more than sixteen million possible colours. For convenience of storage and file handling, the image has an associated colour map, or colour palette, which is simply a list of all the colours used in that image. Each pixel has a value which does not give its colour directly (as for an RGB image), but an index into the colour map. This is convenient if an image has 256 colours or fewer, for then the index values each require only one byte to store [17].
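A small sketch (mine, not the source's) of how an indexed image separates palette from pixels; the helper name to_indexed is hypothetical:

```python
def to_indexed(pixels):
    """Convert a list of RGB pixels into a colour map (palette)
    plus one palette index per pixel."""
    palette = []
    lookup = {}    # colour -> index, so repeated colours are found fast
    indices = []
    for colour in pixels:
        if colour not in lookup:
            lookup[colour] = len(palette)
            palette.append(colour)
        indices.append(lookup[colour])
    return palette, indices

pixels = [(255, 0, 0), (0, 0, 255), (255, 0, 0), (255, 0, 0)]
palette, indices = to_indexed(pixels)
# Four 3-byte pixels become a 2-entry palette plus four 1-byte indices
```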
3.2 Image Processing
Image processing pertains to the alteration and analysis of pictorial information. A common example of image processing is the adjustment of the brightness and contrast controls on a television set; by doing this we enhance the image until its subjective appearance is most appealing to us. The biological system (eye and brain) receives, enhances, dissects, analyzes and stores images at enormous speed.
Basically, there are two methods for processing pictorial information. They are:
I. Optical processing
II. Electronic processing.
Optical processing uses an arrangement of optics or lenses to carry out the process. An
important form of optical image processing is found in the photographic dark room.
Electronic image processing is further classified as:
(i). Analog processing
(ii). Digital processing.
3.2.1 Analog image processing:
An example of analog image processing is the control of brightness and contrast of a television image. The television signal is a voltage level that varies in amplitude to represent brightness throughout the image; by electrically altering these signals, we correspondingly alter the final displayed image [9].
3.2.2 Digital image processing:
The first computer-generated digital images were produced in the early 1960s, alongside the development of the space program and of medical research. A digital image is a representation of a two-dimensional image using ones and zeros (binary). Depending on whether or not the image resolution is fixed, it may be of vector or raster type. Without qualification, the term "digital image" usually refers to raster images, also called bitmap images.
Fig. 9: Analog and Digital image [18]
Digital image processing is the use of computer algorithms to perform image processing on
digital images. As a subcategory or field of digital signal processing, digital image processing
has many advantages over analog image processing. It allows a much wider range of
algorithms to be applied to the input data and can avoid problems such as the build-up of
noise and signal distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems [9]. Digital image processing allows the use of much more complex algorithms for image processing, and hence can offer both more sophisticated performance at simple tasks and the implementation of methods which would be impossible by analog means. In particular, digital image processing is the only practical technology for:
Classification
Feature extraction
Pattern recognition
Projection
Multi-scale signal analysis
Digital image processing refers to the processing of digital images by means of a digital computer. Digital images are composed of a finite number of elements, each of which has a particular location and value; these elements are referred to as picture elements, image elements, or pixels [9].
An image may exist in print form or in a digital form. Technically, an image is a two-dimensional light-intensity function; in other words, it is a set of intensity values arranged in a two-dimensional form, and the required properties of an image can be extracted by processing it. An image is typically described by stochastic models: the image itself is represented by an autoregressive (AR) model, and degradation is represented by a moving-average (MA) model.
Another form is the orthogonal series expansion. An image processing system is typically a non-causal system, and image processing is two-dimensional signal processing. Due to the linearity property, we can operate on rows and columns separately. Image processing is widely implemented in the "vision systems" of robotics: robots are designed, and meant, to be controlled by a computer or similar device, while vision systems are the most sophisticated sensors used in robotics. They relate the function of a robot to its environment, as all other sensors do.
3.2.3 INDEX TERMS
A. IMAGE ENHANCEMENT:
Image processing is a subclass of signal processing concerned specifically with pictures. Image enhancement improves image quality for human perception and/or computer interpretation.
B. IMAGE RESTORATION:
Techniques for improving the appearance of an image that tend to be based on mathematical or probabilistic models of image degradation.
D. WAVELETS:
The foundation for representing images at various degrees of resolution. Wavelets are used in image data compression and in pyramidal representation, in which images are subdivided successively into smaller regions [21].
E. COMPRESSION
F. MORPHOLOGICAL PROCESSING
Tools for extracting image components that are useful in the representation and
description of shape [21].
Fig. 12 MORPHOLOGICAL PROCESSING [21]
G. IMAGE SEGMENTATION:
The main purpose of medical image segmentation is to find a collection of highly correlated parts or tissues, so that computers or doctors can do further analysis on each part. Because of certain properties of medical images, such as low contrast, noise and diffuse boundaries, medical image segmentation often faces difficult challenges. Image segmentation is a traditional issue in image processing; there has been much research on this topic, and many methods have been proposed, such as zero crossing, thresholding, region-based segmentation, watershed, and the level set method. Some of these methods are gradient-based and are vulnerable to weak edges; others are intensity-based and are vulnerable to noise. Medical images have both of these properties, which we need to overcome. In this paper, we use the level set method, incorporating some additional mechanisms, to perform medical image segmentation because of its advantages, which will be introduced in the next section [20].
3.3 ANALYSIS of Image Processing
The following is the overall view and analysis of Image Processing.
Flowchart: image acquisition → preprocessing → segmentation → representation and description → a solution, with all stages supported by a knowledge base.
I. IMAGE ACQUISITION:
II. RECOGNITION AND INTERPRETATION:
Recognition is the process that assigns a label to an object based on the information provided by its descriptors. Interpretation assigns meaning to an ensemble of recognized objects.
III. SEGMENTATION:
Segmentation is the generic name for a number of different techniques that divide the
image into segments of its constituents. The purpose of segmentation is to separate the
information contained in the image into smaller entities that can be used for other purposes
[21].
IV. REPRESENTATION AND DESCRIPTION:
Representation and description transform raw data into a form suitable for the recognition processing [21].
V. KNOWLEDGE BASE:
A problem domain detailing the regions of an image where the information of interest
is known to be located is known as knowledge base. It helps to limit the search [21].
VI. THRESHOLDING:
Thresholding is the process of dividing an image into different portions by picking a certain greyness level as a threshold, comparing each pixel value with the threshold, and then assigning the pixel to one of the portions, depending on whether the pixel's greyness level is below or above the threshold value. Thresholding can be performed either at a single level or at multiple levels, in which case the image is processed by dividing it into "layers", each with a selected threshold [21].
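Single- and multi-level thresholding can be sketched as follows (an illustrative example of mine; the function name threshold is hypothetical):

```python
def threshold(image, levels):
    """Assign each pixel to a 'layer': the layer index is the number
    of thresholds (sorted ascending) that the pixel meets or exceeds."""
    return [[sum(p >= t for t in levels) for p in row] for row in image]

image = [[12, 130, 250],
         [90, 200, 40]]
binary = threshold(image, [128])      # one threshold -> two portions
layers = threshold(image, [85, 170])  # two thresholds -> three layers
```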
VII. CONNECTIVITY:
Like other signal processing media, vision systems contain noise. Some noise is systematic and comes from dirty lenses, faulty electronic components, bad memory chips and low resolution. Other noise is random and is caused by environmental effects or bad lighting. The net effect is a corrupted image that needs to be pre-processed to reduce or eliminate the noise. In addition, sometimes images are not of good quality, due to both hardware and software inadequacies; thus they have to be enhanced and improved before other analysis can be performed on them [21].
A mask may be used for many different purposes, including filtering operations and noise reduction. Noise and edges produce higher frequencies in the spectrum of a signal. It is possible to create masks that behave like a low-pass filter, such that the higher frequencies of an image are attenuated while the lower frequencies are not changed very much; thereby the noise is reduced [21].
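A sketch of such a low-pass mask in plain Python (my example, not the document's): a 3×3 averaging mask slid over the image interior smooths out a noisy spike:

```python
def apply_mask(image, mask):
    """Slide a 3x3 mask over the image (borders skipped) and return
    the filtered interior values."""
    rows, cols = len(image), len(image[0])
    out = []
    for i in range(1, rows - 1):
        row = []
        for j in range(1, cols - 1):
            acc = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    acc += mask[di + 1][dj + 1] * image[i + di][j + dj]
            row.append(acc)
        out.append(row)
    return out

average = [[1 / 9] * 3 for _ in range(3)]   # low-pass (averaging) mask
noisy = [[10, 10, 10],
         [10, 90, 10],   # one noisy spike in the middle
         [10, 10, 10]]
smoothed = apply_mask(noisy, average)   # the spike is strongly attenuated
```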
X. EDGE DETECTION:
Edge detection is a general name for a class of routines and techniques that operate on an image and result in a line drawing of the image. The lines represent changes in values such as cross-sections of planes, intersections of planes, textures, lines and colors, as well as differences in shading and texture. Some techniques are mathematically oriented, some are heuristic, and some are descriptive. All generally operate on the differences between the grey levels of pixels or groups of pixels through masks or thresholds. The final result is a line drawing or similar representation that requires much less memory to store, is much simpler to process, and saves in computation and storage costs. Edge detection is also necessary in subsequent processes, such as segmentation and object recognition. Without edge detection, it may be impossible to find overlapping parts, to calculate features such as diameter and area, or to determine parts by region growing [21].
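As an illustrative sketch (mine, not from the source), a very simple edge detector marks a pixel wherever the grey-level difference to its right or lower neighbour exceeds a threshold:

```python
def edges(image, threshold):
    """Return a binary edge map: 1 where the difference to the right
    or lower neighbour exceeds the threshold, else 0."""
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows - 1):
        for j in range(cols - 1):
            gx = image[i][j + 1] - image[i][j]   # horizontal difference
            gy = image[i + 1][j] - image[i][j]   # vertical difference
            if abs(gx) > threshold or abs(gy) > threshold:
                out[i][j] = 1
    return out

# A dark region beside a bright one: edges appear along the boundary
image = [[0, 0, 200, 200],
         [0, 0, 200, 200],
         [0, 0, 200, 200]]
edge_map = edges(image, 50)
```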
Electronic images contain large amounts of information and thus require data
transmission lines with large bandwidth capacity. The requirements for the temporal and
spatial resolution of an image, the number of images per second, and the number of grey
levels are determined by the required quality of the images. Recent data transmission and
storage techniques have significantly improved image transmission capabilities, including
transmission over the Internet [21].
3.4 MODELING:
For hybrid and digital imaging products, modeling is used to predict the performance of the entire system, including elements such as optics, image sensors, scanners, image processing operations, printers, emissive displays, capture and display media, and human visual responses. In a digital camera, for example, modification of only one component, such as the lens, sensor, or color filter array, can significantly impact image quality.
Here we present some applications of image processing in the fields where it is applied, such as robotics, medicine and common uses [22].
APPLICATIONS
Computer vision
Face detection
Feature detection
Lane departure warning system
Non-photorealistic rendering
Medical image processing
Microscope image processing
Morphological image processing
Remote sensing
Automated Sieving Procedures
APPLICATION 1:
In robotics, by using image processing in vision systems, errors are minimized to a great extent and thereby sophistication is increased. Hence image processing is used in the study of robotics [22].
APPLICATION 2:
In the field of medicine, image processing is highly applicable in areas like medical imaging, scanning, ultrasound, X-rays, etc. It is widely used for MRI (Magnetic Resonance Imaging) and CT (Computed Tomography) scans. Tomography is an imaging technique that generates an image of a thin cross-sectional slice of a test piece.
Fig.: Bone scan, chest X-ray, baby scan and MRI scan images.
ADVANTAGES:
In medicine, the use of image processing techniques has increased sophistication; this has led to technological advancement.
Vision systems are flexible, inexpensive, powerful tools that can be used with ease.
In space exploration, robots play a vital role, and they in turn use image processing techniques.
Image processing is used for astronomical observations.
Image processing is also used in remote sensing and in geological surveys for detecting mineral resources, and for character recognition and the inspection of abnormalities in industry.
DISADVANTAGES:
A person needs knowledge of many fields to develop an application, or part of an application, using image processing.
Calculations and computations are difficult and complicated, so an expert in the related field is needed; hence it is unsuitable and unrewarding for ordinary programmers with mediocre knowledge.
4. MEDICAL IMAGING
Medical imaging refers to the techniques and processes used to create images of the
human body (or parts and function thereof) for clinical purposes or medical science
(including the study of normal anatomy and physiology).
Mammography:
X-ray mammography is one of the most challenging areas in medical imaging. It is
used to distinguish subtle differences in tissue type and detect very small objects, while
minimizing the absorbed dose to the breast. Since the various tissues comprising the breast
are radiologically similar, the dynamic range of mammograms is low. Special x-ray tubes
capable of operating at low tube voltages (25–30 kV) are used, because the attenuation of x-
rays by matter is greater and predominantly by photoelectric absorption at small x-ray
energies, resulting in a larger difference in attenuation between similar soft tissues and,
therefore, better subject contrast. However, the choice of x-ray energy is a compromise: too
low an energy results in insufficient penetration with more of the photons being absorbed in
the breast, resulting in a higher dose to the patient. Most modern x-ray units use molybdenum
targets, instead of the usual tungsten targets, to obtain an x-ray output with the majority of
photons in the 15–20 keV range. In order to detect microcalcifications, with diameters that
can be less than 0.1 mm, the spatial resolution of the imaging system needs to be optimized.
The target within the x-ray tube is angled so as to produce a small focal spot size (0.1–0.3
mm), and large focal spot-to-film distances (45–80 cm) reduce the effects of geometric unsharpness. Compression of the breast, normally to about 4 cm in thickness, reduces x-ray
scatter and ensures a more uniform exposure. Immobilization allows a shorter exposure time
which minimizes motion blurring. In film mammography, single-emulsion film, without an
intensifier screen, is used to minimize the detector contribution to unsharpness [23].
Digital subtraction imaging takes this exponential attenuation into account by subtracting, pixel by pixel, the logarithm of the respective images: hence the log amplifier in Fig. 16.
Fig. 17 [23]
This pixel shifting tends to be a trial-and-error process, involving a combination of shifts in
different directions and by differing amounts. Motion artifacts can be a significant problem in
cardiac studies, resulting from the involuntary motion of the soft tissues.
Fig. 18 [23]
CT imaging:
CT imaging is the primary digital technique for imaging the chest, lungs, abdomen and bones
due to its ability to combine fast data acquisition and high resolution, and is ideally suited to
three-dimensional reconstruction. It is particularly useful in the detection of pulmonary (i.e.
lung) disease, because the lungs are difficult to image using ultrasound and MRI. It is often
used to diagnose diffuse diseases of the lung such as emphysema, which involves a sticky
build-up of mucus in the lungs, and cystic fibrosis, which leads to irreversible dilation of the
airways [23].
Fig. 19 [23]
Activity 3.3 uses a “stack” of images of a brain showing hydrocephalus, in which excessive
accumulation of cerebrospinal fluid (CSF) results in an abnormal dilation of the ventricles
(spaces) in the brain, causing potentially harmful pressure on the brain tissues. The user can
move through the stack to identify the slices which show enlarged ventricles.
PET imaging:
Positron emission tomography, PET, is the most recent nuclear medicine imaging technique:
in common with the others, it measures physiological function (e.g. perfusion, metabolism),
rather than gross anatomy. A small, positron-emitting radioisotope with a short half-life (such
as carbon-11, 11C (about 20 min), nitrogen-13, 13N (about 10 min), oxygen-15, 15O (about
2 min), and fluorine-18, 18F (about 110 min)) is incorporated into a metabolically active
molecule (such as glucose, water or ammonia), and injected into the patient. Such labeled
compounds are known as radiotracers. When a positron, i.e. a positively charged electron, is
emitted within a patient, it travels up to several millimetres while losing its kinetic energy.
When the slowly moving positron encounters an electron, they spontaneously disappear and
their rest masses are converted into two 511 keV annihilation (gamma ray) photons, which
propagate away from the annihilation site in opposite directions [23].

Fig. 20
PET differs from single-photon emission computed tomography, SPECT, in that two γ-ray photons are produced at the same time. The output of detectors on opposite sides of the PET scanner is analyzed by a coincidence detector, which only counts events that are simultaneous to within a user-set time window (2–20 ns); this ensures that only the 511 keV photons are counted.
triggering reveals the line of sight of the two photons, and the original positron-emitting
radiopharmaceutical must be somewhere along that line. The intersection of many such lines
delineates the distribution of the pharmaceutical. PET images (Fig. 3.39) have higher signal-
to-noise ratio and better spatial resolution (2mm) than planar scintigraphy and SPECT
images. However, PET systems are much more expensive. Cyclotrons are required to
produce the short-lived positron-emitting isotopes, due to their short half-lives. Few hospitals
and universities are capable of maintaining such systems, and most clinical PET is supported
by third-party suppliers of radiotracers which can supply many sites simultaneously. This
limitation restricts clinical PET primarily to the use of radiotracers labeled with fluorine-18
(T½ ≈ 110 minutes), which can be transported a reasonable distance before use, or to
rubidium-82 (T½ ≈ 75 seconds), which can be created in a portable generator and is used for
myocardial perfusion studies. To facilitate the process of correlating structural and functional
information, scanners that combine x-ray CT and radionuclide imaging, either SPECT or
PET, have been developed. These dual-modality systems use separate detectors for x-ray and
radionuclide imaging, with the detectors integrated on a common gantry. Because the two
scans can be performed in immediate sequence during the same session, with the patient not
changing position between the two types of scans, the two sets of images are more precisely
registered. In the fused image the radionuclide distribution can be displayed in color on a
gray-scale CT image to co-register the anatomical and physiological features and thereby
improve evaluation of disease [23].
Doppler imaging:
Blood velocity measurements are essential in calculating cardiac output and diagnosing
stenosis (narrowing) of the arteries. The Doppler effect can be used to determine blood velocity and to combine this information with B-mode scanning, in a so-called duplex scan.
The Doppler effect is familiar in the form of the increased frequency of a moving
sound source, such as a train whistle or police siren, as it approaches, and the reduced
frequency, as it passes by. The relative change in frequency, Δf/f, depends on the velocity
of the sound emitter, V, relative to the speed of sound in air, Vs. Thus:

Δf / f = ±V / Vs

where the ± refers to the sound source traveling towards (+) or away from (−) the receiver.
Fig. 21
Fig. 22
Doppler measurements are usually displayed as a time series of spectral Doppler plots
(Fig. 22); there is no spatial (i.e. depth) information.
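To make the relation concrete, the shift can be evaluated numerically. The Python sketch below applies it to a pulse-echo ultrasound setting; the 5 MHz transmit frequency, the 1540 m/s sound speed in soft tissue, the 0.5 m/s blood velocity, and the factor of 2 for a reflected beam (the scatterer acts as both a moving receiver and a moving re-emitter) are illustrative assumptions, not values from the text.

```python
def relative_shift(v, v_sound):
    """Relative Doppler shift, delta_f / f = +/- v / v_sound,
    valid when |v| is much smaller than v_sound."""
    return v / v_sound

# Illustrative pulse-echo example (values assumed, not from the text):
# a 5 MHz beam in soft tissue reflected from blood moving at 0.5 m/s
# toward the transducer. For a reflected beam the shift doubles.
f0 = 5.0e6                                   # transmit frequency, Hz
delta_f = 2 * relative_shift(0.5, 1540.0) * f0
print(round(delta_f, 1))                     # ~3.2 kHz, in the audible range
```

A shift of a few kilohertz is why Doppler ultrasound signals can simply be played through a loudspeaker during an examination.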
Ultrasound:
There is a wide range of applications of ultrasound imaging as a result of its non-invasive,
non-ionizing nature and its ability to form real-time axial and three-dimensional
images.
The tissues of interest need to reflect sufficient ultrasound energy; this limits the
method to soft tissues, fluids and small calcifications preferably close to the surface of the
body and unobstructed by bony structures.
Ultrasound is most commonly employed in examinations of the abdomen and pelvis.
In obstetrics, fetal head size and fetal length are used as measures of fetal maturity and health,
while spinal morphology can be used to detect the presence of abnormalities such as spina
bifida. Doppler imaging can be used to measure fetal blood velocity and cardiac function
[23].
Fig. 23 [23]
Ultrasound imaging can be used to complement x-ray mammography in the diagnosis of
breast cancer (Fig. 23). It can help determine whether a lump is a fluid-filled cyst or a solid
mass, and is particularly useful in women with dense breast tissue and in young women,
because their tissue is relatively opaque to x-rays.
Furthermore, MRI scanners are several times as costly as CT scanners because of the
expensive superconducting magnet they require.
5. Overview of implicit active contours
The implicit active contour, or level set, approach was introduced by Osher and Sethian
and has since been enhanced by several authors. A high-level
description of the level set method follows. The basic idea is to start with a closed curve in two
dimensions (or a surface in three dimensions) and allow the curve to move perpendicular to
itself at a prescribed speed. One way of describing this curve is by using an explicit
parametric form, which is the approach used in snakes. As mentioned earlier, this causes
difficulties when the curves have to undergo splitting or merging, during their evolution to
the desired shape. To address this difficulty, the implicit active contour approach, instead of
explicitly following the moving interface itself, takes the original interface and embeds it in a
higher-dimensional scalar function defined over the entire image. The interface is now
represented implicitly as the zeroth level set (or contour) of this scalar function. Over the rest
of the image space, this level set function Φ is defined as the signed distance function from
the zeroth level set. Specifically, given a closed curve C0, the function is zero if the pixel lies
on the curve itself; otherwise, it is the signed minimum distance from the pixel to the curve.
By convention, the distance is regarded as negative for pixels inside C0 and positive for
pixels outside C0. The function Φ, which varies with space and time (that is, Φ = Φ(x, y, t) in
two dimensions) is then evolved using a partial differential equation (PDE), containing terms
that are either hyperbolic or parabolic in nature. In order to illustrate the origins of this PDE,
we next consider the evolution of the function Φ as it evolves in a direction normal to itself
with a known speed F. Here, the normal is oriented with respect to an outside and an inside.
Since the evolving front is the zero level set (i.e., a contour with value 0) of this function, we
require (using a one-dimensional example) [24]

Φ(x(t), t) = 0

for any point x(t) on the zero level set at time t. Differentiating with respect to t and using the chain rule, we have

∂Φ/∂t + ∇Φ(x(t), t) · x′(t) = 0

Since the front moves in the normal direction with speed F, x′(t) · n = F with n = ∇Φ/|∇Φ|, and this becomes the level set evolution equation

∂Φ/∂t + F |∇Φ| = 0

where Φ(x, t = 0), that is, the curve at time t = 0, is given. This formulation enables us to
handle topological changes as the zero level set need not be a single curve, but can easily split
and/or merge as t advances. The last equation can be solved using appropriate finite difference
approximations for the spatial and temporal derivatives and considering the image pixels to
be a discrete grid in the x–y domain with uniform mesh spacing. In order to evolve the level
set, we need the specification of one or more initial closed curves, the initialization of the signed
distance function Φ over the rest of the image, the finite difference discretization of the evolution equation,
and the prescription of the propagation speed F. We next discuss each of these issues in detail
[24].
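As a concrete sketch of such a discretization, the NumPy fragment below evolves Φ under the standard level set equation ∂Φ/∂t + F |∇Φ| = 0 with a constant speed F, using the first-order upwind scheme of Osher and Sethian. The grid size, circle radius, time step, and number of steps are illustrative choices, not values from the text.

```python
import numpy as np

def evolve_level_set(phi, F, dt, steps, h=1.0):
    """Evolve phi under phi_t + F * |grad(phi)| = 0 with a first-order
    upwind scheme (Osher-Sethian); F > 0 moves the front outward."""
    for _ in range(steps):
        dx_m = (phi - np.roll(phi, 1, axis=1)) / h    # backward difference in x
        dx_p = (np.roll(phi, -1, axis=1) - phi) / h   # forward difference in x
        dy_m = (phi - np.roll(phi, 1, axis=0)) / h    # backward difference in y
        dy_p = (np.roll(phi, -1, axis=0) - phi) / h   # forward difference in y
        if F > 0:  # upwind choice of one-sided differences for outward motion
            grad = np.sqrt(np.maximum(dx_m, 0)**2 + np.minimum(dx_p, 0)**2
                           + np.maximum(dy_m, 0)**2 + np.minimum(dy_p, 0)**2)
        else:
            grad = np.sqrt(np.minimum(dx_m, 0)**2 + np.maximum(dx_p, 0)**2
                           + np.minimum(dy_m, 0)**2 + np.maximum(dy_p, 0)**2)
        phi = phi - dt * F * grad
    return phi

# Initial contour: signed distance to a circle of radius 10 on a 64x64 grid
y, x = np.mgrid[0:64, 0:64]
phi0 = np.sqrt((x - 32.0)**2 + (y - 32.0)**2) - 10.0
phi = evolve_level_set(phi0, F=1.0, dt=0.5, steps=10)
# After time F*dt*steps = 5 the zero level set sits near radius 15, so a
# pixel at radius 12 is now inside while a pixel at radius 16 is still outside:
print(phi[32, 44] < 0, phi[32, 48] > 0)
```

The time step obeys the CFL-type restriction dt · F / h ≤ 1 required for the upwind scheme to remain stable.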
The speed F depends on many factors including the local properties of the curve, such as the
curvature, and the global properties, such as the shape and the position of the front. It can be
used to control the front in several different ways. The original level set method proposed
using F as the sum of two terms

F = F0 + F1(κ)

where F0 is a constant advection term and F1(κ) depends on the local curvature κ.
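The curvature entering the speed term can be computed directly from Φ as κ = div(∇Φ / |∇Φ|). The NumPy sketch below evaluates this expression with finite differences; the circle test case and grid size are illustrative, and the check that the curvature of a circular level set of radius r is approximately 1/r follows from standard differential geometry rather than from the text.

```python
import numpy as np

def curvature(phi, h=1.0):
    """Curvature kappa = div(grad(phi) / |grad(phi)|) of the level sets of
    phi, via the standard expression in first and second derivatives."""
    py, px = np.gradient(phi, h)       # first derivatives (axis 0 = y, axis 1 = x)
    pyy, pyx = np.gradient(py, h)      # second derivatives of py
    pxy, pxx = np.gradient(px, h)      # second derivatives of px
    denom = (px**2 + py**2)**1.5 + 1e-12   # small epsilon avoids division by zero
    return (pxx * py**2 - 2 * px * py * pxy + pyy * px**2) / denom

y, x = np.mgrid[0:64, 0:64].astype(float)
phi = np.sqrt((x - 32)**2 + (y - 32)**2) - 10.0   # signed distance to a circle
k = curvature(phi)
# On the contour (radius 10) the curvature should be close to 1/10:
print(abs(k[32, 42] - 0.1) < 0.02)
```

A speed such as F1(κ) = −εκ (with ε a small positive weight) is one common choice: it smooths the front by shrinking regions of high curvature.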
A key challenge in implicit active contours is the placement of the initial contour. Since the
contour moves either inward or outward, its initial placement will determine the segmentation
that is obtained. For example, if there is a single object in an image, an initial contour placed
outside the object and propagated inward will segment the outer boundary of the object.
However, if the object has a hole in the middle, it will not be possible to obtain the boundary
of this hole unless the initial contour is placed inside the hole and propagated outward. It
should be noted that more than one closed curve can be used for initialization of the zeroth
level set [24].
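Initializing with several closed curves is straightforward in the level set representation: taking the pointwise minimum of the individual signed distance functions yields a single Φ whose zero level set is all of the seed curves at once. The two seed circles below (their centers and radii) are illustrative values.

```python
import numpy as np

# Two seed circles combined into one level set function. The pointwise
# minimum of the signed distances makes both circles part of the zeroth
# level set, with phi negative inside either circle.
y, x = np.mgrid[0:64, 0:64]
phi_a = np.sqrt((x - 20.0)**2 + (y - 32.0)**2) - 8.0   # seed circle A
phi_b = np.sqrt((x - 44.0)**2 + (y - 32.0)**2) - 8.0   # seed circle B
phi = np.minimum(phi_a, phi_b)                          # union of the seeds
# Centers of both circles are inside; a far corner is outside:
print(phi[32, 20] < 0, phi[32, 44] < 0, phi[0, 0] > 0)
```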
5.3 Calculation of the distance function
Once the initial contour has been determined, we need to calculate the signed distance
function Φ, that is, the minimum distance from each pixel in the image to the prescribed
initial contour. This is done by solving the Eikonal equation, which is derived from the level
set formulation as follows. Suppose the speed function F is greater than zero. As the front
moves outward, one way to characterize the position of the front is to compute the arrival
time T(x, y) at which it crosses each point (x, y). This arrival function is related to the speed by

F |∇T| = 1

where T = 0 on the initial contour. When the speed F depends only on position, this equation
is referred to as the Eikonal equation. The solution of this equation for a constant speed of
unity gives the distance function. The sign is then attached to the function depending on the
location of each pixel relative to the original contour. In our work, we used the fast sweeping
method to solve the Eikonal equation as described by Zhao [24].
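For small grids the distance function can also be computed by brute force, which makes the definition concrete; the fragment below is a deliberately simple stand-in for Zhao's fast sweeping solver (for F = 1 the Eikonal solution |∇T| = 1 is exactly the distance to the contour). The circular contour, grid size, and number of contour samples are illustrative values.

```python
import numpy as np

# Signed distance from every pixel to a sampled initial circular contour,
# by brute-force minimization over the contour samples.
N, cx, cy, r = 64, 32.0, 32.0, 10.0
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
curve = np.stack([cx + r * np.cos(t), cy + r * np.sin(t)], axis=1)  # contour points

y, x = np.mgrid[0:N, 0:N]
pts = np.stack([x.ravel(), y.ravel()], axis=1).astype(float)
# Unsigned distance: minimum over all sampled contour points
d = np.sqrt(((pts[:, None, :] - curve[None, :, :])**2).sum(-1)).min(1).reshape(N, N)
# Attach the sign: negative inside the contour, positive outside
sign = np.where((x - cx)**2 + (y - cy)**2 < r**2, -1.0, 1.0)
phi = sign * d
print(phi[32, 32] < 0, phi[0, 0] > 0)   # center is inside, corner is outside
```

Brute force costs O(pixels × contour samples); fast sweeping reaches the same distance function in a few O(pixels) passes, which is why it is preferred on realistic image sizes.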
The finite difference approach essentially considers the discretized version of the image
I(x, y) to correspond to the intensity at the pixels (i, j), at locations (xi, yj), where i = 1, ..., N
and j = 1, ..., M. The distance between the centers of the pixels, referred to as the grid
spacing, is h. The same inter-pixel distance h is used along the x and y dimensions, following
the approach of Sethian [24].
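On such a grid, derivatives reduce to differences of neighboring pixel values. The sketch below approximates the image gradient with central differences of spacing h; the synthetic test image I(i, j) = sin(0.1 j) is an illustrative choice whose exact derivative is known.

```python
import numpy as np

# Central-difference approximation of the image gradient on a uniform
# pixel grid with spacing h along both x and y.
h = 1.0
I = np.fromfunction(lambda i, j: np.sin(0.1 * j), (32, 32))      # test image
Ix = (np.roll(I, -1, axis=1) - np.roll(I, 1, axis=1)) / (2 * h)  # dI/dx
Iy = (np.roll(I, -1, axis=0) - np.roll(I, 1, axis=0)) / (2 * h)  # dI/dy
# At interior pixels Ix should approximate the exact derivative 0.1*cos(0.1*j),
# and Iy should vanish since I does not vary with i:
print(abs(Ix[16, 16] - 0.1 * np.cos(0.1 * 16)) < 1e-3, abs(Iy[16, 16]) < 1e-12)
```

Note that np.roll wraps around at the array boundary, so the first and last rows and columns would need one-sided differences in a careful implementation.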
5.5 Level Set Method
The level set method is a way to represent an active contour. For a given image u0, we use a level
set function φ to describe the desired contour. The φ function has the same size as the image
u0, which means each pixel of u0 has a corresponding φ value [24]. We define
the region where φ = 0 as the contour C, the region where φ > 0 as the inside of the contour, and
the region where φ < 0 as the outside, as shown in Fig. 24. (Note that this sign convention is
the opposite of the one used earlier, where φ was negative inside the initial curve.)
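Under this convention, segmenting the image amounts to reading off the sign of φ at each pixel. The short sketch below classifies pixels for an illustrative circular φ (the center and radius are assumed values):

```python
import numpy as np

# Pixel classification by the sign of phi, following the convention of this
# section: phi = 0 on the contour C, phi > 0 inside, phi < 0 outside.
y, x = np.mgrid[0:16, 0:16]
phi = 5.0 - np.sqrt((x - 8)**2 + (y - 8)**2)   # positive inside a circle of radius 5
inside = phi > 0
outside = phi < 0
print(inside[8, 8], outside[0, 0])   # center is inside, corner is outside
```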
References
[1] B. Gold, C. M. Rader, Digital Processing of Signals, New York: McGraw-Hill, 1969.
[2]
https://www.rpi.edu/dept/phys/ScIT/InformationTransfer/sigtransfer/signalcharacteristics.html
[3] https://learn.sparkfun.com/tutorials/analog-vs-digital/all
[4] https://ieeexplore.ieee.org/document/826412
[5] Digital Signal Processing with Examples in MATLAB®, Second Edition
[6] J. S. Small, "General-Purpose Electronic Analog Computing: 1945–1965", IEEE Annals
of the History of Computing, vol. 15, no. 2, pp. 8-18, 1993.
[7] A. V. Oppenheim, R. W. Schafer, Digital Signal Processing, New Jersey: Prentice-Hall,
1975.
[8] A. Arbel, "Transistorized analogue to digital converter", The Nuclear Electronics
Conference Organized by the International Atomic Energy Agency, 1961-May
[9] S. Westermann, R.E. Sandlin, "Digital Signal Processing: Benefits and
Expectations", Supplement To The Hearing Review, vol. 2, pp. 56-59, 1997
[10] R. H. Walden, "Analog-to-digital converter survey and analysis", IEEE J. Select. Areas
Commun., vol. 17, no. 4, pp. 539-550, Apr. 1999.
[11] https://www.polytechnichub.com/mean-adc-analog-digital-converter/
[12] D. S. Ruchkin, "Linear reconstruction of quantized and sampled random signals", IRE
Trans. Communications Systems, vol. CS-9, pp. 350-355, December 1961.
[13] https://www.slideserve.com/octavia/survey-of-quantization
[14] W. R. Bennett, "Spectrum of quantized signals", Bell Syst. Tech. J., vol. 27, pp. 446-
472, July 1948.
[15] C. E. Shannon, "Coding theorems for a discrete source with a fidelity criterion" in
Information and Decision Processes, New York:McGraw-Hill, pp. 93, 1960.
[16]
https://www.tutorialspoint.com/linear_integrated_circuits_applications/linear_integrated_circu
its_applications_digital_to_analog_converters.htm
[17] http://dspforyou.blogspot.com/2012/08/image-processing-types-of-images.html
[18] https://www.mathsisfun.com/data/analog-digital.html
[19] R. C. Gonzalez, R. E. Woods, Digital Image Processing, Addison-Wesley, 1993.
[20] F. Meyer, An overview of morphological segmentation. International Journal of Pattern
Recognition and Artificial Intelligence, vol. 15, no. 7, pp. 1089-1118, 2001.
[21] https://www.living-democracy.rs/en/textbooks/volume-1/part-2/unit-4/chapter-2/lesson-1-
2/
[22] S. B. Niku, Introduction to Robotics: Analysis, Systems, Applications.
[23] https://typeset.io/formats/ieee/ieee-transactions-on-medical-
imaging/333e3b8eea8c685d9bfd378530d469ff
[24] A. Levinshtein, C. Sminchisescu, S. J. Dickinson, "Optimal Contour Closure by
Superpixel Grouping," in Proceedings of the 11th European Conference on Computer Vision,
Heraklion, Crete, Greece, 2010, Part 2, pp. 480-493.