
Faculty of Engineering

Electronics & Communication Department

High definition image processing

Submitted by: Mai Mohamed Abdelgelil Amin


Supervised by: Prof. Dr. Mazhar Basiouni Tayel
Submitted on: 24/12/2019

Contents
1. What is a signal and a system?
   1.1 Representation of continuous-time (analog) sinusoidal signals
      1.1.1 Properties of analog signals
   1.2 Representation of discrete-time (digital) sinusoidal signals
      1.2.1 Properties of discrete-time (digital) sinusoidal signals
2. Signal Processing
   2.1 Basic Elements of a Signal Processing System
      2.1.1 Analog Signal Processing
      2.1.2 Digital Signal Processing
3. Images and Pictures
   3.1 Types of Images
   3.2 Image Processing
      3.2.1 Analog image processing
      3.2.2 Digital image processing
      3.2.3 INDEX TERMS
   3.3 ANALYSIS of Image Processing
   3.4 MODELING
   APPLICATIONS
   ADVANTAGES
   DISADVANTAGES
4. MEDICAL IMAGING
   4.1 Medical imaging modalities
5. Overview of implicit active contours
   5.1 Choice of the speed function
   5.2 The placement of the initial contour
   5.3 Calculation of the distance function
   5.4 The discretization of the level set equation
   5.5 Level Set Method
References

1. What is a signal and a system?
Signal: A signal is a physical quantity that varies with time, space or any other independent
variable by which information can be conveyed. A signal is an electric current or
electromagnetic field used to convey data from one place to another.

There are two types of signal: (1) analog signals and (2) digital signals.

Continuous-time signals or Analog Signals: A signal that varies continuously with time is called a continuous-time signal. These signals are defined for every value of the independent variable, namely time. For example, speech signals and the temperature of a room are continuous-time signals [1].

Discrete-time Signals or Digital Signals: Discrete-time signals are signals which are defined only at discrete instants of time. They are represented by sequences of numbers. For example, a rail traffic signal is a discrete-time signal. Digital signals can be obtained by periodic sampling and quantization of continuous-time signals [1].

Fig. 1 graphical representation of a discrete-time signal [2].

Continuous-Valued Signals: If a signal takes on all possible values on a finite or an infinite range, it is said to be a continuous-valued signal. For example, a sine wave takes on values over an infinite range, while a square wave takes on values over the finite range from 0 to 1.

Discrete-Valued Signals: If a signal takes on values from a finite set of possible values, it is said to be a discrete-valued signal. Usually, these values are equidistant and hence can be expressed as an integer multiple of the distance between two successive values.

1.1 Representation of continuous-time (analog) sinusoidal signals:


As in [3], a simple harmonic oscillation is mathematically described as:

xa(t) = A cos(Ωt + θ),  −∞ < t < ∞        (1)

The subscript a used with x(t) denotes an analog signal. This signal is completely characterized by three parameters: A is the amplitude of the sinusoid, Ω is the (radian) frequency, and θ is the phase.

Instead of Ω, we often use the frequency F, where Ω = 2πF. In terms of F, the signal can be written as:

xa(t) = A cos(2πFt + θ),  −∞ < t < ∞        (2)

Both forms of representation of the sinusoidal signal are shown in Fig. 2.

Fig. 2 representation of an analog sinusoidal signal [4]

1.1.1 Properties of Analog Signal

1. For every fixed value of the frequency F, xa(t) is periodic:

xa(t + Tp) = xa(t), where Tp = 1/F is the fundamental period of the signal.

2. Continuous-time sinusoidal signals with different frequencies are themselves distinct.

3. Increasing the frequency F results in an increase in the rate of oscillation of the signal.

1.2 Representation of discrete-time (digital) sinusoidal signals:


As in [5], a discrete-time (digital) sinusoidal signal may be expressed as

x(n) = A cos(ωn + θ),  −∞ < n < ∞        (3)

If instead of ω we use the frequency variable f defined by ω ≡ 2πf, the relation becomes

x(n) = A cos(2πfn + θ),  −∞ < n < ∞        (4)

The sinusoidal signal is shown in Fig. 3.

Fig. 3 representation of a discrete-time (digital) sinusoidal signal [6]

1.2.1 Properties of discrete-time (digital) sinusoidal signals:

1. A discrete-time sinusoid is periodic only if its frequency f is a rational number: x(n + N) = x(n) for all n, where N is the period.

2. Discrete-time sinusoids whose frequencies are separated by an integer multiple of 2π are identical.

3. The highest rate of oscillation in a discrete-time sinusoid is attained when ω = π (or ω = −π), or equivalently, f = 1/2 (or f = −1/2).
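To make these properties concrete, here is a minimal Python sketch (illustrative only, not part of the cited derivation) that checks properties 1 and 2 numerically:

```python
import numpy as np

n = np.arange(0, 16)                           # discrete time index

# Property 2: frequencies separated by 2*pi give identical sinusoids,
# because omega*n then changes by an integer multiple of 2*pi.
x1 = np.cos(0.3 * np.pi * n)
x2 = np.cos((0.3 * np.pi + 2 * np.pi) * n)
print(np.allclose(x1, x2))                     # True

# Property 1: x(n) = cos(2*pi*f*n) is periodic only for rational f.
f = 1 / 8                                      # rational, period N = 8
x = np.cos(2 * np.pi * f * n)
print(np.allclose(x[:8], x[8:16]))             # True: x(n + 8) = x(n)
```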

System: A system may be defined as a physical device that performs an operation on a signal; it is any physical set of components that takes a signal as input and produces a signal as output.

2. Signal Processing:
Signal processing is the science of analysing, synthesizing, sampling, encoding, transforming,
decoding, enhancing, transporting, archiving, and in general manipulating signals in some
way [7].

Processing of signals can be classified into two classes:

1. Analog Signal Processing


2. Digital Signal Processing

2.1 Basic Elements of a Signal Processing System:
2.1.1 Analog Signal Processing:
Generally, most of the signals encountered in science and engineering are analog in nature. That is, the signals are functions of a continuous variable, such as time or space, and usually take on values in a continuous range.

Fig. 4 analog signal processing [8]

These signals may be processed directly by an appropriate analog system selected for the objective of changing their characteristics. In this case we say that the signal has been processed directly in its analog form; the input as well as the output signals are in analog form [8].

2.1.2 Digital Signal Processing:

Digital signal processing is an alternative method for processing the analog signal, as shown below. To perform the processing digitally, an interface is needed between the analog signal and the digital processor; this interface is called an analog-to-digital converter [9].

Fig. 5 digital signal processing [9]

To perform the processing digitally, an interface known as an analog-to-digital (A/D) converter is placed between the analog signal and the digital signal processor. The output of the A/D converter is a digital signal which serves as input to the digital processor.

The digital signal processor may be a large programmable digital computer or a small microprocessor programmed to perform the desired operations on the input signal. It can also be a hardwired digital processor configured to perform a specified set of operations on the input signal. Programmable machines allow the signal processing operations to be changed through a change in the software, whereas hardwired machines are difficult to reconfigure. In some cases, where the digital output from the digital signal processor must be given to the user in analog form, as in speech communications, we use another interface known as a digital-to-analog (D/A) converter, which converts the signal from the digital domain back to the analog domain. Thus, the signal is provided to the user in analog form [9].

However, in some signal-analysis applications the desired information may remain in digital form, in which case the D/A converter is not necessary.

2.1.2.1 Analog-to-Digital Conversion


To process analog signals by digital means, it is first necessary to convert them into digital form, i.e., into a sequence of numbers having finite precision. This procedure is called analog-to-digital (A/D) conversion, and the devices which perform it are called analog-to-digital converters (ADCs). An analog signal is continuous both in time and in amplitude, i.e., it has a continuous range of values on both the time axis and the amplitude axis; it is therefore called a continuous-time continuous-value (CTCV) signal. A digital signal, on the other hand, is discretized both in time and in amplitude, and the resulting amplitudes are represented as sequences of binary digits. Analog-to-digital conversion therefore requires discretizing both axes, first the time axis and then the amplitude axis [10]. The entire process is done in three stages, namely sampling, quantization and encoding; a block diagram of the A/D process and the corresponding waveforms of the continuous, discrete and digital signals are shown in Fig. 6.

[Block diagram: continuous-time signal xa(t) → Sampler → discrete-time signal x(nT) → Quantizer → quantized signal xq(n) → Encoder → digital data 011001100…]

Fig. 6: Block diagram of Analog-to-Digital Conversion [11]

 Sampling: This is the conversion of a continuous-time signal into a discrete-time signal by taking samples of the continuous-time signal at discrete time instants. Sampling is thus the reduction of a continuous signal to a discrete signal. A common example is the conversion of a sound wave (a continuous signal) to a sequence of samples (a discrete-time signal) [12].

Fig. 7: The continuous signal is represented by a green line, while the discrete samples are indicated by the blue vertical lines [13].

A sample is a value or set of values at a point in time and/or space. A sampler is a
subsystem or operation that extracts samples from a continuous signal. A theoretical
ideal sampler produces samples equivalent to the instantaneous value of the
continuous signal at the desired points.
In practice, the continuous signal is sampled using an analog-to-digital converter (ADC), a device with various physical limitations. This results in deviations from the theoretically perfect reconstruction, collectively referred to as distortion.
Sampling Theorem: If the highest frequency contained in an analog signal y(t) is Fmax = B and the signal is sampled at a rate FS > 2Fmax = 2B, then y(t) can be exactly recovered from its sample values using the interpolation function

g(t) = sin(2πBt) / (2πBt)        (5)

Thus y(t) may be expressed as

y(t) = Σn y(n/FS) g(t − n/FS),  summed over n = −∞, …, ∞        (6)

where the y(n/FS) = y(n) are the samples of y(t).

The sampling rate FS = 2B = 2Fmax is called the Nyquist rate.
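As an illustration of the theorem, the following sketch (an assumption-laden toy example, not a full reconstruction system) samples a cosine of frequency B at FS = 4B > 2B and rebuilds it with the sinc interpolation of Eq. (6); because the sum is truncated to a finite window, the reconstruction is only approximate near the window edges:

```python
import numpy as np

B = 50.0                                  # highest frequency in the signal (Hz)
Fs = 4 * B                                # sampling rate above the Nyquist rate 2B

t = np.linspace(0, 0.1, 2000)             # dense "continuous" time axis
y = np.cos(2 * np.pi * B * t)             # band-limited test signal

N = int(0.1 * Fs)                         # number of samples in the window
samples = np.cos(2 * np.pi * B * np.arange(N) / Fs)   # y(n/Fs)

# Sinc interpolation, Eq. (6): y(t) = sum_n y(n/Fs) g(t - n/Fs)
y_rec = sum(s * np.sinc(Fs * t - n) for n, s in enumerate(samples))
print(np.max(np.abs(y - y_rec)))          # small, except near the window edges
```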

 Quantization: This is the conversion of a discrete-time continuous-valued signal into a discrete-time discrete-valued signal. The value of each signal sample is represented by a value selected from a finite set of possible values.
Quantization error: In analog-to-digital conversion, the difference between the actual analog value (the non-quantized sample) and the quantized digital value is called the quantization error or quantization distortion [14].

 Coding: Each discrete value is represented by a b-bit binary sequence.

The purpose of the encoder is to represent the value of every quantized sample in the form of a b-bit binary code, where 'b' is the number of bits in the binary equivalent. Simply put, encoding is the process of assigning a binary sequence to every sampled value of the quantized signal. There are a variety of ways to perform the encoding, and they are called coding techniques. All these coding techniques aim at reducing the redundancy of the data, where redundancy is the repetition of data, which increases the length of the code and ultimately the memory required. If we remove the redundant data, we can represent the same data with fewer bits. Some of the coding methods are Huffman coding, Shannon-Fano coding, arithmetic coding, etc. After encoding, the entire analog signal can be represented as a sequence of binary bits, which is the signal in digital form [15].
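The quantization and encoding stages can be sketched in a few lines of Python. This is a minimal illustration assuming samples normalized to [−1, 1) and a simple uniform quantizer, not any particular ADC architecture or coding technique:

```python
import numpy as np

b = 3                                    # bits per sample -> 2**3 = 8 levels
levels = 2 ** b

x = np.array([0.12, -0.73, 0.48, 0.99, -0.31])   # samples, assumed in [-1, 1)

step = 2.0 / levels                      # spacing between quantization levels
idx = np.clip(np.floor((x + 1) / step), 0, levels - 1).astype(int)
xq = -1 + (idx + 0.5) * step             # quantized values (uniform quantizer)

err = x - xq                             # quantization error, |err| <= step/2
codes = [format(i, '0{}b'.format(b)) for i in idx]   # b-bit binary codewords

for xi, xqi, c in zip(x, xq, codes):
    print('{:+.2f} -> {:+.3f}  code {}'.format(xi, xqi, c))
```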
Improvement of Accuracy in ADC
Two important methods are used for improving the accuracy of an ADC: increasing the resolution and increasing the sampling rate. A digital-to-analog converter (DAC), illustrated in Fig. 8 below, is a device for converting a digital (usually binary) code into an analog signal (current, voltage or charge). DACs are the interface between the abstract digital world and analog real life. Simple switches, a network of resistors, current sources or capacitors may implement this conversion. An ADC performs the reverse operation [15].

[Diagram: a stream of binary codewords applied to a DAC, which produces the corresponding analog output levels]
Fig. 8 Digital-to-analog conversion [16]

3. Images and Pictures:
Human beings are predominantly visual creatures. We not only look at things to identify and
classify them, but we can scan for differences, and obtain an overall rough feeling for a scene
with a quick glance.

Humans have evolved very precise visual skills: we can identify a face in an instant; we can
differentiate colours; we can process a large amount of visual information very quickly.

An image is a single picture which represents something. It may be a picture of a person, of people or animals, or of an outdoor scene, or a microphotograph of an electronic component, or the result of medical imaging.

3.1 Types of Images:


(i) Binary: Each pixel is just black or white. Since there are only two possible values for each
pixel, we only need one bit per pixel. Such images can therefore be very efficient in terms of
storage. Images for which a binary representation may be suitable include text (printed or
handwritten), fingerprints, or architectural plans [17].

(ii) Grey scale: Each pixel is a shade of grey, normally from 0 (black) to 255 (white). This range means that each pixel can be represented by eight bits [17].

(iii) RGB (or True) Images: Here each pixel has a particular colour, described by the amount of red, green and blue in it. If each of these components has a range of 0 to 255, this gives a total of 256³ = 16,777,216 different possible colours in the image. This is enough colours for any image. Since the total number of bits required for each pixel is 24, such images are also called 24-bit colour images. Such an image may be considered as consisting of a stack of three matrices, representing the red, green and blue values for each pixel. This means that for every pixel there correspond three values [17].

(iv) Indexed: Most colour images only use a small subset of the more than sixteen million possible colours. For convenience of storage and file handling, the image has an associated colour map, or colour palette, which is simply a list of all the colours used in that image. Each pixel has a value which does not give its colour (as for an RGB image), but an index into the colour map. It is convenient if an image has 256 colours or fewer, for then the index values require only one byte each to store [17].
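The storage cost of these image types can be illustrated with a small NumPy sketch. The 4×4 image, the two-colour palette, and the grey-scale weights (a common luminance approximation) are all assumptions made only for this demonstration:

```python
import numpy as np

# Indexed representation: a palette plus one index per pixel
palette = np.array([[255, 0, 0], [0, 0, 255]], dtype=np.uint8)  # red, blue
index = np.random.randint(0, 2, (4, 4)).astype(np.uint8)        # 1 byte/pixel

rgb = palette[index]                      # 24-bit RGB image: 3 bytes per pixel

# Grey scale: one 8-bit value per pixel (luminance-style weighted average)
grey = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
        + 0.114 * rgb[..., 2]).astype(np.uint8)

binary = grey > 127                       # binary: conceptually one bit per pixel

print(rgb.nbytes, grey.nbytes, index.nbytes + palette.nbytes)   # 48 16 22
```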

3.2 Image Processing
Image processing pertains to the alteration and analysis of pictorial information. A common case of image processing is the adjustment of the brightness and contrast controls on a television set; by doing this we enhance the image until its subjective appearance is most appealing. The biological system (eye, brain) receives, enhances, dissects, analyzes and stores images at enormous speed.
Basically there are two methods for processing pictorial information:
I. Optical processing
II. Electronic processing.
Optical processing uses an arrangement of optics or lenses to carry out the process. An important form of optical image processing is found in the photographic darkroom.
Electronic image processing is further classified as:
(i). Analog processing
(ii). Digital processing.

3.2.1 Analog image processing:

A typical example of analog image processing is the control of the brightness and contrast of a television image. The television signal is a voltage level that varies in amplitude to represent brightness throughout the image; by electrically altering this signal, we correspondingly alter the appearance of the final displayed image [9].

3.2.2 Digital image processing:

3.2.2.1 Digital image

The first computer-generated digital images were produced in the early 1960s, alongside
development of the space program and in medical research. A digital image is a
representation of a two-dimensional image using ones and zeros (binary). Depending on
whether or not the image resolution is fixed, it may be of vector or raster type. Without qualification, the term "digital image" usually refers to raster images, also called bitmap images.

Fig. 9: Analog and Digital image [18]

3.2.2.2 Digital image processing

Digital image processing is the use of computer algorithms to perform image processing on
digital images. As a subcategory or field of digital signal processing, digital image processing
has many advantages over analog image processing. It allows a much wider range of
algorithms to be applied to the input data and can avoid problems such as the build-up of
noise and signal distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems [9]. Digital image processing allows the use of much more complex algorithms for image processing, and hence can offer both more sophisticated performance at simple tasks and the implementation of methods which would be impossible by analog means. In particular, digital image processing is the only practical technology for:

 Classification
 Feature extraction
 Pattern recognition
 Projection
 Multi-scale signal analysis

Digital image processing refers to the processing of digital images by means of a digital computer. Digital images are composed of a finite number of elements, each of which has a particular location and value; these elements are referred to as picture elements, image elements, or pixels [9].

Digital image processing is concerned with the processing of an image. In simple words, an image is a representation of a real scene, either in black and white or in colour, and either in print form or in digital form. Technically, an image is a two-dimensional light-intensity function, i.e., intensity values arranged in a two-dimensional array, and the required properties of an image can be extracted by processing it. An image is typically described by stochastic models: the image itself is represented by an autoregressive (AR) model, while degradation is represented by a moving-average (MA) model.

Another form of representation is the orthogonal series expansion. An image processing system is typically a non-causal system. Image processing is two-dimensional signal processing; due to the linearity property, we can operate on rows and columns separately. Image processing is widely implemented in "vision systems" in robotics. Robots are designed, and meant, to be controlled by a computer or similar devices, while vision systems are among the most sophisticated sensors used in robotics. They relate the function of a robot to its environment as all other sensors do.

"Vision systems" may be used for a variety of applications, including manufacturing, navigation and surveillance. Some of the applications of image processing are:

1. Robotics.
2. Medical field.
3. Graphics and animations.
4. Satellite imaging.

3.2.3. INDEX TERMS

A. IMAGE PROCESSING:
Image processing is a subclass of signal processing concerned specifically with pictures. Its goal is to improve image quality for human perception and/or computer interpretation.

Image enhancement: bringing out detail that is obscured, or simply highlighting certain features of interest in an image [21].

Fig. 10 quality of image [21]

B. IMAGE RESTORATION:
Techniques for improving the appearance of an image; these tend to be based on mathematical or probabilistic models of image degradation.

Example: a distorted image and the corresponding restored image.

Fig. 11 IMAGE RESTORATION [21]

C. COLOR IMAGE PROCESSING


Gaining in importance because of the significant increase in the use of digital images
over the Internet [21].

D. WAVELETS
Wavelets are the foundation for representing images at various degrees of resolution. They are used in image data compression and pyramidal representation (images are subdivided successively into smaller regions) [21].

E. COMPRESSION

Reducing the storage required to save an image or the bandwidth required to


transmit it. Ex. JPEG (Joint Photographic Experts Group) image compression standard [21].

F. MORPHOLOGICAL PROCESSING
Tools for extracting image components that are useful in the representation and
description of shape [21].

Fig. 12 MORPHOLOGICAL PROCESSING [21]

G. IMAGE SEGMENTATION
The main purpose of medical image segmentation is to find a collection of highly correlated parts or tissues, so that computers or doctors can do further analysis on each part. Because of some properties of medical images, such as low image contrast, noise and diffuse boundaries, medical image segmentation often faces difficult challenges. Image segmentation is a traditional issue in image processing. There has been much research on this topic, and many methods have been proposed, such as zero crossing, thresholding, region-based segmentation, watershed, and the level set method. Some of these methods are gradient-based and are vulnerable to weak edges; some are intensity-based and are vulnerable to noise. Medical images have both of these properties, which must be overcome. In this report, the level set method, incorporated with some additional mechanisms, is used for medical image segmentation because of its advantages, which are introduced in a later section [20].

Figure 13: Result of image segmentation.

3.3 ANALYSIS of Image Processing
The following is an overall view and analysis of image processing.

A. IMAGE PROCESSING TECHNIQUES:

Image processing techniques are used to enhance, improve, or otherwise alter an image and to prepare it for image analysis. Usually, information is not extracted from the image during image processing. The intention is to remove faults and trivial information, or information that may be important but not useful, and to improve the image. Image processing is divided into many sub-processes, including histogram analysis, thresholding, masking, edge detection, segmentation, and others.
B. STAGES IN IMAGE PROCESSING:
B. STAGES IN IMAGE PROCESSING:

[Flow diagram: a problem → Image Acquisition → Preprocessing → Segmentation → Representation and Description → Recognition and Interpretation → a solution, with a Knowledge Base supporting every stage]

Fig. 14 STAGES IN IMAGE PROCESSING [21]

I. IMAGE ACQUISITION:

An image is captured by a sensor (such as a monochrome or colour TV camera) and digitized. If the output of the camera or sensor is not already in digital form, an analog-to-digital converter digitizes it.

For example, consider cancer cell detection in images. The first stage starts with taking a collection of CT scan images from the database (ACSC). The images are stored in MATLAB and displayed as grey-scale images. Lung CT images have lower noise than plain X-ray scans and MRI images, so CT images are chosen for examining the lungs; the main advantages of computed tomography images are better clarity, low noise and low distortion. For experimental purposes, the CT scans of 10 male subjects were examined and stored in an image database in JPEG/PNG image standards [21].

II. RECOGNITION AND INTERPRETATION:

Recognition is the process that assigns a label to an object based on the information
provided by its descriptors. Interpretation is assigning meaning to an ensemble of recognized
objects.
III. SEGMENTATION:

Segmentation is the generic name for a number of different techniques that divide the
image into segments of its constituents. The purpose of segmentation is to separate the
information contained in the image into smaller entities that can be used for other purposes
[21].

IV. REPRESENTATION AND DESCRIPTION:

Representation and Description transforms raw data into a form suitable for the
Recognition processing [21].

V. KNOWLEDGE BASE:

Knowledge about the problem domain, detailing the regions of an image where the information of interest is known to be located, forms the knowledge base. It helps to limit the search [21].

VI. THRESHOLDING:

Thresholding is the process of dividing an image into different portions by picking a certain greyness level as a threshold, comparing each pixel value with the threshold, and then assigning the pixel to one of the portions, depending on whether the pixel's greyness level is below or above the threshold value. Thresholding can be performed either at a single level or at multiple levels, in which case the image is processed by dividing it into "layers", each with a selected threshold [21].

Various techniques are available to choose an appropriate threshold, ranging from simple routines for binary images to sophisticated techniques for complicated images.
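As a minimal illustration, the sketch below applies a single-level threshold and a multi-level ("layer") division to a random image; the threshold values are arbitrary and chosen only for demonstration:

```python
import numpy as np

image = np.random.randint(0, 256, (5, 5), dtype=np.uint8)

# Single-level thresholding: pixels at or above T go to one portion (255),
# pixels below T go to the other portion (0)
T = 128
binary = np.where(image >= T, 255, 0).astype(np.uint8)

# Multi-level thresholding: the grey range is divided into "layers"
layers = np.digitize(image, bins=[64, 128, 192])   # labels 0..3 per pixel

print(binary)
print(layers)
```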

VII. CONNECTIVITY:

Sometimes we need to decide whether neighbouring pixels are somehow "connected" or related to each other. Connectivity establishes whether they have the same property, such as being of the same region, coming from the same object, having a similar texture, etc. To establish the connectivity of neighbouring pixels, we first have to decide upon a connectivity path [21].

VIII. NOISE REDUCTION:

Like other signal processing media, vision systems contain noise. Some noise is systematic and comes from dirty lenses, faulty electronic components, bad memory chips and low resolution. Other noise is random and is caused by environmental effects or bad lighting. The net effect is a corrupted image that needs to be pre-processed to reduce or eliminate the noise. In addition, sometimes images are not of good quality due to both hardware and software inadequacies; thus, they have to be enhanced and improved before other analysis can be performed on them [21].

IX. CONVOLUTION MASKS:

A mask may be used for many different purposes, including filtering operations and noise reduction. Noise and edges produce higher frequencies in the spectrum of a signal. It is possible to create masks that behave like a low-pass filter, such that the higher frequencies of an image are attenuated while the lower frequencies are not changed very much, thereby reducing the noise [21].
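A minimal sketch of such a mask, assuming SciPy is available: the 3×3 averaging mask acts as a low-pass filter, attenuating high-frequency content (noise, sharp transitions) while leaving slowly varying regions largely unchanged:

```python
import numpy as np
from scipy.ndimage import convolve

image = np.random.randint(0, 256, (6, 6)).astype(float)   # noisy test image

mask = np.ones((3, 3)) / 9.0            # 3x3 averaging (low-pass) mask
smoothed = convolve(image, mask, mode='reflect')

print(image.std(), smoothed.std())      # the smoothed image varies less
```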

X. EDGE DETECTION:

Edge detection is a general name for a class of routines and techniques that operate on an image and result in a line drawing of the image. The lines represent changes in values such as cross-sections of planes, intersections of planes, textures, lines, and colours, as well as differences in shading and texture. Some techniques are mathematically oriented, some are heuristic, and some are descriptive. All generally operate on the differences between the grey levels of pixels or groups of pixels through masks or thresholds. The final result is a line drawing or similar representation that requires much less memory to store, is much simpler to process, and saves in computation and storage costs. Edge detection is also necessary in subsequent processes, such as segmentation and object recognition. Without edge detection, it may be impossible to find overlapping parts, to calculate features such as diameter and area, or to determine parts by region growing [21].
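The text names no specific operator, so the sketch below uses the common Sobel masks, one gradient-based technique among those described, to turn a synthetic vertical edge into a binary line drawing:

```python
import numpy as np
from scipy.ndimage import sobel

# Synthetic image: dark left half, bright right half -> one vertical edge
image = np.zeros((8, 8))
image[:, 4:] = 255.0

gx = sobel(image, axis=1)               # gradient along x (columns)
gy = sobel(image, axis=0)               # gradient along y (rows)
magnitude = np.hypot(gx, gy)            # gradient magnitude per pixel

edges = magnitude > 0.5 * magnitude.max()   # keep only strong responses
print(edges.astype(int))                # ones along the boundary columns
```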

XI. IMAGE DATA COMPRESSION:

Electronic images contain large amounts of information and thus require data
transmission lines with large bandwidth capacity. The requirements for the temporal and
spatial resolution of an image, the number of images per second, and the number of grey
levels are determined by the required quality of the images. Recent data transmission and
storage techniques have significantly improved image transmission capabilities, including
transmission over the Internet [21].

3.4 MODELING:

For hybrid and digital imaging products, modeling is used to predict the performance of the entire system, including elements such as optics, image sensors, scanners, image processing operations, printers, emissive displays, capture and display media, and human visual responses. In a digital camera, for example, modification of only one component -- such as the lens, sensor, or color filter array -- can significantly impact image quality.

Below we present some applications of image processing in fields such as robotics, the medical field and common uses [22].

APPLICATIONS

 Computer vision
 Face detection
 Feature detection
 Lane departure warning system
 Non-photorealistic rendering
 Medical image processing
 Microscope image processing
 Morphological image processing
 Remote sensing
 Automated Sieving Procedures

APPLICATION 1:

Image processing is widely implemented in vision systems in robotics. Robots capture real-time images using cameras and process them to carry out the desired action. A simple application of vision systems in robotics is a robot hand-eye coordination system. Consider that the robot's task is to move an object from one point to another. The robot is fitted with cameras to view the object that is to be moved. The hand of the robot and the object to be grasped are observed by the cameras fixed to the robot; this real-time image is processed by image processing techniques to obtain the actual distance between the hand and the object. The base wheel of the robot's hand is then rotated through an angle proportional to this distance. A point on the target is obtained using the edge detection technique. The operation to be performed is controlled by a micro-controller, which is connected to the ports of the fingers of the robot's hand. Using software, the operations to be performed are assigned keys on the keyboard; by pressing the relevant key, the hand moves appropriately.
Here the use of sensors/cameras and the edge detection technique relates to image processing and vision systems. By this technique the complexity of using manual sensors is
minimized to a great extent and thereby sophistication is increased. Hence image processing
is used here in the study of robotics [22].

APPLICATION 2:

In the field of medicine, image processing is highly applicable in areas like medical imaging, scanning, ultrasound and X-rays. It is widely used for MRI scans (magnetic resonance imaging) and CT scans (computed tomography). Tomography is an imaging technique that generates an image of a thin cross-sectional slice of a test piece.

[Example images: bone scan, aortic angiogram, chest X-ray, thyroid scan, baby scan, and MRI scan of the knee]

ADVANTAGES:
 In medicine, the use of image processing techniques has increased sophistication, leading to technological advancement.
 Vision systems are flexible, inexpensive, powerful tools that can be used with ease.
 In space exploration, robots, which in turn use image processing techniques, play a vital role.
 Image processing is used for astronomical observations.

 Image processing is also used in remote sensing and in geological surveys for detecting mineral resources, etc.
 It is also used for character recognition techniques and for inspecting for abnormalities in industry.

DISADVANTAGES:
 A person needs knowledge in many fields to develop an application, or part of an application, using image processing.
 Calculations and computations are difficult and complicated, so an expert in the related field is needed. Hence it is unsuitable and unbeneficial for ordinary programmers with mediocre knowledge.

4. MEDICAL IMAGING

Medical imaging refers to the techniques and processes used to create images of the
human body (or parts and function thereof) for clinical purposes or medical science
(including the study of normal anatomy and physiology).

As a discipline and in its widest sense, it is part of biological imaging and


incorporates radiology (in the wider sense), nuclear medicine, investigative radiological
sciences, endoscopy, (medical) thermography, medical photography and microscopy (e.g. for
human pathological investigations). Measurement and recording techniques which are not
primarily designed to produce images, such as electroencephalography (EEG), magneto
encephalography (MEG), Electrocardiography (ECG) and others, but which produce data
susceptible to be represented as maps (i.e. containing positional information), can be seen as
forms of medical imaging [23].

4.1 Medical imaging modalities


Medical imaging systems detect different physical signals arising from a patient and
produce images. An imaging modality is an imaging system which uses a particular
technique.
Some of these modalities use ionizing radiation, radiation with sufficient energy to
ionize atoms and molecules within the body, and others use non-ionizing radiation. Ionizing
radiation in medical imaging comprises x-rays and γ-rays, both of which need to be used
prudently to avoid causing serious damage to the body and to its genetic material. Non-
ionizing radiation, on the other hand, does not have the potential to damage the body directly
and the risks associated with its use are considered to be very low. Examples of such
radiation are ultrasound, i.e. high-frequency sound, and radiofrequency waves [23].

Mammography:
X-ray mammography is one of the most challenging areas in medical imaging. It is
used to distinguish subtle differences in tissue type and detect very small objects, while
minimizing the absorbed dose to the breast. Since the various tissues comprising the breast
are radiologically similar, the dynamic range of mammograms is low. Special x-ray tubes
capable of operating at low tube voltages (25–30 kV) are used, because the attenuation of x-
rays by matter is greater and predominantly by photoelectric absorption at small x-ray
energies, resulting in a larger difference in attenuation between similar soft tissues and,

therefore, better subject contrast. However, the choice of x-ray energy is a compromise: too
low an energy results in insufficient penetration with more of the photons being absorbed in
the breast, resulting in a higher dose to the patient. Most modern x-ray units use molybdenum
targets, instead of the usual tungsten targets, to obtain an x-ray output with the majority of
photons in the 15–20 keV range. In order to detect micro calcifications, with diameters that
can be less than 0.1 mm, the spatial resolution of the imaging system needs to be optimized.
The target within the x-ray tube is angled so as to produce a small focal spot size (0.1–0.3 mm), and large focal spot-to-film distances (45–80 cm) reduce the effects of geometric unsharpness. Compression of the breast, normally to about 4 cm in thickness, reduces x-ray
scatter and ensures a more uniform exposure. Immobilization allows a shorter exposure time
which minimizes motion blurring. In film mammography, single-emulsion film, without an
intensifier screen, is used to minimize the detector contribution to unsharpness [23].

Fig. 15 normal and abnormal breast [23]


The contrast of the fluoroscopic image is limited by x-ray scatter in the patient, which can be
minimized by using smaller fields of view to limit the volume involved and by using anti-
scatter grids. If the fluoroscopic images are digitized, or if a CCD video camera is used, then
contrast can be adjusted within the computer system. The acquisition of digital fluoroscopic
images can be combined with injection of contrast material and real-time subtraction of pre-
and post-contrast images to perform examinations that are generally referred to as digital
subtraction angiography, DSA (Fig. 16). The result is an image of only the contrast material-
filled vessels (Fig. 17). Since the images were formed by detection of x-rays that had been
attenuated exponentially in the body, subtraction of pre- and post-contrast images must take

this exponential attenuation into account by subtracting, pixel by pixel, the logarithm of the
respective images: hence the log amplifier in Fig. 16.

Fig. 16 log amplifier


The process of subtracting two images has the unfortunate consequence of producing a
noisier subtracted image.
Consider subtracting two corresponding pixels:
 one from the mask (pre-contrast) image, resulting from a signal of 10 000 (±100)
photons
 one from the live (post-contrast) image, resulting from a signal of 9900 (±100)
photons.
The subtracted image has a pixel value corresponding to 100 ± 141 photons, i.e. the pixels are
subtracted, but the noise adds as the square root of the sum of the squares of the amplitudes.
The initial two images with signal-to-noise ratios of 100 and 99, respectively, result in a
subtracted image with a signal-to-noise ratio of only 0.7! In the normal course of events this
would be unacceptable. However, subtracted angiographic images, although noisy, are useful
because they make the small differences between the two original images, pre- and post-
contrast, very noticeable or conspicuous and the small contrast-laden vessels are easily seen.
They are said to have high conspicuity, rather like the spot-the-difference pictures in popular
magazines [23].
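The arithmetic in this example is easy to verify; the short sketch below reproduces the quoted numbers (noise adding in quadrature, and the resulting signal-to-noise ratio):

```python
import math

mask, live = 10_000.0, 9_900.0           # photon counts in the two images
noise_mask, noise_live = 100.0, 100.0    # one standard deviation each

signal = mask - live                             # 100 photons
noise = math.hypot(noise_mask, noise_live)       # sqrt(100^2 + 100^2) ~ 141
print(signal, round(noise), round(signal / noise, 1))   # 100 141 0.7
```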

Fig. 17 [23]
This pixel shifting tends to be a trial-and-error process, involving a combination of shifts in
different directions and by differing amounts. Motion artifacts can be a significant problem in
cardiac studies, resulting from the involuntary motion of the soft tissues.

Fig. 18 [23]
CT imaging:

CT imaging is the primary digital technique for imaging the chest, lungs, abdomen and bones
due to its ability to combine fast data acquisition and high resolution, and is ideally suited to
three-dimensional reconstruction. It is particularly useful in the detection of pulmonary (i.e.
lung) disease, because the lungs are difficult to image using ultrasound and MRI. It is often used to diagnose diffuse diseases of the lung such as emphysema, and cystic fibrosis, which involves a sticky build-up of mucus in the lungs and leads to irreversible dilation of the airways [23].

Fig. 19 [23]

A "stack" of CT images of the brain can show hydrocephalus, in which excessive accumulation of cerebrospinal fluid (CSF) results in an abnormal dilation of the ventricles (spaces) in the brain, causing potentially harmful pressure on the brain tissues. Moving through the stack identifies the slices which show enlarged ventricles.

PET imaging:
Positron emission tomography, PET, is the most recent nuclear medicine imaging technique: in common with the others, it measures physiological function (e.g. perfusion, metabolism) rather than gross anatomy. A positron-emitting radioisotope with a short half-life (such as carbon-11, ¹¹C, about 20 min; nitrogen-13, ¹³N, about 10 min; oxygen-15, ¹⁵O, about 2 min; or fluorine-18, ¹⁸F, about 110 min) is incorporated into a metabolically active
compounds are known as radiotracers. When a positron, i.e. a positively charged electron, is
emitted within a patient, it travels up to several millimetres while losing its kinetic energy.
When the slowly moving positron encounters an electron, they spontaneously disappear and
their rest masses are converted into two 511 keV annihilation (gamma ray) photons, which
propagate away from the annihilation site in opposite directions [23].

Fig. 20

The patient is surrounded by multiple rings of gamma photon detectors, so that no


detector rotation is required. Positron emission tomography, PET, is distinct from single-

photon emission computed tomography, SPECT, in that two γ-ray photons are produced at
the same time. The output of detectors on opposite sides of the PET scanner is analyzed by a
coincidence detector, which only counts events that are simultaneous to within a user-set time
window (2–20 ns); this ensures that only the 511 keV photons are counted. Simultaneous
triggering reveals the line of sight of the two photons, and the original positron-emitting
radiopharmaceutical must be somewhere along that line. The intersection of many such lines
delineates the distribution of the pharmaceutical. PET images have a higher signal-to-noise ratio and better spatial resolution (~2 mm) than planar scintigraphy and SPECT images. However, PET systems are much more expensive, and cyclotrons are required to produce the short-lived positron-emitting isotopes.
and universities are capable of maintaining such systems, and most clinical PET is supported
by third-party suppliers of radiotracers which can supply many sites simultaneously. This
limitation restricts clinical PET primarily to the use of radiotracers labeled with fluorine-18
(T½ ≈ 110 minutes), which can be transported a reasonable distance before use, or to
rubidium-82 (T½ ≈ 75 seconds), which can be created in a portable generator and is used for
myocardial perfusion studies. To facilitate the process of correlating structural and functional
information, scanners that combine x-ray CT and radionuclide imaging, either SPECT or
PET, have been developed. These dual-modality systems use separate detectors for x-ray and
radionuclide imaging, with the detectors integrated on a common gantry. Because the two
scans can be performed in immediate sequence during the same session, with the patient not
changing position between the two types of scans, the two sets of images are more precisely
registered. In the fused image the radionuclide distribution can be displayed in color on a
gray-scale CT image to co-register the anatomical and physiological features and thereby
improve evaluation of disease [23].

Doppler imaging:
Blood velocity measurements are essential in calculating cardiac output and diagnosing stenosis (narrowing) of the arteries. The Doppler effect can be used to determine blood velocity and interlace this information with B-mode scanning, in a so-called duplex scan.
The Doppler effect is familiar in the form of the increased frequency of a moving sound source, such as a train whistle or police siren, as it approaches, and the reduced frequency as it passes by. The relative change in frequency, Δf/f, depends on the velocity of the sound emitter, V, relative to the speed of sound in air, Vs. Thus:

Δf/f = ±V/Vs

where the ± refers to the sound source traveling towards (+) or away from (−) the receiver.

Fig. 21

Fig. 22

Doppler measurements are usually displayed as a time series of spectral Doppler plots
(Fig. 22); there is no spatial (i.e. depth) information.
Ultrasound:
There is a wide range of applications of ultrasound imaging, as a result of its non-invasive, non-ionizing nature and its ability to form real-time axial and three-dimensional images.
The tissues of interest need to reflect sufficient ultrasound energy; this limits the
method to soft tissues, fluids and small calcifications preferably close to the surface of the
body and unobstructed by bony structures.
Ultrasound is most commonly employed in examinations of the abdomen and pelvis.

In obstetrics, fetal head size and fetal length are used as measures of fetal maturity and health, while spinal morphology can be used to detect the presence of abnormalities such as spina bifida. Doppler imaging can be used to measure fetal blood velocity and cardiac function [23].

Fig. 23 [23]
Ultrasound imaging can be used to complement x-ray mammography in the diagnosis of
breast cancer (Fig. 23). It can help determine whether a lump is a fluid-filled cyst or a solid mass, and is particularly useful in women with dense breast tissue and in young women, because their tissue is relatively opaque to x-rays.

Magnetic resonance imaging


Magnetic resonance imaging (MRI) is a non-ionizing technique that uses radiofrequency (200 MHz–2 GHz) electromagnetic radiation and large magnetic fields (around 1–2 tesla (T), compared with the Earth's magnetic field of about 0.5 × 10⁻⁴ T). The large magnetic fields are produced by superconducting magnets, in which current is passed through coils of superconducting wire whose electrical resistance is virtually zero.
MRI images provide anatomical and physiological details, i.e. structure and function,
with full three-dimensional capabilities, excellent soft tissue visualization, and high spatial
resolution (~1mm). Like x-ray CT, it is a tomographic imaging modality. Image
reconstruction, while conceptually equivalent to that in CT, is obtained from the raw signals
collected in frequency space. With sufficient slice images, the image data is practically three-
dimensional and it is possible to reconstruct the data in different two-dimensional planes .
Scans last several minutes, rather than a few seconds as in x-ray CT, so that patient motion
can be a problem [23].

Furthermore, MRI scanners are several times as costly as CT scanners because of the expensive superconducting magnet required.

Nuclear magnetic resonance


MRI imaging is based on nuclear magnetic resonance (NMR). Nuclei are composed of
nucleons, either neutrons or protons. Nuclei with unpaired nucleons behave like small
magnets, with an associated magnetic moment. The hydrogen nucleus, a single proton, is of
particular importance in MRI imaging because of its abundance in biological tissue, and all
current MRI scanners use the proton signal [23].

5. Overview of implicit active contours
The implicit active contour, or level set, approach was introduced by Osher and Sethian and has since been enhanced by several authors. What follows is an easy-to-understand, high-level description of the level set method. The basic idea is to start with a closed curve in two dimensions (or a surface in three dimensions) and allow the curve to move perpendicular to itself at a prescribed speed. One way of describing this curve is by using an explicit parametric form, which is the approach used in snakes. As mentioned earlier, this causes difficulties when the curves have to undergo splitting or merging during their evolution to the desired shape. To address this difficulty, the implicit active contour approach, instead of explicitly following the moving interface itself, takes the original interface and embeds it in a higher-dimensional scalar function defined over the entire image. The interface is now represented implicitly as the zero-th level set (or contour) of this scalar function. Over the rest of the image space, this level set function Ф is defined as the signed distance function from the zero-th level set. Specifically, given a closed curve C0, the function is zero if the pixel lies on the curve itself; otherwise, it is the signed minimum distance from the pixel to the curve. By convention, the distance is regarded as negative for pixels inside C0 and positive for pixels outside C0. The function Ф, which varies with space and time (that is, Ф = Ф(x, y; t) in two dimensions), is then evolved using a partial differential equation (PDE) containing terms that are either hyperbolic or parabolic in nature. In order to illustrate the origins of this PDE, we next consider the evolution of the function Ф in a direction normal to itself with a known speed F. Here, the normal is oriented with respect to an outside and an inside. Since the evolving front is the zero level set (i.e., a contour with value 0) of this function, we require (using a one-dimensional example) [24]

Ф(x(t), t) = 0

for any point x(t) on the zero level set at time t. Using the chain rule, we have

∂Ф/∂t + ∇Ф(x(t), t) · x′(t) = 0

Since F is the speed in the direction n normal to the curve, we have

x′(t) · n = F

Thus, the evolution of Ф can be written as

∂Ф/∂t + F ||∇Ф|| = 0        (7)

where Ф(x, t = 0), that is, the curve at time t = 0, is given. This formulation enables us to handle topological changes, as the zero level set need not be a single curve but can easily split and/or merge as t advances. Equation (7) can be solved using appropriate finite difference approximations for the spatial and temporal derivatives, treating the image pixels as a discrete grid in the x–y domain with uniform mesh spacing. In order to evolve the level set, we need the specification of an initial closed curve (or curves), the initialization of the signed distance function Ф over the rest of the image, the finite difference discretization of Equation (7), and the prescription of the propagation speed F. We next discuss each of these issues in detail [24].

5.1 Choice of the speed function

The speed F depends on many factors, including local properties of the curve, such as the curvature, and global properties, such as the shape and position of the front. It can be used to control the front in several different ways. The original level set method proposed using F as the sum of two terms

F = F0 + F1(k)

where F0 is a constant propagation term and F1 is a scalar function of the curvature k.

5.2 The placement of the initial contour

A key challenge in implicit active contours is the placement of the initial contour. Since the
contour moves either inward or outward, its initial placement will determine the segmentation
that is obtained. For example, if there is a single object in an image, an initial contour placed
outside the object and propagated inward will segment the outer boundary of the object.
However, if the object has a hole in the middle, it will not be possible to obtain the boundary
of this hole unless the initial contour is placed inside the hole and propagated outward. It
should be noted that more than one closed curve can be used for initialization of the zero-th
level set [24].

5.3 Calculation of the distance function

Once the initial contour has been determined, we need to calculate the signed distance function Ф, that is, the minimum distance from each pixel in the image to the prescribed initial contour. This is done by solving the Eikonal equation, which is derived from the level set formulation as follows. Suppose the speed function F is greater than zero. As the front moves outward, one way to characterize its position is to compute the arrival time T(x, y) at which it crosses each point (x, y). This arrival function is related to the speed by

F ||∇T|| = 1

where T = 0 on the initial contour. When the speed F depends only on position, this equation is referred to as the Eikonal equation. Its solution for a constant speed of unity gives the distance function. A sign is attached to the function depending on the location of each pixel relative to the original contour. In our work, we used the fast sweeping method to solve the Eikonal equation, as described by Zhao [24].
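For the special case of unit speed, the distance function solving the Eikonal equation can also be obtained with a Euclidean distance transform. The sketch below uses SciPy's transform as a stand-in for the fast sweeping solver described in the text, with a hypothetical circular initial contour:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# A hypothetical circular initial contour C0 on a 100x100 pixel grid
Y, X = np.mgrid[0:100, 0:100]
inside = (X - 50) ** 2 + (Y - 50) ** 2 < 30 ** 2   # pixels inside C0

# Signed distance: negative inside C0, positive outside (Section 5 convention)
phi = distance_transform_edt(~inside) - distance_transform_edt(inside)

print(phi[50, 50])    # about -30: the centre is one radius inside the contour
print(phi[0, 0])      # positive: the corner lies well outside the contour
```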

5.4 The discretization of the level set equation


In order to evolve the level set, Equation (7) must be solved on the discrete grid represented by the image. Here, we borrow extensively from the work that has already been done on the solution of partial differential equations through techniques such as finite difference methods.

The finite difference approach essentially considers the discretized version of the image I(x, y) to correspond to the intensity at the pixels (i, j), at locations (xi, yj), where i = 1, …, N and j = 1, …, M. The distance between the centers of the pixels, referred to as the grid spacing, is h; the same inter-pixel distance h is used along the x and y dimensions. The spatial and temporal derivatives of Equation (7) are then approximated on this grid, following the approach of Sethian [24].
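A minimal sketch of this discretization, assuming a constant speed F > 0 and, for brevity, periodic boundary handling via np.roll (a real implementation would treat the image borders more carefully):

```python
import numpy as np

def evolve(phi, F, dt, h, steps):
    """First-order upwind scheme for the PDE dPhi/dt + F*||grad(Phi)|| = 0."""
    for _ in range(steps):
        # One-sided differences on the uniform grid with spacing h
        dxm = (phi - np.roll(phi, 1, axis=1)) / h    # backward in x
        dxp = (np.roll(phi, -1, axis=1) - phi) / h   # forward in x
        dym = (phi - np.roll(phi, 1, axis=0)) / h    # backward in y
        dyp = (np.roll(phi, -1, axis=0) - phi) / h   # forward in y

        # Upwind approximation of ||grad(Phi)|| for a front moving outward
        grad = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2 +
                       np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)
        phi = phi - dt * F * grad
    return phi
```

Starting from the signed distance function of Section 5.3, a call such as evolve(phi, F=1.0, dt=0.5, h=1.0, steps=10) moves the zero level set outward by roughly five pixels.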

5.5 Level Set Method
The level set method is a way to represent an active contour. For a given image u0, we use a level set function φ to describe the desired contour. The function φ has the same size as the image u0, which means that each pixel of u0 has a corresponding φ value [24]. We define the region where φ = 0 as the contour C, the region where φ > 0 as the inside of the contour, and the region where φ < 0 as the outside of the contour, as shown in Fig. 24.

Fig. 24: Using a level set function to represent a contour [24]

References
[1] B. Gold, C. M. Rader, Digital Processing of Signals, New York: McGraw-Hill, 1969.
[2] https://www.rpi.edu/dept/phys/ScIT/InformationTransfer/sigtransfer/signalcharacteristics.html
[3] https://learn.sparkfun.com/tutorials/analog-vs-digital/all
[4] https://ieeexplore.ieee.org/document/826412
[5] Digital Signal Processing with Examples in MATLAB®, Second Edition
[6] J. S. Small, "General-Purpose Electronic Analog Computing: 1945–1965", IEEE Annals
of the History of Computing, vol. 15, no. 2, pp. 8-18, 1993.
[7] Α. V. Oppenheim, R. W. Schafer, Digital Signal Processing, New Jersey:Prentice-Hall,
1975
[8] A. Arbel, "Transistorized analogue to digital converter", The Nuclear Electronics
Conference Organized by the International Atomic Energy Agency, 1961-May
[9] S. Westermann, R.E. Sandlin, "Digital Signal Processing: Benefits and
Expectations", Supplement To The Hearing Review, vol. 2, pp. 56-59, 1997
[10] R. H. Walden, "Analog-to-digital converter survey and analysis", IEEE J. Select. Areas
Commun., vol. 17, no. 4, pp. 539-550, Apr. 1999.
[11] https://www.polytechnichub.com/mean-adc-analog-digital-converter/
[12] D. S. Ruchkin, "Linear reconstruction of quantized and sampled random signals", IRE
Trans. Communications Systems, vol. CS-9, pp. 350-355, December 1961.
[13] https://www.slideserve.com/octavia/survey-of-quantization
[14] W. R. Bennett, "Spectrum of quantized signals", Bell Syst. Tech. J., vol. 27, pp. 446-
472, July 1948.
[15] C. E. Shannon, "Coding theorems for a discrete source with a fidelity criterion" in
Information and Decision Processes, New York:McGraw-Hill, pp. 93, 1960.
[16] https://www.tutorialspoint.com/linear_integrated_circuits_applications/linear_integrated_circuits_applications_digital_to_analog_converters.htm
[17] http://dspforyou.blogspot.com/2012/08/image-processing-types-of-images.html
[18] https://www.mathsisfun.com/data/analog-digital.html
[19] R. C. Gonzalez, R. E. Woods, Digital Image Processing, Addison-Wesley, 1993.
[20] F. Meyer, An overview of morphological segmentation. International Journal of Pattern
Recognition and Artificial Intelligence, vol. 15, no. 7, pp. 1089-1118, 2001.
[21] https://www.living-democracy.rs/en/textbooks/volume-1/part-2/unit-4/chapter-2/lesson-1-2/
[22] S. B. Niku, Introduction to Robotics: Analysis, Systems, Applications.
[23] https://typeset.io/formats/ieee/ieee-transactions-on-medical-imaging/333e3b8eea8c685d9bfd378530d469ff
[24] A. Levinshtein, C. Sminchisescu, S. J. Dickinson, "Optimal Contour Closure by Superpixel Grouping," in Proceedings of the 11th European Conference on Computer Vision, Heraklion, Crete, Greece, 2010, Part 2, pp. 480-493.

